[{"content":"Making Your macOS Terminal Beautiful A step-by-step guide to setting up iTerm2, Oh My Zsh, and Powerlevel10k\nWhat We\u0026rsquo;re Building By the end of this guide, you\u0026rsquo;ll have a terminal that looks great, works smart, and feels just as powerful as Windows Terminal — with tabs, profiles, autosuggestions, and a stunning prompt.\nThe stack:\niTerm2 — a better terminal emulator Oh My Zsh — a shell framework Powerlevel10k — a beautiful, informative prompt theme MesloLGS NF — a font with icons and symbols zsh-autosuggestions + zsh-syntax-highlighting — quality-of-life plugins Step 1: Install iTerm2 Go to https://iterm2.com and download the latest version. Drag it to your Applications folder and open it.\nThis is how iTerm2 looks immediately after installation: a clean default terminal window ready for theming. Step 2: Install Homebrew (if you don\u0026rsquo;t have it yet) Homebrew is the package manager for macOS. If you don\u0026rsquo;t have it yet, paste this in your terminal:\n/bin/bash -c \u0026#34;$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)\u0026#34; Follow the prompts. It may ask for your password and install Xcode Command Line Tools — that\u0026rsquo;s normal.\nVerify it works:\nbrew --version Step 3: Install Oh My Zsh Paste this into your terminal:\nsh -c \u0026#34;$(curl -fsSL https://raw.githubusercontent.com/ohmyzsh/ohmyzsh/master/tools/install.sh)\u0026#34; It will ask if you want to change your default shell to zsh — say yes. 
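If the shell switch did not stick (or you skipped it), you can change it later with chsh -s $(which zsh). As a small illustrative check of what the passwd database currently records, here is a Python sketch; the helper names are mine, not part of the installer:

```python
import os
import pwd

def login_shell() -> str:
    """Return the current user's login shell, as recorded in the passwd database."""
    return pwd.getpwuid(os.getuid()).pw_shell

def is_zsh(shell_path: str) -> bool:
    """True if the given shell path points at zsh (e.g. /bin/zsh)."""
    return os.path.basename(shell_path) == "zsh"
```

Running is_zsh(login_shell()) should report True once the change has taken effect (you may need to open a new terminal session first).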
After it finishes, your terminal will already look a little different.\nAfter installing Oh My Zsh, the shell shows its welcome banner and a refreshed prompt with the new zsh environment.\n![iTerm2 with Oh My Zsh welcome banner and updated prompt](/images/2026-04-05/iterm2_zsh.png) --- Step 4: Install the MesloLGS NF Font Powerlevel10k needs a special font to render icons and symbols correctly. Install it via Homebrew:\nbrew install --cask font-meslo-lg-nerd-font Then set it in iTerm2:\nOpen iTerm2 → Settings (or Cmd+,) Go to Profiles → Text Click the font dropdown and search for MesloLGS NF Set the size to something comfortable, like 13 or 14 The font settings panel in iTerm2 with MesloLGS NF selected, which enables Powerlevel10k icons and glyphs. Step 5: Install Powerlevel10k git clone --depth=1 https://github.com/romkatv/powerlevel10k.git \\ ${ZSH_CUSTOM:-$HOME/.oh-my-zsh/custom}/themes/powerlevel10k Now open your zsh config file:\nnano ~/.zshrc Find the line that says:\nZSH_THEME=\u0026#34;robbyrussell\u0026#34; And change it to:\nZSH_THEME=\u0026#34;powerlevel10k/powerlevel10k\u0026#34; Save with Ctrl+O, then Enter, then exit with Ctrl+X.\nApply the changes:\nsource ~/.zshrc The Powerlevel10k configuration wizard will launch automatically. Follow the prompts — it will ask about icons, prompt style, colors, etc. Take your time, you can always re-run it later with p10k configure.\nThe Powerlevel10k configuration wizard in action, letting you choose icons, prompt style, and colors interactively. This is the finished Powerlevel10k prompt, showing a compact, informative command line with git status and icons. Step 6: Install the Plugins zsh-autosuggestions This shows ghost-text suggestions based on your history as you type. 
Press → to accept a suggestion.\ngit clone https://github.com/zsh-users/zsh-autosuggestions \\ ${ZSH_CUSTOM:-~/.oh-my-zsh/custom}/plugins/zsh-autosuggestions zsh-syntax-highlighting This colors your commands green (valid) or red (invalid) as you type them — before you even hit Enter.\ngit clone https://github.com/zsh-users/zsh-syntax-highlighting.git \\ ${ZSH_CUSTOM:-~/.oh-my-zsh/custom}/plugins/zsh-syntax-highlighting Now enable both plugins. Open your config again:\nnano ~/.zshrc Find the line that says:\nplugins=(git) And change it to:\nplugins=(git zsh-autosuggestions zsh-syntax-highlighting) Save and apply:\nsource ~/.zshrc zsh-autosuggestions and zsh-syntax-highlighting in action: grey suggestion text for history completions and color-coded command validity.\nStep 7: Set Up Tabs and Profiles in iTerm2 This is where iTerm2 really shines over the default Terminal app. You can create different profiles for different environments — just like Windows Terminal.\nCreate a new profile Open iTerm2 → Settings → Profiles Click the + button at the bottom left Name it something like Local or Work Customize its color scheme under the Colors tab — try one of the presets like Solarized Dark or Tango Dark Open a new tab with a specific profile Press Cmd+T to open a new tab Or go to Profiles in the menu bar and click the profile you want A polished iTerm2 window with multiple tabs open so you can switch between shells, SSH sessions, or containers quickly.\nStep 8 (Optional): Connect to a Remote Server or Docker Container Just like Windows Terminal lets you open WSL alongside PowerShell, you can have a local shell in one tab and a remote environment in another.\nSSH into a server:\nssh user@your-server-ip Shell into a running Docker container:\ndocker exec -it your-container-name /bin/bash Just open these in a new tab (Cmd+T) and you\u0026rsquo;ve got multiple environments in one window.\nThe End Result You now have a terminal that:\n✅ Has tabs for multiple sessions ✅ Shows git 
branch and status in your prompt ✅ Autocompletes commands from your history ✅ Color-codes commands before you run them ✅ Supports multiple environments (local, SSH, Docker) in one window ✅ Looks 🔥 Bonus: Making It Look Great in VSCode Too If you open the integrated terminal in VSCode, you might notice the icons and prompt look broken or garbled. This is because VSCode uses its own font setting for the terminal, separate from iTerm2.\nThe fix is simple — just point VSCode to the same font.\nSet the font in VSCode Open VSCode → Settings (Cmd+,) Search for terminal font Find the setting called Terminal › Integrated: Font Family Set it to: MesloLGS NF The VSCode terminal font setting updated to MesloLGS NF, ensuring the integrated terminal displays Powerlevel10k icons correctly. Alternatively, you can add it directly to your settings.json:\n{ \u0026#34;terminal.integrated.fontFamily\u0026#34;: \u0026#34;MesloLGS NF\u0026#34; } To open settings.json directly, press Cmd+Shift+P, type Open User Settings JSON and hit Enter.\nThat\u0026rsquo;s it — your VSCode terminal will now render the same icons, colors, and prompt as iTerm2.\nUseful Shortcuts to Remember Shortcut Action Cmd+T New tab Cmd+D Split pane vertically Cmd+Shift+D Split pane horizontally Cmd+] / Cmd+[ Switch between panes Cmd+Number Jump to tab by number → arrow Accept autosuggestion p10k configure Re-run the Powerlevel10k wizard ","permalink":"https://robzah.com/posts/2026-04-05-make-your-macos-terminal-beautiful/","summary":"\u003cp\u003eWrite your post introduction here. 
This text appears on the homepage as an excerpt.\u003c/p\u003e\n\u003c!-- more --\u003e\n\u003ch1 id=\"making-your-macos-terminal-beautiful\"\u003eMaking Your macOS Terminal Beautiful\u003c/h1\u003e\n\u003cp\u003e\u003cem\u003eA step-by-step guide to setting up iTerm2, Oh My Zsh, and Powerlevel10k\u003c/em\u003e\u003c/p\u003e\n\u003chr\u003e\n\u003ch2 id=\"what-were-building\"\u003eWhat We\u0026rsquo;re Building\u003c/h2\u003e\n\u003cp\u003eBy the end of this guide, you\u0026rsquo;ll have a terminal that looks great, works smart, and feels just as powerful as Windows Terminal — with tabs, profiles, autosuggestions, and a stunning prompt.\u003c/p\u003e\n\u003cp\u003e\u003cstrong\u003eThe stack:\u003c/strong\u003e\u003c/p\u003e","title":"Make Your macOS Terminal Beautiful"},{"content":"Tired of manually configuring cloud resources for your Hugo blog on AWS? Infrastructure as Code (IaC) offers a better way to manage, version, and deploy your infrastructure reliably. In this post, I\u0026rsquo;ll explain what IaC is, compare the major tools available, and share why I chose Pulumi for my setup — complete with solutions for tricky scenarios like using a .nl domain purchased outside Route 53.\nManaging cloud infrastructure by hand is fun — right up until you forget which S3 bucket setting you changed, why CloudFront is suddenly caching the wrong thing, or which Lambda function you tweaked at 11 PM. Infrastructure as Code (IaC) solves all of that: your infrastructure becomes version-controlled, repeatable, and reviewable just like your application code.\nIn this post I\u0026rsquo;ll walk through what IaC is, which tools are available, why I landed on Pulumi for my Hugo blog on AWS, and how to handle one tricky real-world problem: using a .nl domain bought outside of Route 53.\nWhat Is Infrastructure as Code? 
Infrastructure as Code means describing your cloud resources (servers, buckets, DNS records, CDN distributions, functions, …) in files that a tool then applies to your cloud provider. Instead of clicking through the AWS console, you write a definition and let the tool figure out what needs to be created, changed, or deleted.\nThe benefits are concrete:\nReproducibility — rebuild your entire stack in a new account or region with one command. Version control — every change is a commit; roll back is a git revert. Review \u0026amp; collaboration — your infra changes go through the same pull-request process as your code. Auditability — you always know the current desired state of your infrastructure. Drift detection — if someone changes something in the console, the tool tells you. The Main IaC Tools There are several serious options, each with a different philosophy.\nTerraform / OpenTofu Terraform by HashiCorp is the most widely adopted IaC tool. It uses its own declarative language called HCL (HashiCorp Configuration Language). You describe what you want, and Terraform computes a plan to get there.\nPros:\nEnormous community and module ecosystem. Mature, battle-tested, works with every cloud. Clear separation between plan and apply. Cons:\nHCL is a domain-specific language — you need to learn it, and it has real limitations when you need loops, conditionals, or dynamic behaviour. Limited reuse patterns compared to general-purpose languages. HashiCorp\u0026rsquo;s license change led to the community fork OpenTofu, so you now need to decide which one to use. AWS CDK (Cloud Development Kit) AWS CDK lets you write infrastructure in TypeScript, Python, Java, or Go — but it ultimately synthesises to CloudFormation under the hood.\nPros:\nFull programming language support (Python included). High-level \u0026ldquo;constructs\u0026rdquo; that bundle multiple AWS resources together sensibly. First-class AWS support and deep integration. Cons:\nAWS-only. 
If you ever move even partially off AWS, you start over. CloudFormation underneath means you inherit its quirks, long deploy times, and stack limits. Error messages can be hard to trace through the CDK-to-CFN synthesis layer. Pulumi Pulumi is the new-generation IaC tool built around the idea that infrastructure is software. You write your infra in Python, TypeScript, Go, or .NET — real general-purpose languages with real package managers, testing frameworks, and IDEs.\nPros:\nPython support — if you already know Python, you can start immediately. No DSL to learn; use pip packages, write functions, loops, classes. Multi-cloud from day one (AWS, Azure, GCP, Cloudflare, etc.). Strong state management (Pulumi Cloud, or self-hosted with S3). Excellent AWS support via pulumi-aws. Cons:\nSmaller community than Terraform (though growing fast). The Pulumi Cloud (free tier) stores state remotely — you need to decide where your state lives. Ansible / CloudFormation / SAM These exist but are generally not the right tool for the job described here. CloudFormation is verbose JSON/YAML with sharp edges. Ansible is great for configuration management but awkward for cloud resource provisioning. AWS SAM is purpose-built for serverless but limited in scope.\nWhy I Chose Pulumi My stack is a Hugo static site on S3, served via CloudFront, with Lambda@Edge functions for custom page handling and Route 53 for DNS. That means I\u0026rsquo;m dealing with:\nAn S3 bucket with specific policies and static website config A CloudFront distribution with custom behaviours, cache policies, and origin access One or more Lambda@Edge functions (which must live in us-east-1) Route 53 hosted zones and records SSL certificates via ACM (also us-east-1 for CloudFront) Here is why Pulumi fits this exactly:\nPython without compromise. I already write Python. 
With Pulumi I can import pulumi_aws, define a function that creates a CloudFront distribution, pass it a config object, and call it from a loop if I want multiple environments. No HCL, no YAML soup.\nReal logic for real problems. Lambda@Edge functions must be deployed in us-east-1 regardless of where your main stack lives. In Pulumi this is trivial: create an aws.Provider pointing at us-east-1 and pass it to your Lambda resource. In Terraform it requires provider aliasing with extra wiring.\nMulti-provider in one stack. My domain is registered outside AWS (more on this below). Pulumi has providers for Cloudflare, Namecheap-compatible DNS, and many others. I can manage my AWS infrastructure and update DNS records at my external registrar in the same pulumi up run.\nTestability. Because it\u0026rsquo;s Python, I can unit test my infra logic with pytest. Pulumi even has a dedicated testing framework for mocking resource outputs.\nThe Domain Problem: .nl Domains and Route 53 Route 53 does not sell .nl domains. If you want a .nl domain, you need to buy it from a registrar that supports it — popular choices for Dutch domains are:\nHostnet (nl-based, very reliable) TransIP (nl-based, developer-friendly API) Cloudflare Registrar (at-cost pricing, excellent API, no .nl markup) Namecheap The good news is that Route 53 does not need to register your domain to manage DNS for it. The standard pattern is:\nBuy the domain at your preferred registrar (e.g., TransIP or Cloudflare). Create a Route 53 Hosted Zone for your domain. Route 53 gives you four nameservers. Point your domain\u0026rsquo;s nameservers at those four Route 53 nameservers, in your registrar\u0026rsquo;s control panel. Route 53 now handles all DNS for your domain, even though it was registered elsewhere. 
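The delegation steps above boil down to one piece of data: the four nameservers Route 53 hands you per hosted zone, which you paste into the registrar's panel once. A tiny illustrative helper (the function name and output format are mine) that renders that checklist:

```python
def registrar_ns_instructions(domain: str, nameservers: list[str]) -> str:
    """Render the one-time nameserver change to perform at the external registrar."""
    if len(nameservers) != 4:
        # Route 53 assigns exactly four NS hosts per hosted zone
        raise ValueError("expected the four Route 53 nameservers")
    lines = [f"At your registrar, set the nameservers for {domain} to:"]
    lines += [f"  {ns}" for ns in nameservers]
    return "\n".join(lines)
```

In a Pulumi program you would feed this the zone's name_servers output instead of hard-coded values.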
This is a one-time manual step (updating nameservers at your registrar), but everything after that — A records, AAAA records, CNAME, aliases to CloudFront — is managed in Pulumi via aws.route53.Record.\nSupporting Multiple Domains in Pulumi Because Pulumi is Python, supporting multiple domains is clean. You can define a list of domains and iterate:\nimport pulumi import pulumi_aws as aws config = pulumi.Config() domains = config.require_object(\u0026#34;domains\u0026#34;) # e.g. [\u0026#34;yourblog.nl\u0026#34;, \u0026#34;yourblog.com\u0026#34;] for domain in domains: zone = aws.route53.Zone(f\u0026#34;zone-{domain}\u0026#34;, name=domain) aws.route53.Record( f\u0026#34;alias-{domain}\u0026#34;, zone_id=zone.zone_id, name=domain, type=\u0026#34;A\u0026#34;, aliases=[aws.route53.RecordAliasArgs( name=cloudfront_distribution.domain_name, zone_id=cloudfront_distribution.hosted_zone_id, evaluate_target_health=False, )], ) You can store the domains list in Pulumi.\u0026lt;stack\u0026gt;.yaml or pass it as a secret, and Pulumi handles the rest. When you add a new domain, you add it to the list, run pulumi up, and get a new hosted zone with its nameserver records output — ready to paste into your external registrar.\nSSL Certificates for Multiple Domains CloudFront requires an ACM certificate in us-east-1. 
To cover multiple domains, you request a single certificate with Subject Alternative Names (SANs):\nus_east_provider = aws.Provider(\u0026#34;us-east-1\u0026#34;, region=\u0026#34;us-east-1\u0026#34;) cert = aws.acm.Certificate( \u0026#34;blog-cert\u0026#34;, domain_name=domains[0], subject_alternative_names=domains[1:], validation_method=\u0026#34;DNS\u0026#34;, opts=pulumi.ResourceOptions(provider=us_east_provider), ) Pulumi will output the DNS validation records you need to add to each hosted zone — and you can automate that too, creating the validation records automatically for the Route 53-managed domains.\nPutting It Together: The Stack at a Glance Here is the high-level picture of what Pulumi will manage for this blog:\nPulumi stack ├── S3 Bucket (private, OAC-enabled) │ └── Bucket Policy (allow CloudFront OAC only) ├── CloudFront Distribution │ ├── Origin → S3 (via Origin Access Control) │ ├── Cache Behaviours (Hugo pages, assets, API paths) │ └── Lambda@Edge (us-east-1 provider) │ └── viewer-request / origin-request functions ├── ACM Certificate (us-east-1 provider, SAN for all domains) └── Route 53 ├── Hosted Zone per domain ├── A Alias record → CloudFront └── ACM validation records Everything above lives in one Pulumi project, runs with pulumi up, and produces clear output showing which resources were created, changed, or deleted.\nNext Steps In the follow-up posts I will cover:\nSetting up the Pulumi project — project structure, state backend choice (Pulumi Cloud vs S3), and secrets management with pulumi.Config. The S3 + CloudFront module — bucket policies, Origin Access Control, and cache invalidation on deploy. Lambda@Edge in Python — writing, packaging, and deploying edge functions with the us-east-1 provider. Multi-domain DNS — full example with Route 53 hosted zones, ACM validation, and CloudFront aliases. CI/CD integration — running pulumi up from GitHub Actions on every push to main. 
If you are evaluating IaC tools for a similar AWS-hosted static site, I hope this gives you a clear picture of the landscape. Pulumi\u0026rsquo;s Python-native approach removes the biggest friction point — learning a new language — while giving you the full power of a real programming language to handle the edge cases that every non-trivial infrastructure eventually throws at you.\nReady to automate your Hugo blog\u0026rsquo;s AWS infrastructure? Give Pulumi a try and share your setup in the comments!\n","permalink":"https://robzah.com/posts/2026-04-03-infrastructure-as-code-for-your-hugo-blog-on-aws---why-i-chose-pulumi/","summary":"\u003cp\u003eTired of manually configuring cloud resources for your Hugo blog on AWS? Infrastructure as Code (IaC) offers a better way to manage, version, and deploy your infrastructure reliably. In this post, I\u0026rsquo;ll explain what IaC is, compare the major tools available, and share why I chose Pulumi for my setup — complete with solutions for tricky scenarios like using a .nl domain purchased outside Route 53.\u003c/p\u003e\n\u003c!-- more --\u003e\n\u003cp\u003eManaging cloud infrastructure by hand is fun — right up until you forget which S3 bucket setting you changed, why CloudFront is suddenly caching the wrong thing, or which Lambda function you tweaked at 11 PM. Infrastructure as Code (IaC) solves all of that: your infrastructure becomes version-controlled, repeatable, and reviewable just like your application code.\u003c/p\u003e","title":"Infrastructure as Code for Your Hugo Blog on AWS - Why I Chose Pulumi"},{"content":"A complete step‑by‑step tutorial on automating Hugo deployments to AWS S3 and CloudFront using GitHub Actions. 
No more manual uploads — just push your changes and let CI/CD handle the rest.\nHow to Auto‑Deploy a Hugo Blog to AWS Using GitHub Actions (Step‑by‑Step Guide) In the previous blog posts we did the following:\nCreated a blog post with Hugo Deployed the Hugo blog to AWS For background, see:\nCreate Your Own Hugo Blog Deploy Hugo to AWS S3 and CloudFront Every time we add a new blog post or make configuration changes, we need to manually build the website and run a script (or the commands manually) to deploy the application to AWS.\nThis works, but it’s not ideal. We want deployments to happen automatically whenever we push new content to GitHub.\nIn this post, we’ll set up a CI/CD pipeline using GitHub Actions that:\nBuilds the Hugo site Uploads it to S3 Invalidates the CloudFront cache Makes your blog live within seconds And we’ll do it step‑by‑step, with no missing pieces.\nPrerequisites Before starting, make sure:\nYour Hugo blog lives in a GitHub repository Your blog is already hosted on: S3 (static website hosting) CloudFront (CDN in front of S3) If you followed my previous posts, you already have this.\nCreating the IAM User (Step‑by‑Step) We need an IAM user that GitHub Actions can use to deploy your site.\nThis user should have minimal permissions.\n1. Create the IAM user In the AWS Console:\nOpen the AWS Console Search for IAM In the left menu, click Users Click Create user Enter a name:\nhugo-deploy-user Click Next Skip adding the user to a group Click Next again Click Create user 2. 
Create access keys Click the user you just created Go to the Security credentials tab Scroll to Access keys Click Create access key Choose Command Line Interface (CLI) Confirm and continue Copy: Access Key ID Secret Access Key You will need these for GitHub.\nAWS Best Practices (Why We Do It This Way) 🔐 Principle of Least Privilege We only give the IAM user the permissions it absolutely needs:\nUpload files to one S3 bucket Invalidate one CloudFront distribution This protects your AWS account.\n🪪 Separate IAM users for automation Never use your personal IAM user for CI/CD.\nAutomation users should be isolated and revocable.\n🧹 Scoped CloudFront permissions We explicitly restrict invalidations to a single distribution.\n🪣 Avoid wildcard S3 permissions We do NOT allow access to all buckets — only the one hosting your blog.\n🔑 Rotate access keys GitHub Actions supports multiple keys, so rotate them periodically.\nIAM Policies (Copy/Paste Ready) 1. S3 Upload Policy Replace YOUR_BUCKET_NAME:\n{ \u0026#34;Version\u0026#34;: \u0026#34;2012-10-17\u0026#34;, \u0026#34;Statement\u0026#34;: [ { \u0026#34;Sid\u0026#34;: \u0026#34;AllowS3Upload\u0026#34;, \u0026#34;Effect\u0026#34;: \u0026#34;Allow\u0026#34;, \u0026#34;Action\u0026#34;: [ \u0026#34;s3:PutObject\u0026#34;, \u0026#34;s3:DeleteObject\u0026#34;, \u0026#34;s3:ListBucket\u0026#34; ], \u0026#34;Resource\u0026#34;: [ \u0026#34;arn:aws:s3:::YOUR_BUCKET_NAME\u0026#34;, \u0026#34;arn:aws:s3:::YOUR_BUCKET_NAME/*\u0026#34; ] } ] } 2. 
CloudFront Invalidation Policy Replace YOUR_AWS_ACCOUNT_ID and YOUR_DISTRIBUTION_ID:\n{ \u0026#34;Version\u0026#34;: \u0026#34;2012-10-17\u0026#34;, \u0026#34;Statement\u0026#34;: [ { \u0026#34;Sid\u0026#34;: \u0026#34;AllowSpecificInvalidation\u0026#34;, \u0026#34;Effect\u0026#34;: \u0026#34;Allow\u0026#34;, \u0026#34;Action\u0026#34;: \u0026#34;cloudfront:CreateInvalidation\u0026#34;, \u0026#34;Resource\u0026#34;: \u0026#34;arn:aws:cloudfront::YOUR_AWS_ACCOUNT_ID:distribution/YOUR_DISTRIBUTION_ID\u0026#34; } ] } Where to attach these policies First create each JSON document above as a policy under IAM → Policies → Create policy (JSON editor). Then attach them to the user:\nGo to IAM → Users Click your user Go to Permissions Click Add permissions Choose Attach policies directly Search for the policies you created and select them Confirm to attach them Adding AWS Credentials to GitHub (Step‑by‑Step) This is where many tutorials skip steps — so here\u0026rsquo;s the full walkthrough.\n1. Open your GitHub repository Go to:\nhttps://github.com/\u0026lt;your-username\u0026gt;/\u0026lt;your-repo\u0026gt;\n2. Open the repository settings Top menu → Settings\n3. Open Secrets Left menu → Secrets and variables → Actions\n4. Add each secret Click New repository secret for each:\nAWS_ACCESS_KEY_ID → Your IAM access key\nAWS_SECRET_ACCESS_KEY → Your IAM secret key\nAWS_REGION → e.g. 
eu-west-1\nAWS_S3_BUCKET → Your bucket name\nAWS_CLOUDFRONT_DISTRIBUTION_ID → Your distribution ID\nWhere to find the CloudFront distribution ID AWS Console → CloudFront → Distributions\nLook in the ID column.\nGitHub Actions Workflow Explained GitHub Actions uses a special folder:\n.github/workflows/ GitHub automatically scans this folder for .yml files and runs them.\nThis is why the file must be placed there.\nCreate the workflow file Create:\n.github/workflows/deploy.yml Full workflow with comments name: Deploy Hugo site # Name of the workflow on: push: branches: [ \u0026#34;main\u0026#34; ] # Run this workflow whenever we push to main jobs: deploy: runs-on: ubuntu-latest # GitHub provides a Linux VM for the job steps: # Step 1: Download your repository code into the VM - name: Checkout code uses: actions/checkout@v4 # Step 2: Install Hugo so we can build the site - name: Install Hugo uses: peaceiris/actions-hugo@v3 with: hugo-version: \u0026#39;latest\u0026#39; # Step 3: Build the Hugo site (output goes to /public) - name: Build site run: hugo --minify # Step 4: Configure AWS credentials so the VM can access AWS - name: Configure AWS credentials uses: aws-actions/configure-aws-credentials@v4 with: aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }} aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }} aws-region: ${{ secrets.AWS_REGION }} # Step 5: Upload the built site to S3 - name: Sync to S3 run: aws s3 sync public/ s3://${{ secrets.AWS_S3_BUCKET }} --delete # Step 6: Invalidate CloudFront cache so changes go live immediately - name: Invalidate CloudFront cache run: | aws cloudfront create-invalidation --distribution-id ${{ secrets.AWS_CLOUDFRONT_DISTRIBUTION_ID }} --paths \u0026#34;/*\u0026#34; Deployment Flow Diagram ┌──────────────────────┐ │ GitHub Repository │ └──────────┬───────────┘ │ Push to main ▼ ┌──────────────────────┐ │ GitHub Actions │ │ (Build + Deploy) │ └──────────┬───────────┘ │ Builds Hugo site ▼ ┌──────────────────────┐ │ S3 Bucket 
│ │ (Static Website) │ └──────────┬───────────┘ │ Serves as origin ▼ ┌──────────────────────┐ │ CloudFront │ │ (Global CDN Cache) │ └──────────┬───────────┘ │ Invalidate cache ▼ ┌──────────────────────┐ │ Updated Blog │ │ robzah.com │ └──────────────────────┘ Explanation of each section name: This is just a label. It appears in the GitHub Actions UI.\non: Defines when the workflow runs. In this example it runs every time a commit lands on the main branch. That means it runs when you merge a feature branch into main, and also when you push directly to main.\njobs: A workflow can have multiple jobs. In this example we only have one (deploy), but you could add others, such as a \u0026ldquo;tests\u0026rdquo; job.\nruns-on: Defines the virtual machine type. GitHub provides several runner images, such as:\nubuntu-latest windows-latest macos-latest (ChatGPT tells me Ubuntu runners are the fastest and cheapest; I did not check this, so take it with a grain of salt).\nsteps: Each step runs in order.\nactions/checkout Downloads your code into the VM.\npeaceiris/actions-hugo Installs Hugo.\nhugo --minify Builds your site into the public/ folder.\naws-actions/configure-aws-credentials Logs into AWS using your GitHub secrets.\naws s3 sync Uploads your site to S3.\naws cloudfront create-invalidation Clears the CDN cache so your new blog post appears instantly.\nFinal Thoughts With this setup:\nYou write a blog post Commit and push GitHub builds and deploys automatically No manual S3 uploads.\nNo CloudFront clicking.\nNo forgetting to rebuild the site.\nJust clean, automated deployments — exactly how a Hugo blog should work.\n","permalink":"https://robzah.com/posts/2026-03-28-how-to-auto-deploy-a-hugo-blog-to-aws-using-github-actions--step-by-step-guide-/","summary":"\u003cp\u003eA complete step‑by‑step tutorial on automating Hugo deployments to AWS S3 and CloudFront using GitHub Actions. 
No more manual uploads — just push your changes and let CI/CD handle the rest.\u003c/p\u003e\n\u003c!-- more --\u003e\n\u003ch1 id=\"how-to-autodeploy-a-hugo-blog-to-aws-using-github-actions-stepbystep-guide\"\u003eHow to Auto‑Deploy a Hugo Blog to AWS Using GitHub Actions (Step‑by‑Step Guide)\u003c/h1\u003e\n\u003cp\u003eIn the previous blog posts we did the following:\u003c/p\u003e\n\u003col\u003e\n\u003cli\u003eCreated a blog post with Hugo\u003c/li\u003e\n\u003cli\u003eDeployed the Hugo blog to AWS\u003c/li\u003e\n\u003c/ol\u003e\n\u003cp\u003eFor background, see:\u003c/p\u003e","title":"How to Auto‑Deploy a Hugo Blog to AWS Using GitHub Actions (Step‑by‑Step Guide)"},{"content":"Deploy your Hugo blog to AWS S3 and CloudFront for a secure, globally-distributed website. This complete guide covers architecture diagrams, security best practices, and automation to get your blog live with a global CDN.\nPrerequisites This post is the second part of a series on building and hosting a blog with Hugo. Before diving into deployment, make sure you\u0026rsquo;ve completed the first post:\n📖 Create Your Own Hugo Blog - Learn how to install Hugo, set up a site, install PaperMod, create posts, and configure metadata.\nThis post assumes you have a working Hugo blog running locally and are ready to take it live!\nNow that you have a beautiful Hugo blog, it\u0026rsquo;s time to deploy it! In this post, I\u0026rsquo;ll walk you through how to host your Hugo website on AWS S3 and use CloudFront for a blazing-fast CDN. The best part? You can use AWS\u0026rsquo;s free tier to get all of this for free!\nSolution Architecture Overview Before we dive into the technical setup, let\u0026rsquo;s understand what we\u0026rsquo;re building from a solution architecture perspective. 
This is a serverless static website hosting solution—no servers to manage, auto-scaling included, and highly available globally.\nArchitecture Diagram Users Worldwide | v ┌─────────────┐ │ CloudFront │ │ CDN │ └──────┬──────┘ | ┌────────────────┼────────────────┐ | | | (cached) (cache miss) (pretty URLs) | | | v v v ┌─────────┐ ┌────────────────┐ ┌──────────┐ │ Edge │ │ Origin Access │ │CloudFront│ │Locations│ │ Control │ │Functions │ │(99 PoP) │ │ (OAC) │ │(URL rewrite) └─────────┘ └────────┬───────┘ └──────────┘ | v ┌─────────────────┐ │ S3 Bucket │ │ (Private) │ │ ┌─────────────┐ │ │ │ index.html │ │ │ │ posts/ ... │ │ │ │ css/js/ ... │ │ │ └─────────────┘ │ └─────────────────┘ ^ | (upload via) | ┌─────────────────┐ │ Your Computer │ │ (AWS CLI) │ │ (IAM User) │ └─────────────────┘ What We\u0026rsquo;re Building This architecture solves several problems:\nComponent Purpose Benefit Hugo (Static Generator) Converts Markdown files to HTML Fast, secure, version-controllable content S3 Bucket Stores static HTML/CSS/JS files Cheap, durable, scalable content repository CloudFront CDN Caches and serves from 100+ edge locations worldwide Lightning-fast load times globally, built-in DDoS protection Origin Access Control (OAC) Restricts S3 access to CloudFront only No public bucket access, maximum security CloudFront Functions Rewrites URLs (/posts/my-post/ → /posts/my-post/index.html) Pretty URLs without server-side logic IAM User (Least Privilege) Credentials to deploy updates Minimal attack surface, controlled permissions Data Flow Development: You write Markdown posts locally Build: Hugo generates static HTML files in public/ Deploy: AWS CLI uploads files to private S3 bucket Cache Invalidation: CloudFront cache refreshed to serve latest content Edge Delivery: Users download from nearest CloudFront edge location Pretty URLs: CloudFront function rewrites requests on the fly Why This Architecture? 
Cost: Free tier covers small blogs; minimal cost for traffic Performance: Global CDN with ~99 edge locations = \u0026lt;100ms latency worldwide Security: No database, no server exploitation risks, HTTPS enforced Reliability: S3 durability is 99.999999999%, CloudFront has redundancy Scalability: Auto-scales to millions of requests without configuration Maintenance: No patching, no server management, fully managed services Setting up AWS First, you\u0026rsquo;ll need an AWS account. Head over to AWS and create one if you don\u0026rsquo;t have it already. AWS gives you a free tier for the first 12 months, which includes S3 storage and CloudFront bandwidth.\nCreating an S3 Bucket Log into the AWS Console and navigate to S3. Create a new bucket:\nClick \u0026ldquo;Create bucket\u0026rdquo; Give it a name (e.g., my-awesome-blog). Bucket names must be unique across AWS, so choose something specific to you. Keep \u0026ldquo;Block all public access\u0026rdquo; enabled (this is the default and is secure) Enable \u0026ldquo;Bucket Versioning\u0026rdquo; (allows recovery if something goes wrong) Click \u0026ldquo;Create\u0026rdquo; Your S3 bucket is private by default—only CloudFront (via Origin Access Control) can read from it. This is the correct security posture.\nImportant: Do NOT make your bucket public. All traffic should flow through CloudFront.\nCreating an IAM User Instead of using your main AWS account credentials (which is a security risk), create an IAM user with limited permissions only for your specific bucket and CloudFront distribution. 
Follow the principle of least privilege—give it only what it needs.\nGo to IAM in the AWS Console Click \u0026ldquo;Users\u0026rdquo; and then \u0026ldquo;Add user\u0026rdquo; Give it a name (e.g., hugo-deployer) Select \u0026ldquo;Access key - Programmatic access\u0026rdquo; Skip attaching managed policies and instead create a custom policy (see below) Click through and create the user Creating a Custom Policy After creating the user, attach this custom policy (replace your-bucket-name and DISTRIBUTION-ID with your actual values):\n{ \u0026#34;Version\u0026#34;: \u0026#34;2012-10-17\u0026#34;, \u0026#34;Statement\u0026#34;: [ { \u0026#34;Effect\u0026#34;: \u0026#34;Allow\u0026#34;, \u0026#34;Action\u0026#34;: [ \u0026#34;s3:GetObject\u0026#34;, \u0026#34;s3:PutObject\u0026#34;, \u0026#34;s3:DeleteObject\u0026#34;, \u0026#34;s3:ListBucket\u0026#34; ], \u0026#34;Resource\u0026#34;: [ \u0026#34;arn:aws:s3:::your-bucket-name\u0026#34;, \u0026#34;arn:aws:s3:::your-bucket-name/*\u0026#34; ] }, { \u0026#34;Effect\u0026#34;: \u0026#34;Allow\u0026#34;, \u0026#34;Action\u0026#34;: [ \u0026#34;cloudfront:CreateInvalidation\u0026#34; ], \u0026#34;Resource\u0026#34;: \u0026#34;arn:aws:cloudfront::YOUR-AWS-ACCOUNT-ID:distribution/DISTRIBUTION-ID\u0026#34; } ] } To find your AWS Account ID, go to IAM \u0026gt; Account settings. This policy only allows:\nS3 operations on your specific bucket only CloudFront invalidations on your specific distribution only Nothing else Important: Download your Access Key and Secret Access Key. 
Store them securely (never commit them to git!)\nInstalling AWS CLI Install the AWS CLI on your machine:\nMacOS brew install awscli Linux sudo apt install awscli Windows choco install awscliv2 Configuring AWS Credentials Once installed, configure your credentials:\naws configure You\u0026rsquo;ll be prompted for:\nAWS Access Key ID (from the IAM user you created) AWS Secret Access Key (from the IAM user you created) Default region (e.g., us-east-1) Default output format (just press enter) Your credentials will be stored in ~/.aws/credentials (on Linux/Mac) or %USERPROFILE%\\.aws\\credentials (on Windows).\nSecurity tip: Never commit your .aws/credentials file to git. Add it to .gitignore if you haven\u0026rsquo;t already. Also rotate your access keys every 90 days for better security.\nBuilding and Uploading Your Site Now the exciting part! First, build your Hugo site:\nhugo This generates all the static files in the public/ folder. Then, sync these files to your S3 bucket (replace your-bucket-name with your actual bucket name):\naws s3 sync public/ s3://your-bucket-name This command uploads all files from your public/ folder to your S3 bucket. You can run this command every time you want to deploy new changes!\nSetting Up CloudFront CloudFront is a Content Delivery Network (CDN) that caches your website across the globe, making it super fast for users everywhere. 
And it\u0026rsquo;s free!\nGo to CloudFront in the AWS Console Click \u0026ldquo;Create distribution\u0026rdquo; For \u0026ldquo;Origin domain\u0026rdquo;, select your S3 bucket Important: Under \u0026ldquo;S3 access\u0026rdquo;, select \u0026ldquo;Yes, use Origin Access Control (OAC)\u0026rdquo; This ensures only CloudFront can access your private S3 bucket AWS will show you a bucket policy to add—copy and paste it into your S3 bucket policy Scroll down and ensure \u0026ldquo;Viewer protocol policy\u0026rdquo; is set to \u0026ldquo;Redirect HTTP to HTTPS\u0026rdquo; (HTTPS only) Check that \u0026ldquo;Compress objects automatically\u0026rdquo; is enabled (makes your site faster) Click \u0026ldquo;Create distribution\u0026rdquo; CloudFront will generate a domain for you (e.g., d12345abcde.cloudfront.net) and an SSL certificate automatically. This is your website URL!\nCloudFront Function for Pretty URLs One problem: Hugo creates posts in folders like /posts/my-post/index.html, but users might access /posts/my-post/ without the index.html. 
To fix this, create a CloudFront function:\nIn CloudFront, click \u0026ldquo;Functions\u0026rdquo; in the left menu Click \u0026ldquo;Create function\u0026rdquo; Name it something like hugo-pretty-urls Paste this code: function handler(event) { var request = event.request; var uri = request.uri; // If URI ends with \u0026#39;/\u0026#39;, add index.html if (uri.endsWith(\u0026#39;/\u0026#39;)) { request.uri += \u0026#39;index.html\u0026#39;; } // If URI has no extension, add /index.html else if (!uri.includes(\u0026#39;.\u0026#39;)) { request.uri += \u0026#39;/index.html\u0026#39;; } return request; } Click \u0026ldquo;Publish\u0026rdquo; Go back to your distribution Click \u0026ldquo;Edit\u0026rdquo; and scroll to \u0026ldquo;Function associations\u0026rdquo; Under \u0026ldquo;Viewer request\u0026rdquo;, select \u0026ldquo;CloudFront Functions\u0026rdquo; Select the function you just created Click \u0026ldquo;Save changes\u0026rdquo; Now all your pretty URLs will work perfectly!\nInvalidating CloudFront Cache Here\u0026rsquo;s an important part: every time you upload new files to S3, CloudFront still serves the cached old version for a while. 
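One aside before fixing that: a CloudFront function is plain JavaScript, so you can sanity-check the rewrite logic from the previous section locally with Node before publishing it. Here is a minimal sketch of that check; the event object is reduced to just the request.uri field the logic uses (real CloudFront events carry more fields):

```javascript
// Same rewrite logic as the CloudFront function above.
function handler(event) {
  var request = event.request;
  var uri = request.uri;
  // If URI ends with '/', append index.html
  if (uri.endsWith('/')) {
    request.uri += 'index.html';
  }
  // If URI has no extension, append /index.html
  else if (!uri.includes('.')) {
    request.uri += '/index.html';
  }
  return request;
}

// Exercise it with a few representative URIs.
const cases = ['/posts/my-post/', '/about', '/css/main.css'];
for (const uri of cases) {
  const out = handler({ request: { uri } });
  console.log(uri, '->', out.uri);
}
```

If `/posts/my-post/` becomes `/posts/my-post/index.html`, `/about` becomes `/about/index.html`, and `/css/main.css` passes through unchanged, the function behaves as intended for Hugo's folder layout.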
To make your changes appear instantly, you need to invalidate the CloudFront cache.\nFind your CloudFront Distribution ID:\nGo to CloudFront in the AWS Console Click your distribution Copy the Distribution ID (looks like E1ABCDEFGH123) Then run this command after each deployment (replace YOUR_DISTRIBUTION_ID with your actual ID):\naws cloudfront create-invalidation --distribution-id YOUR_DISTRIBUTION_ID --paths \u0026#34;/*\u0026#34; This tells CloudFront to refresh all files (/*), so your latest changes appear immediately.\nCustom Domain (Optional) If you have a custom domain, you can point it to CloudFront:\nRequest a certificate for your domain in AWS Certificate Manager (it must be in the us-east-1 region to work with CloudFront) In CloudFront, edit your distribution, add your domain under \u0026ldquo;Alternate domain names\u0026rdquo;, and attach the ACM certificate In Route 53, create an alias A record pointing to your CloudFront distribution (with other DNS providers, use a CNAME) Your site will now be accessible at https://yourdomain.com with automatic HTTPS!\nAutomating Deployments Every time you update your blog, you need to:\nBuild your site Upload files to S3 Invalidate CloudFront cache Instead of running three commands, create a deployment script. Save this as deploy.sh:\n#!/bin/bash set -e BUCKET_NAME=\u0026#34;your-bucket-name\u0026#34; DISTRIBUTION_ID=\u0026#34;YOUR_DISTRIBUTION_ID\u0026#34; echo \u0026#34;Building Hugo site...\u0026#34; hugo echo \u0026#34;Uploading to S3...\u0026#34; aws s3 sync public/ s3://$BUCKET_NAME echo \u0026#34;Invalidating CloudFront cache...\u0026#34; aws cloudfront create-invalidation --distribution-id $DISTRIBUTION_ID --paths \u0026#34;/*\u0026#34; echo \u0026#34;✓ Deployment complete! 
Changes live in ~30 seconds.\u0026#34; Make it executable:\nchmod +x deploy.sh Now deploy with one command:\n./deploy.sh If you prefer a one-liner without a script:\nhugo \u0026amp;\u0026amp; aws s3 sync public/ s3://your-bucket-name \u0026amp;\u0026amp; aws cloudfront create-invalidation --distribution-id YOUR_DISTRIBUTION_ID --paths \u0026#34;/*\u0026#34; Summary You now have a blazingly fast, globally distributed blog hosted on AWS, all for free! Here\u0026rsquo;s what we accomplished:\nCreated a private S3 bucket with versioning enabled Set up an IAM user with least-privilege custom policies Configured AWS CLI for easy uploads Set up CloudFront with Origin Access Control (OAC) Enforced HTTPS encryption for all traffic Created a CloudFront function to handle pretty URLs Set up cache invalidation to make changes appear instantly Automated the entire deployment process with a script AWS Best Practices Summary Here\u0026rsquo;s what makes this setup secure and resilient:\nLeast Privilege IAM Policy - The IAM user can only access your specific S3 bucket and CloudFront distribution, nothing else Private S3 Bucket - Your S3 bucket is not publicly accessible; only CloudFront can read from it via Origin Access Control HTTPS Enforced - All traffic is encrypted via CloudFront\u0026rsquo;s automatic SSL certificates Versioning Enabled - You can recover from accidental deletions or corrupted uploads Access Logs - CloudFront can be configured to log all requests for security audits Short-lived Access Keys - Rotate your IAM access keys every 90 days (never hardcode secrets in scripts!) Your Hugo blog is now live on the internet with enterprise-grade AWS security! Pretty cool, right?\n","permalink":"https://robzah.com/posts/2026-03-23-deploy-hugo-to-aws-s3-cloudfront/","summary":"\u003cp\u003eDeploy your Hugo blog to AWS S3 and CloudFront for a secure, globally-distributed website. 
This complete guide covers architecture diagrams, security best practices, and automation to get your blog live with a global CDN.\u003c/p\u003e\n\u003c!-- more --\u003e\n\u003ch2 id=\"prerequisites\"\u003ePrerequisites\u003c/h2\u003e\n\u003cp\u003eThis post is the \u003cstrong\u003esecond part\u003c/strong\u003e of a series on building and hosting a blog with Hugo. Before diving into deployment, make sure you\u0026rsquo;ve completed the first post:\u003c/p\u003e\n\u003cp\u003e📖 \u003cstrong\u003e\u003ca href=\"/posts/create-your-own-hugo-blog/\"\u003eCreate Your Own Hugo Blog\u003c/a\u003e\u003c/strong\u003e - Learn how to install Hugo, set up a site, install PaperMod, create posts, and configure metadata.\u003c/p\u003e","title":"Deploy Hugo to AWS S3 and CloudFront"},{"content":"Learn how to create a fast, secure static blog with Hugo and PaperMod. This step-by-step guide covers installation, theme setup, creating posts, and how to add rich metadata to improve your blog\u0026rsquo;s SEO and appearance.\nFor some time I\u0026rsquo;ve been looking into ways to create my own blog, but also to learn from the process. As I have a note-taking app (Joplin) that stores Markdown files in AWS S3, I was looking for a blog that is based on Markdown files.\nI\u0026rsquo;ll spare you the whole research process; my conclusion was to use Hugo. It\u0026rsquo;s also the tool behind the blog you\u0026rsquo;re reading right now.\nWhat is Hugo? Hugo is a static website generator written in Go. Instead of running a traditional CMS with a database (like WordPress), Hugo generates a fully static website from Markdown files.\nYou write Markdown files, and Hugo generates static HTML/CSS/JS files that you can host wherever you want.\nIt\u0026rsquo;s one of the simplest ways to create a website and host it as cheaply as possible (because it\u0026rsquo;s static). 
This means:\nNo database Extremely fast websites Very secure (because there is no backend) Easy to version control with Git Perfect for developers who like writing in Markdown Perfect for backend developers who want a website without the hassle of building frontends You simply write blog posts in Markdown, run a build command, and Hugo generates a complete website.\nIn the following sections I\u0026rsquo;ll give a brief overview of how to use Hugo and how to create your website.\nInstalling Hugo The easiest way to install Hugo depends on your operating system. After installing, run:\nhugo version You should see something like:\nhugo v0.xx.x macOS brew install hugo Linux sudo apt install hugo Windows choco install hugo Creating a new Hugo site With Hugo, it\u0026rsquo;s very easy to create a new website; just run:\nhugo new site my-blog This will create a folder structure with everything you need to start building with Hugo. Just cd into the folder, and now you can run:\nhugo server This will spin up a local server; go to http://localhost:1313 to check out your website!\nOf course, the default site looks \u0026ldquo;boring\u0026rdquo;, so we want to install a theme and make Hugo look the way we want. Therefore, in the next section we will install the PaperMod theme, which I will use as an example for our blog. If you want another theme, just visit https://themes.gohugo.io to check out all themes. I recommend reading each theme\u0026rsquo;s documentation carefully, as installation and settings can differ.\nInstalling and using PaperMod To make your blog look great, you’ll want to use a theme. 
I chose PaperMod because it’s clean, fast, and easy to customize.\nTo install PaperMod, first navigate to your Hugo site folder (the one you created earlier):\ncd my-blog Then add the theme as a Git submodule (recommended, so you can easily update it later):\ngit init git submodule add https://github.com/adityatelange/hugo-PaperMod.git themes/PaperMod Now, open your config.toml (or config.yaml/config.json depending on your preference) and set the theme:\ntheme = \u0026#34;PaperMod\u0026#34; PaperMod comes with lots of options. You can customize the look, enable features like search, and tweak the layout. Check the PaperMod documentation for all the details.\nCreating your first post Let’s create your first blog post! Run:\nhugo new posts/my-first-post.md This will create a Markdown file in the content/posts/ directory. Open it in your favorite editor, add a title, some content, and save.\nTo see your changes, just run:\nhugo server Visit http://localhost:1313 in your browser. You should see your new post live!\nDeploying your blog Once you’re happy with your blog, you’ll want to put it online. Hugo generates static files in the public/ folder. You can host these files anywhere: GitHub Pages, Netlify, AWS S3, or any static hosting provider.\nFor example, to build your site:\nhugo Then upload the contents of the public/ folder to your hosting provider.\nFinal thoughts Building a blog with Hugo is a great way to learn about static sites, Markdown, and web publishing. It’s fast, secure, and gives you full control over your content. Plus, you can always customize or extend it as your needs grow.\nHappy blogging!\nUnderstanding hugo.yaml The hugo.yaml file is the main configuration file for your Hugo site. It controls your website’s title, base URL, language, theme, menus, and much more. 
Changing values here will immediately affect how your site looks and behaves.\nFor example, here’s a basic hugo.yaml:\nbaseURL: https://robzah.com/ languageCode: en-us title: Robert Zaharia theme: [\u0026#34;PaperMod\u0026#34;] You can add menus, enable features, and configure your theme directly in this file. If you want to change your site’s title, just update the title field. To add a new menu item, edit the menu section.\nAdding PaperMod-specific options PaperMod supports many options to customize your blog’s appearance and features. You can add these options to your hugo.yaml under the params section. Here’s an example with some popular PaperMod settings:\nparams: homeInfoParams: Title: \u0026#34;Welcome to My Blog\u0026#34; Content: \u0026#34;This is my personal blog built with Hugo and PaperMod.\u0026#34; socialIcons: - name: github url: \u0026#34;https://github.com/yourusername\u0026#34; - name: twitter url: \u0026#34;https://twitter.com/yourusername\u0026#34; ShowReadingTime: true ShowShareButtons: true ShowCodeCopyButtons: true ShowToc: true You can find all available options in the PaperMod documentation. Try enabling features like reading time, table of contents, or social icons to make your blog more engaging.\nAdding and improving blog post metadata Each blog post in Hugo starts with a section called \u0026ldquo;front matter\u0026rdquo; at the top of the Markdown file. This is where you add metadata about your post, such as the title, date, tags, and more. 
Good metadata helps readers and search engines understand your content.\nHere’s an example of a rich front matter block:\n--- title: \u0026#34;How to Use Hugo Effectively\u0026#34; date: 2026-03-23T10:00:00+01:00 draft: false tags: [\u0026#34;hugo\u0026#34;, \u0026#34;static site\u0026#34;, \u0026#34;tutorial\u0026#34;] categories: [\u0026#34;Web Development\u0026#34;] description: \u0026#34;A step-by-step guide to building a blog with Hugo.\u0026#34; author: \u0026#34;Robert Zaharia\u0026#34; cover: image: \u0026#34;/images/hugo-cover.jpg\u0026#34; alt: \u0026#34;Hugo static site generator logo\u0026#34; caption: \u0026#34;Build fast, modern blogs with Hugo.\u0026#34; ShowToc: true ShowReadingTime: true --- Tips to make your posts better:\nAdd a description for SEO and social sharing. Use tags and categories to organize your content. Add a cover image for visual appeal. Set ShowToc and ShowReadingTime to true for better navigation and user experience (if your theme supports it). Include an author field if you have multiple contributors. The more metadata you add, the richer and more discoverable your blog becomes!\nWhat\u0026rsquo;s Next? Now that you have a beautiful blog running locally, it\u0026rsquo;s time to share it with the world! Check out the next post in this series:\n🚀 Deploy Hugo to AWS S3 and CloudFront - Learn how to deploy your blog to AWS using S3 for storage and CloudFront for global CDN, all for free with the AWS free tier!\nThis guide covers:\nSetting up AWS S3 bucket and CloudFront Configuring secure IAM credentials Automating deployments with a script Following AWS best practices for security Your blog will be live and globally distributed in minutes!\n","permalink":"https://robzah.com/posts/2026-03-22-create-your-own-hugo-blog/","summary":"\u003cp\u003eLearn how to create a fast, secure static blog with Hugo and PaperMod. 
This step-by-step guide covers installation, theme setup, creating posts, and how to add rich metadata to improve your blog\u0026rsquo;s SEO and appearance.\u003c/p\u003e\n\u003c!-- more --\u003e\n\u003cp\u003eFor some time I\u0026rsquo;ve been looking into ways to create my own blog, but also to learn from the process. As I have a note-taking app (Joplin) that stores Markdown files in AWS S3, I was looking for a blog that is based on Markdown files.\u003c/p\u003e","title":"Create Your Own Hugo Blog"},{"content":"Set up Joplin with AWS S3 for a secure, affordable note-taking system that syncs across all your devices (macOS, Windows, iOS, Android). This step-by-step guide shows you exactly how to configure everything without the magic—all the steps included.\nFor some time I wanted to have a central place where I can store my notes. The main problem: I wanted it to be as cheap as possible, AND available on my phone (iOS), MacBook (macOS) and Windows (my work laptop).\nI wanted a Markdown-based writing setup that:\nWorks on macOS, Windows, iOS, Android Syncs through infrastructure I control Is cheap (almost free) Can later be automated into a Hugo blog This guide shows you exactly how to set up Joplin + AWS S3 sync,\nincluding how to properly test everything using the AWS CLI.\nNo magic. 
No skipped steps.\nStep 1 — Install Joplin Desktop Go to https://joplinapp.org Click Download Install the desktop version for your OS Open Joplin (Optional: Install on iOS or Android later.)\nStep 2 — Create an S3 Bucket Go to https://console.aws.amazon.com Search for S3 Click Create bucket Configure:\nBucket name: joplin-notes-yourname Region: Choose one near you (example: eu-central-1) Block Public Access: Leave enabled Bucket Versioning: Disable Default encryption: Enable (AES-256) Click Create bucket.\nStep 3 — Create an IAM User Search for IAM in the AWS Console Click Users Click Create user Username: joplin-sync-user Click Next.\nAttach Permissions Click Create policy → switch to JSON tab → paste:\n{ \u0026#34;Version\u0026#34;: \u0026#34;2012-10-17\u0026#34;, \u0026#34;Statement\u0026#34;: [ { \u0026#34;Effect\u0026#34;: \u0026#34;Allow\u0026#34;, \u0026#34;Action\u0026#34;: [ \u0026#34;s3:ListBucket\u0026#34;, \u0026#34;s3:GetObject\u0026#34;, \u0026#34;s3:PutObject\u0026#34;, \u0026#34;s3:DeleteObject\u0026#34; ], \u0026#34;Resource\u0026#34;: [ \u0026#34;arn:aws:s3:::joplin-notes-yourname\u0026#34;, \u0026#34;arn:aws:s3:::joplin-notes-yourname/*\u0026#34; ] } ] } ⚠ Replace joplin-notes-yourname with your actual bucket name.\nCreate the policy and attach it to the user.\nStep 4 — Create Access Keys Open the created user Go to Security credentials Click Create access key Choose: Application running outside AWS Copy:\nAccess Key ID Secret Access Key You will not see the secret again.\nStep 5 — Install AWS CLI Check if installed:\naws --version If not installed (macOS):\nbrew install awscli Or download from:\nhttps://aws.amazon.com/cli/\nStep 6 — Configure AWS CLI aws configure Enter:\nAWS Access Key ID AWS Secret Access Key Default region (example: eu-central-1) Output format: json Step 7 — Test S3 Access List bucket aws s3 ls s3://joplin-notes-yourname Upload test file echo \u0026#34;test\u0026#34; \u0026gt; test.txt aws s3 cp test.txt 
s3://joplin-notes-yourname/test.txt Download test file aws s3 cp s3://joplin-notes-yourname/test.txt downloaded.txt cat downloaded.txt Delete test file aws s3 rm s3://joplin-notes-yourname/test.txt If all commands work, your AWS setup is correct.\nOnly then continue.\nStep 8 — Configure Joplin Open Joplin → Settings → Synchronisation\nSet:\nTarget: Amazon S3 Bucket: joplin-notes-yourname Region: eu-central-1 Access Key: your key Secret Key: your secret Endpoint: https://s3.eu-central-1.amazonaws.com Force path style: Enable Click Check synchronisation configuration.\nIf OK → Click Synchronise.\nStep 9 — Enable Encryption (Recommended) Go to:\nSettings → Encryption → Enable\nNow your notes are encrypted before being uploaded to S3.\nConclusion You now have:\nMarkdown-based notes AWS-controlled sync IAM-secured access A foundation for Hugo automation ","permalink":"https://robzah.com/posts/2026-03-15-sync-joplin-with-aws-s3/","summary":"\u003cp\u003eSet up Joplin with AWS S3 for a secure, affordable note-taking system that syncs across all your devices (macOS, Windows, iOS, Android). This step-by-step guide shows you exactly how to configure everything without the magic—all the steps included.\u003c/p\u003e\n\u003c!-- more --\u003e\n\u003cp\u003eFor some time I wanted to have a central place where I can store my notes. The main problem: I wanted it to be as cheap as possible, AND available on my phone (iOS), MacBook (macOS) and Windows (my work laptop).\u003c/p\u003e","title":"Sync Joplin with AWS S3 (Step-by-Step Tutorial)"}]