I haven't exactly made it a secret that I use Pelican to write this blog. Sometimes I'm active, and sometimes I'm not. As Pelican is a static site generator based on templates and text files, I have been doing the build and deployment by hand. When I have been away for a while, I sometimes find that I have changed machines and have to spend time setting up my local environment just to release an article. It doesn't take long, but sometimes I just want to pound out an article and don't have a machine set up.
In the past, I had thought about setting up some sort of AWS Lambda function, Docker container, or even a dedicated server for auto-deployment... I just didn't want the hassle of operating yet another machine. I took a few minutes to investigate Bitbucket Pipelines and I am impressed! I have been using Bitbucket to control the files for this blog for years already, but now I can simply commit to `master` to build and deploy the site to the production AWS server, or commit to `test` to get a preview. Too easy! Why didn't I do this before!?
No need to go outside of your normal repository control... no external service required... and it comes with 50 minutes of build time each month. That should be more than enough to get your blog fix.
This step isn't strictly necessary, but it will ensure that your Pelican build environment is actually functional before you commit to Bitbucket and wait for it to build.
```
$ > virtualenv -p python3 venv
$ > venv/bin/pip install pelican markdown awscli
$ > venv/bin/pip freeze > requirements.txt
```
There is apparently a bit of a glitch with `pkg-resources` in Debian-based environments. It is safe to delete the line in `requirements.txt` that specifies `pkg-resources`.
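If you want to script that cleanup, a one-liner along these lines will strip it out (the `pkg-resources==0.0.0` entry is what Debian's pip typically emits in `pip freeze` output, so adjust the pattern if yours differs):

```
# drop the bogus pkg-resources line that Debian's pip adds to pip freeze output
$ > sed -i '/^pkg-resources==/d' requirements.txt
```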
Add any plugins to your repository and configuration files.
This isn't intended to show you how to use Pelican, but I generally execute `venv/bin/pelican -s pelicanconf.py` in order to build my site. My site is contained in `./output` when it is complete. We will see references to this path in the next step.
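For a quick sanity check before pushing, something like this builds the site and serves the output locally (the `http.server` preview on port 8000 is just a convenience I'm suggesting here, not part of the pipeline):

```
# build the site with the local virtualenv
$ > venv/bin/pelican -s pelicanconf.py

# serve ./output at http://localhost:8000 for a quick look
$ > cd output && python3 -m http.server 8000
```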
Now that I can build locally with relative paths in my configuration file, I can proceed...
You should have an S3 bucket set up on AWS. The bucket that hosts this site is called `forembed.com`. The bucket should be configured to host a static website. There are numerous guides to this, but basically, you have to:

- enable static website hosting on the bucket and choose an index (and error) document
- allow public read access so that visitors can fetch the pages
- point your domain at the bucket's website endpoint if you want a custom domain
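If you prefer the CLI to the console for that setup, the rough equivalent looks like this (the bucket name is mine, and the `error.html` document and the exact policy are just a minimal example for public website hosting):

```
# enable static website hosting on the bucket
$ > aws s3 website s3://forembed.com/ --index-document index.html --error-document error.html

# attach a minimal public-read bucket policy
$ > aws s3api put-bucket-policy --bucket forembed.com --policy '{
  "Version": "2012-10-17",
  "Statement": [{
    "Sid": "PublicReadGetObject",
    "Effect": "Allow",
    "Principal": "*",
    "Action": "s3:GetObject",
    "Resource": "arn:aws:s3:::forembed.com/*"
  }]
}'
```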
Here, you will create a 'user' that Bitbucket will use to interact with your AWS S3 bucket.

- Create a group named `deployment-s3`. Be sure to attach the `AmazonS3FullAccess` policy to the group.
- Create a user named `bitbucket` and add the user to the `deployment-s3` group (unless you named it something else). Be sure to tick the "Programmatic Access" box when prompted.
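The console walkthrough above is straightforward, but for reference, a rough CLI equivalent would be something along these lines (group and user names match the ones above; `AmazonS3FullAccess` is the AWS-managed policy):

```
# create the group and attach the managed S3 policy
$ > aws iam create-group --group-name deployment-s3
$ > aws iam attach-group-policy --group-name deployment-s3 \
      --policy-arn arn:aws:iam::aws:policy/AmazonS3FullAccess

# create the programmatic user and drop it into the group
$ > aws iam create-user --user-name bitbucket
$ > aws iam add-user-to-group --group-name deployment-s3 --user-name bitbucket

# generate the access key / secret that the pipeline will use
$ > aws iam create-access-key --user-name bitbucket
```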
There is a bit of a chicken-and-egg dilemma at this point. You have to enable Bitbucket Pipelines before you can set environment variables for pipelines, but your deployment won't work without those environment variables. Just enable Pipelines now so that you can set your environment variables here.
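The `aws` CLI reads its credentials from a standard set of environment variables, so those are the repository variables to add under the Pipelines settings (the values come from the access key created for the `bitbucket` user above; the region is just an example):

```
AWS_ACCESS_KEY_ID       # access key id for the 'bitbucket' user
AWS_SECRET_ACCESS_KEY   # matching secret key -- mark it as secured
AWS_DEFAULT_REGION      # e.g. us-east-1, the region your bucket lives in
```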
First, let's have a look at the `bitbucket-pipelines.yml` file:
```yaml
image: python:3.5.1

pipelines:
  # create a full build that may be accessed at
  # http://test-site-deployment.s3-website-us-east-1.amazonaws.com
  default:
    - step:
        name: Build and deploy to test server
        deployment: test
        script:
          # Modify the commands below to build your repository.
          - pip3 install -r requirements.txt
          - pelican -s pelicanconf.py
          - aws s3 sync --delete ./output s3://test-site-deployment/ --acl public-read
  branches:
    # commits to the master branch will deploy a new site at http://forembed.com
    master:
      - step:
          name: Build and deploy to production server
          deployment: production
          script:
            - pip3 install -r requirements.txt
            - pelican -s pelicanconf.py
            - aws s3 sync --delete ./output s3://forembed.com --acl public-read
```
The file is fairly readable for anyone who has seen a little YAML, but there are two nearly identical pipelines shown here: the `default` pipeline for all branches not named `master`, and one for `master`. Each runs three primary stages:

- `pip3 install -r requirements.txt` installs the build dependencies
- `pelican -s pelicanconf.py` builds the site into `./output`
- `aws s3 sync ...` pushes the output to the appropriate S3 bucket
You will need to specify `pip3` to use Python 3. If you have a script that you want to execute using the Python 3 environment, be sure to preface it with `python3`. I don't know why Python 2 was the default.
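As an illustration, if you had a helper script in the repository that needed to run as part of the build (the `scripts/update_feeds.py` name here is purely hypothetical), the corresponding line in the `script:` section would be prefaced like so:

```
- python3 scripts/update_feeds.py
```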
- `--delete` removes files from the bucket that are no longer present in the local output, so the bucket mirrors the build exactly
- `./output` specifies that only the `output` path should be uploaded, not the entire git repository
- `--acl public-read` sets the permissions so that the site is generally accessible
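If you want to see what the sync would actually do before the pipeline touches a real bucket, the same command accepts a `--dryrun` flag; run locally it looks like this (using the test bucket from the pipeline file):

```
$ > aws s3 sync --delete ./output s3://test-site-deployment/ --acl public-read --dryrun
```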
I will probably build some version of this into my other projects as well. Go Atlassian!
The AWS and Bitbucket documentation are pretty good, but I found a blog post particularly helpful when it came to deployment to S3.