Using Amazon S3 as a Hosted Yum Repository

Amazon Simple Storage Service (S3) has many use cases. As you may already know, one of those is hosting static websites. We can use this same feature to host RPMs in the form of a Yum repository, which eliminates the need to run a dedicated server just to answer Yum requests.

The following four steps detail how to host a working Yum repository on S3. We will cover preparing the S3 bucket, installing the required packages, creating GPG keys, signing RPMs, and finally syncing it all up to S3. Feel free to skip any steps you already have covered. This tutorial was carried out on a recent Amazon Linux AMI and should work on EL6-type systems as well.

1. Preparing your S3 bucket:

First, create an S3 bucket. In this example, we will use the S3 bucket name “joeuser-rpm”.
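You can create the bucket in the S3 console, or with the AWS CLI:
$ aws s3 mb s3://joeuser-rpm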

Then, create an S3 bucket policy that grants read access to the objects in your bucket. You can use the AWS Policy Generator to build it; the settings used in this example produce the S3 bucket policy below:
{
  "Id": "Policy1418237148978",
  "Statement": [
    {
      "Sid": "Stmt1418237146582",
      "Action": [
        "s3:GetObject"
      ],
      "Effect": "Allow",
      "Resource": "arn:aws:s3:::joeuser-rpm/*",
      "Principal": {
        "AWS": [
          "*"
        ]
      }
    }
  ]
}

You can enter the policy by opening the bucket within the S3 console and clicking on Permissions and Edit bucket policy. Paste your policy into the form and save.
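Alternatively, you can apply the policy with the AWS CLI. This assumes you have saved the policy locally as policy.json (a filename of your choosing):
$ aws s3api put-bucket-policy --bucket joeuser-rpm --policy file://policy.json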

Finally, set up a static site on the S3 bucket. This can be done by opening the bucket within the S3 console and clicking on Static Website Hosting. Click on the radio button Enable Website Hosting and enter an index document value (e.g. index.htm). This file does not need to exist and in this example was never created. Alternatively, you can use the AWS CLI to do the same thing:
$ aws s3 website s3://joeuser-rpm --index-document index.htm

If you have an existing Yum repo elsewhere, you can simply sync its files to your S3 bucket and change the baseurl value to the DNS name of your S3 bucket's website endpoint (or a CNAME pointing to it), as shown below. The remaining steps are for folks signing packages and creating a Yum repo for the first time.
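For example, assuming the existing repo lives at /var/www/yumrepo (a hypothetical path):
$ aws s3 sync /var/www/yumrepo s3://joeuser-rpm/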

2. Prepare your server for signing RPMs and creating a Yum repo:

You must have a server or workstation available on which to sign your packages and create the Yum repo (index and metadata). The following steps cover this process.

First, install the necessary packages:
$ sudo yum -y groupinstall "Development Tools"
$ sudo yum -y install python-pip createrepo pinentry rpm-sign expect
$ sudo pip install awscli --upgrade
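If the machine does not have an IAM instance role attached, configure your AWS credentials before using the CLI:
$ aws configure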

Generate a GPG keypair:
$ gpg --gen-key
gpg (GnuPG) 2.0.25; Copyright (C) 2013 Free Software Foundation, Inc.
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.

gpg: directory `/home/ec2-user/.gnupg' created
gpg: new configuration file `/home/ec2-user/.gnupg/gpg.conf' created
gpg: WARNING: options in `/home/ec2-user/.gnupg/gpg.conf' are not yet active during this run
gpg: keyring `/home/ec2-user/.gnupg/secring.gpg' created
gpg: keyring `/home/ec2-user/.gnupg/pubring.gpg' created
Please select what kind of key you want:
(1) RSA and RSA (default)
(2) DSA and Elgamal
(3) DSA (sign only)
(4) RSA (sign only)
Your selection?
RSA keys may be between 1024 and 4096 bits long.
What keysize do you want? (2048)
Requested keysize is 2048 bits
Please specify how long the key should be valid.
0 = key does not expire
<n> = key expires in n days
<n>w = key expires in n weeks
<n>m = key expires in n months
<n>y = key expires in n years
Key is valid for? (0)
Key does not expire at all
Is this correct? (y/N) y

GnuPG needs to construct a user ID to identify your key.

Real name: Joe User
Email address: [email protected]
Comment: Key used for signing RPMs

You selected this USER-ID:
"Joe User (Key used for signing RPMs) <[email protected]>"

Change (N)ame, (C)omment, (E)mail or (O)kay/(Q)uit? O
You need a Passphrase to protect your secret key.

We need to generate a lot of random bytes. It is a good idea to perform
some other action (type on the keyboard, move the mouse, utilize the
disks) during the prime generation; this gives the random number
generator a better chance to gain enough entropy.
We need to generate a lot of random bytes. It is a good idea to perform
some other action (type on the keyboard, move the mouse, utilize the
disks) during the prime generation; this gives the random number
generator a better chance to gain enough entropy.
gpg: /home/ec2-user/.gnupg/trustdb.gpg: trustdb created
gpg: key 78BD20E1 marked as ultimately trusted
public and secret key created and signed.

gpg: checking the trustdb
gpg: 3 marginal(s) needed, 1 complete(s) needed, PGP trust model
gpg: depth: 0 valid: 1 signed: 0 trust: 0-, 0q, 0n, 0m, 0f, 1u
pub 2048R/78BD20E1 2014-12-10
Key fingerprint = 8EBA CC24 A36D 5985 4A32 8EC2 B5B7 434F 78BD 20E1
uid [ultimate] Joe User (Key used for signing RPMs) <[email protected]>
sub 2048R/E70BEDC4 2014-12-10
This gives us the ID of our key pair: 78BD20E1.
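If you ever need to look the key ID up again, list the keys in your keyring:
$ gpg --list-keys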

Run the following commands to export the GPG key pair you just created:
$ gpg --output RPM-GPG-KEY-joeuser --armor --export 78BD20E1
$ gpg --output RPM-GPG-KEY-joeuser.private --armor --export-secret-key 78BD20E1

You can see that we’re using the same key ID retrieved in the previous step.

(Optional) *If you move this environment onto a different system, you’ll have to import your GPG keys:
$ gpg --import ~/RPM-GPG-KEY-joeuser
$ gpg --allow-secret-key-import --import ~/RPM-GPG-KEY-joeuser.private

*Back up the private key in a secure, preferably offline location. The private key should only live on a system which is used for signing RPMs.

Import the public GPG key into your RPM keyring:
$ sudo rpm --import RPM-GPG-KEY-joeuser

After importing the public GPG key, run the following command to verify the key has been imported correctly:
$ rpm -q gpg-pubkey --qf '%{name}-%{version}-%{release} --> %{summary}\n'
gpg-pubkey-21c0f39f-4e41dbdc --> gpg(Amazon Linux AMI (GA) <[email protected]>)
gpg-pubkey-78bd20e1-54887527 --> gpg(Joe User (Key used for signing RPMs) <[email protected]>)

Create the file ~/.rpmmacros with the following contents, replacing the gpg_name with your own:
%_signature gpg
%_gpg_name Joe User (Key used for signing RPMs) <[email protected]>
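You can confirm that rpm picks up the macro:
$ rpm --eval '%_gpg_name'
Joe User (Key used for signing RPMs) <[email protected]>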

(Optional) *If you prefer to enter your passphrase manually, skip this step. Create an executable script named ~/sign_rpm.sh with the following contents:
#!/usr/bin/expect -f

spawn rpm --addsign {*}$argv
expect -exact "Enter pass phrase: "
send -- "testing123\r"
expect eof
Be sure to replace the passphrase for your key with your own, keeping the \r intact. This script adds a signature to an RPM, automatically entering the passphrase when prompted. Skip the script and sign manually if you prefer not to keep the passphrase in clear text on the file system.
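To sign manually, run rpm --addsign directly and type the passphrase at the prompt:
$ rpm --addsign your-package.rpm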

Create an executable script called ~/update_repo.sh with the following contents:
#!/usr/bin/env bash

# Build the repo index and metadata for each architecture
YUMREPO_PATH="/home/ec2-user/yumrepo"
S3_BUCKET="joeuser-rpm"

cd "$YUMREPO_PATH"
for arch in x86_64 i386 noarch SRPMS
do
    createrepo --deltas "$arch"
done

# Sync to S3 (sync is recursive by default; --delete removes remote files
# that no longer exist locally)
aws s3 sync --delete "$YUMREPO_PATH" s3://"$S3_BUCKET"/
Be sure to replace the yum repo path and S3 bucket variables as needed.
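One thing the script does not handle: the .repo files in step 4 reference the public GPG key at the root of the bucket, so copy it up once as well:
$ aws s3 cp ~/RPM-GPG-KEY-joeuser s3://joeuser-rpm/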

Prepare the directory structure for your RPMs:
$ mkdir -p ~/yumrepo/{noarch,i386,x86_64,SRPMS}
You will be moving your RPMs into these respective directories soon.

3. Sign your RPMs and upload to S3:

You are now prepared to sign RPMs, create the repo index, and upload to S3. Note that this example uses the python-pip RPM grabbed from EPEL. More often than not, your RPMs will be proprietary software, your own RPM creations, or specific versions of packages you simply want to hold onto.
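If you would like to follow along with the same package, one way to fetch an RPM without installing it is yumdownloader, which ships in the yum-utils package:
$ sudo yum -y install yum-utils
$ yumdownloader python-pip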

Run your sign_rpm.sh script against an RPM file:
$ ./sign_rpm.sh python-pip-1.3.1-4.el6.noarch.rpm
spawn rpm --addsign python-pip-1.3.1-4.el6.noarch.rpm
Enter pass phrase:
Pass phrase is good.
python-pip-1.3.1-4.el6.noarch.rpm:
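(Optional) You can confirm the signature took with rpm --checksig:
$ rpm --checksig python-pip-1.3.1-4.el6.noarch.rpm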

Now that your RPM is signed, move it into the appropriate directory. Since python-pip is noarch, it will be moved into ~/yumrepo/noarch/:
$ mv python-pip-1.3.1-4.el6.noarch.rpm yumrepo/noarch/

Now that your RPM has been placed in the correct directory, you can run update_repo.sh, which will build your RPM index and metadata:
$ ./update_repo.sh

4. Adding your new .repo and installing a package:

On a remote machine (or the machine you’re testing from), create the file /etc/yum.repos.d/joeuser.repo with the following contents:
[joeuser-noarch]
name=Joe User's Repo
baseurl=http://joeuser-rpm.s3-website-us-east-1.amazonaws.com/noarch/
enabled=1
gpgkey=http://joeuser-rpm.s3-website-us-east-1.amazonaws.com/RPM-GPG-KEY-joeuser
gpgcheck=1

[joeuser-i386]
name=Joe User's Repo
baseurl=http://joeuser-rpm.s3-website-us-east-1.amazonaws.com/i386/
enabled=1
gpgkey=http://joeuser-rpm.s3-website-us-east-1.amazonaws.com/RPM-GPG-KEY-joeuser
gpgcheck=1

[joeuser-x86_64]
name=Joe User's Repo
baseurl=http://joeuser-rpm.s3-website-us-east-1.amazonaws.com/x86_64/
enabled=1
gpgkey=http://joeuser-rpm.s3-website-us-east-1.amazonaws.com/RPM-GPG-KEY-joeuser
gpgcheck=1

[joeuser-SRPMS]
name=Joe User's Repo
baseurl=http://joeuser-rpm.s3-website-us-east-1.amazonaws.com/SRPMS/
enabled=1
gpgkey=http://joeuser-rpm.s3-website-us-east-1.amazonaws.com/RPM-GPG-KEY-joeuser
gpgcheck=1

Now that the yum repo file has been added, run the following commands to update the yum cache and list packages available on your new repo:
$ sudo yum makecache
$ sudo yum --disablerepo="*" --enablerepo="joeuser*" list available

Loaded plugins: priorities, update-motd, upgrade-helper
joeuser-SRPMS | 3.3 kB 00:00
joeuser-SRPMS/primary_db | 1.1 kB 00:00
joeuser-i386 | 3.3 kB 00:00
joeuser-i386/primary_db | 1.1 kB 00:00
joeuser-noarch | 3.3 kB 00:00
joeuser-noarch/primary_db | 1.9 kB 00:00
joeuser-x86_64 | 3.3 kB 00:00
joeuser-x86_64/primary_db | 1.1 kB 00:00
Available Packages
python-pip.noarch 1.3.1-4.el6 joeuser-noarch
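Finally, install the package from your new repo:
$ sudo yum --disablerepo="*" --enablerepo="joeuser*" install python-pip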

Congratulations! You now have a functional Yum repository hosted on a highly available and resilient file store. 🙂