
docker-ci 0.5.6: Fully dockerize docker-ci. Add build test coverage. Add backup builder.

Docker-DCO-1.1-Signed-off-by: Daniel Mizyrycki <daniel@docker.com> (github: mzdaniel)
Daniel Mizyrycki 2014-02-14 20:50:16 -08:00
parent 382659e03a
commit b7db2d5f80
33 changed files with 463 additions and 650 deletions


@ -1,56 +0,0 @@
docker-ci
=========
docker-ci is our buildbot continuous integration server,
building and testing docker, hosted on EC2 and reachable at
http://docker-ci.dotcloud.com
Deployment
==========
# Load AWS credentials
export AWS_ACCESS_KEY_ID=''
export AWS_SECRET_ACCESS_KEY=''
export AWS_KEYPAIR_NAME=''
export AWS_SSH_PRIVKEY=''
# Load buildbot credentials and config
export BUILDBOT_PWD=''
export IRC_PWD=''
export IRC_CHANNEL='docker-dev'
export SMTP_USER=''
export SMTP_PWD=''
export EMAIL_RCP=''
# Load registry test credentials
export REGISTRY_USER=''
export REGISTRY_PWD=''
cd docker/testing
vagrant up --provider=aws
github pull request
===================
The entire docker pull request test workflow is event-driven by github. Its
usage is fully automatic and the results are logged at docker-ci.dotcloud.com
Each time there is a pull request on docker's github project, github connects
to docker-ci using github's REST API, documented at http://developer.github.com/v3/repos/hooks
The command issued to program github's PR notification event was:
curl -u GITHUB_USER:GITHUB_PASSWORD -d '{"name":"web","active":true,"events":["pull_request"],"config":{"url":"http://docker-ci.dotcloud.com:8011/change_hook/github?project=docker"}}' https://api.github.com/repos/dotcloud/docker/hooks
buildbot (0.8.7p1) was patched using ./testing/buildbot/github.py so that it
can understand the PR data github sends to it. PR #1603 (ee64e099e0) originally
implemented this capability. We also added a new scheduler that exclusively
filters PRs, and a 'pullrequest' builder that rebases the PR on top of master and tests it.
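For reference, the patched hook only needs a handful of fields from the JSON payload
github POSTs to the change hook (number, pull_request.user.login, pull_request.title,
pull_request.updated_at, plus the head revision). A payload captured by the hook's
debug line (github_<timestamp>.json) can be replayed against the endpoint roughly as
follows; the file name here is only an example:

curl --data-urlencode 'payload@github_2014-02-14_20-50-16.json' \
    'http://docker-ci.dotcloud.com:8011/change_hook/github?project=docker'
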
nightly release
================
The nightly release process is driven by buildbot, which runs a DinD container that downloads
the docker repository and builds the release container. The resulting docker
binary is then tested, and if everything is fine, the release is published.
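In buildbot this boils down to a single step running the dockerbuilder image through
dind (this is the nightlyrelease builder's command in master.cfg); roughly:

docker version
docker run -i -t -privileged -e AWS_S3_BUCKET=test.docker.io \
    dockerbuilder hack/dind dockerbuild.sh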


@ -1,47 +1,29 @@
# VERSION: 0.25
# DOCKER-VERSION 0.6.6
# AUTHOR: Daniel Mizyrycki <daniel@docker.com>
# DESCRIPTION: Deploy docker-ci on Digital Ocean
# COMMENTS:
# CONFIG_JSON is an environment variable json string loaded as:
#
# export CONFIG_JSON='
# { "DROPLET_NAME": "docker-ci",
# "DO_CLIENT_ID": "Digital_Ocean_client_id",
# "DO_API_KEY": "Digital_Ocean_api_key",
# "DOCKER_KEY_ID": "Digital_Ocean_ssh_key_id",
# "DOCKER_CI_KEY_PATH": "docker-ci_private_key_path",
# "DOCKER_CI_PUB": "$(cat docker-ci_ssh_public_key.pub)",
# "DOCKER_CI_KEY": "$(cat docker-ci_ssh_private_key.key)",
# "BUILDBOT_PWD": "Buildbot_server_password",
# "IRC_PWD": "Buildbot_IRC_password",
# "SMTP_USER": "SMTP_server_user",
# "SMTP_PWD": "SMTP_server_password",
# "PKG_ACCESS_KEY": "Docker_release_S3_bucket_access_key",
# "PKG_SECRET_KEY": "Docker_release_S3_bucket_secret_key",
# "PKG_GPG_PASSPHRASE": "Docker_release_gpg_passphrase",
# "INDEX_AUTH": "Index_encripted_user_password",
# "REGISTRY_USER": "Registry_test_user",
# "REGISTRY_PWD": "Registry_test_password",
# "REGISTRY_BUCKET": "Registry_S3_bucket_name",
# "REGISTRY_ACCESS_KEY": "Registry_S3_bucket_access_key",
# "REGISTRY_SECRET_KEY": "Registry_S3_bucket_secret_key",
# "IRC_CHANNEL": "Buildbot_IRC_channel",
# "EMAIL_RCP": "Buildbot_mailing_receipient" }'
#
#
# TO_BUILD: docker build -t docker-ci .
# TO_DEPLOY: docker run -e CONFIG_JSON="${CONFIG_JSON}" docker-ci
# DOCKER-VERSION: 0.7.6
# AUTHOR: Daniel Mizyrycki <daniel@dotcloud.com>
# DESCRIPTION: docker-ci continuous integration service
# TO_BUILD: docker build -rm -t docker-ci/docker-ci .
# TO_RUN: docker run -rm -i -t -p 8000:80 -p 2222:22 -v /run:/var/socket \
# -v /data/docker-ci:/data/docker-ci docker-ci/docker-ci
from ubuntu:12.04
maintainer Daniel Mizyrycki <daniel@dotcloud.com>
run echo 'deb http://archive.ubuntu.com/ubuntu precise main universe' \
> /etc/apt/sources.list
run apt-get update; apt-get install -y git python2.7 python-dev libevent-dev \
python-pip ssh rsync less vim
run pip install requests fabric
ENV DEBIAN_FRONTEND noninteractive
RUN echo 'deb http://archive.ubuntu.com/ubuntu precise main universe' > \
/etc/apt/sources.list; apt-get update
RUN apt-get install -y --no-install-recommends python2.7 python-dev \
libevent-dev git supervisor ssh rsync less vim sudo gcc wget nginx
RUN cd /tmp; wget http://python-distribute.org/distribute_setup.py
RUN cd /tmp; python distribute_setup.py; easy_install pip; rm distribute_setup.py
# Add deployment code and set default container command
add . /docker-ci
cmd "/docker-ci/deployment.py"
RUN apt-key adv --keyserver keyserver.ubuntu.com --recv-keys 36A1D7869245C8950F966E92D8576A8BA88D21E9
RUN echo 'deb http://get.docker.io/ubuntu docker main' > \
/etc/apt/sources.list.d/docker.list; apt-get update
RUN apt-get install -y lxc-docker-0.8.0
RUN pip install SQLAlchemy==0.7.10 buildbot buildbot-slave pyopenssl boto
RUN ln -s /var/socket/docker.sock /run/docker.sock
ADD . /docker-ci
RUN /docker-ci/setup.sh
ENTRYPOINT ["supervisord", "-n"]


@ -1,26 +1,65 @@
=======
testing
=======
=========
docker-ci
=========
This directory contains docker-ci testing-related files.
This directory contains the docker-ci continuous integration system.
As expected, it is fully dockerized and deployed using
docker-container-runner.
docker-ci is based on Buildbot, a continuous integration system designed
to automate the build/test cycle. By automatically rebuilding and testing
the tree each time something has changed, build problems are pinpointed
quickly, before other developers are inconvenienced by the failure.
We are running buildbot at Rackspace to verify that docker and docker-registry
pass their tests, and to check code coverage details.
The docker-ci instance is at https://docker-ci.docker.io/waterfall
Inside the docker-ci container we have the following directory structure:

/docker-ci                                           source code of docker-ci
/data/backup/docker-ci/                              daily backup (replicated over S3)
/data/docker-ci/coverage/{docker,docker-registry}/   mapped to host volumes
/data/buildbot/{master,slave}/                       main docker-ci buildbot config and database
/var/socket/{docker.sock}                            host volume access to docker socket
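These paths match the host volumes from the Dockerfile's TO_RUN line; a local run
(outside the dcr deployment described below) looks roughly like:

docker run -rm -i -t -p 8000:80 -p 2222:22 -v /run:/var/socket \
    -v /data/docker-ci:/data/docker-ci docker-ci/docker-ci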
Buildbot
========
Production deployment
=====================
Buildbot is a continuous integration system designed to automate the
build/test cycle. By automatically rebuilding and testing the tree each time
something has changed, build problems are pinpointed quickly, before other
developers are inconvenienced by the failure.
::
We are running buildbot in Amazon's EC2 to verify that docker passes all
tests when commits get pushed to the master branch, and to build
nightly releases using the awesome Docker-in-Docker implementation made
by Jerome Petazzoni.
# Clone docker-ci repository
git clone https://github.com/dotcloud/docker
cd docker/hack/infrastructure/docker-ci
https://github.com/jpetazzo/dind
export DOCKER_PROD=[PRODUCTION_SERVER_IP]
Docker's buildbot instance is at http://docker-ci.dotcloud.com/waterfall
# Create data host volume. (only once)
docker -H $DOCKER_PROD run -v /home:/data ubuntu:12.04 \
mkdir -p /data/docker-ci/coverage/docker
docker -H $DOCKER_PROD run -v /home:/data ubuntu:12.04 \
mkdir -p /data/docker-ci/coverage/docker-registry
docker -H $DOCKER_PROD run -v /home:/data ubuntu:12.04 \
chown -R 1000.1000 /data/docker-ci
For deployment instructions, please take a look at
hack/infrastructure/docker-ci/Dockerfile
# dcr deployment. Define credentials and special environment dcr variables
# ( retrieved at /hack/infrastructure/docker-ci/dcr/prod/docker-ci.yml )
export WEB_USER=[DOCKER-CI-WEBSITE-USERNAME]
export WEB_IRC_PWD=[DOCKER-CI-WEBSITE-PASSWORD]
export BUILDBOT_PWD=[BUILDSLAVE_PASSWORD]
export AWS_ACCESS_KEY=[DOCKER_RELEASE_S3_ACCESS]
export AWS_SECRET_KEY=[DOCKER_RELEASE_S3_SECRET]
export GPG_PASSPHRASE=[DOCKER_RELEASE_PASSPHRASE]
export BACKUP_AWS_ID=[S3_BUCKET_CREDENTIAL_ACCESS]
export BACKUP_AWS_SECRET=[S3_BUCKET_CREDENTIAL_SECRET]
export SMTP_USER=[MAILGUN_SMTP_USERNAME]
export SMTP_PWD=[MAILGUN_SMTP_PASSWORD]
export EMAIL_RCP=[EMAIL_FOR_BUILD_ERRORS]
# Build docker-ci and testbuilder docker images
docker -H $DOCKER_PROD build -rm -t docker-ci/docker-ci .
(cd testbuilder; docker -H $DOCKER_PROD build -rm -t docker-ci/testbuilder .)
# Run the docker-ci container (assuming no previous container is running)
(cd dcr/prod; dcr docker-ci.yml start)
(cd dcr/prod; dcr docker-ci.yml register docker-ci.docker.io)
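Staging follows the same pattern with the dcr/stage configuration; a rough sketch,
using a hypothetical DOCKER_STAGE variable for the staging server IP and the
docker-ci-stage.docker.io hostname referenced in master.cfg:

export DOCKER_STAGE=[STAGING_SERVER_IP]
docker -H $DOCKER_STAGE build -rm -t docker-ci/docker-ci .
(cd testbuilder; docker -H $DOCKER_STAGE build -rm -t docker-ci/testbuilder .)
(cd dcr/stage; dcr docker-ci.yml start)
(cd dcr/stage; dcr docker-ci.yml register docker-ci-stage.docker.io)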


@ -1 +1 @@
0.4.5
0.5.6


@ -1 +0,0 @@
Buildbot configuration and setup files


@ -1,18 +0,0 @@
[program:buildmaster]
command=twistd --nodaemon --no_save -y buildbot.tac
directory=/data/buildbot/master
chown= root:root
redirect_stderr=true
stdout_logfile=/var/log/supervisor/buildbot-master.log
stderr_logfile=/var/log/supervisor/buildbot-master.log
[program:buildworker]
command=twistd --nodaemon --no_save -y buildbot.tac
directory=/data/buildbot/slave
chown= root:root
redirect_stderr=true
stdout_logfile=/var/log/supervisor/buildbot-slave.log
stderr_logfile=/var/log/supervisor/buildbot-slave.log
[group:buildbot]
programs=buildmaster,buildworker


@ -17,7 +17,7 @@
"""
github_buildbot.py is based on git_buildbot.py
github_buildbot.py will determine the repository information from the JSON
github_buildbot.py will determine the repository information from the JSON
HTTP POST it receives from github.com and build the appropriate repository.
If your github repository is private, you must add a ssh key to the github
repository for the user who initiated the build on the buildslave.
@ -88,7 +88,8 @@ def getChanges(request, options = None):
payload = json.loads(request.args['payload'][0])
import urllib,datetime
fname = str(datetime.datetime.now()).replace(' ','_').replace(':','-')[:19]
open('github_{0}.json'.format(fname),'w').write(json.dumps(json.loads(urllib.unquote(request.args['payload'][0])), sort_keys = True, indent = 2))
# Github event debug
# open('github_{0}.json'.format(fname),'w').write(json.dumps(json.loads(urllib.unquote(request.args['payload'][0])), sort_keys = True, indent = 2))
if 'pull_request' in payload:
user = payload['pull_request']['user']['login']
@ -142,13 +143,13 @@ def process_change(payload, user, repo, repo_url, project):
'category' : 'github_pullrequest',
'who' : '{0} - PR#{1}'.format(user,payload['number']),
'files' : [],
'comments' : payload['pull_request']['title'],
'comments' : payload['pull_request']['title'],
'revision' : newrev,
'when' : convertTime(payload['pull_request']['updated_at']),
'branch' : branch,
'revlink' : '{0}/commit/{1}'.format(repo_url,newrev),
'repository' : repo_url,
'project' : project }]
'project' : project }]
return changes
for commit in payload['commits']:
files = []


@ -1,4 +1,4 @@
import os
import os, re
from buildbot.buildslave import BuildSlave
from buildbot.schedulers.forcesched import ForceScheduler
from buildbot.schedulers.basic import SingleBranchScheduler
@ -6,127 +6,156 @@ from buildbot.schedulers.timed import Nightly
from buildbot.changes import filter
from buildbot.config import BuilderConfig
from buildbot.process.factory import BuildFactory
from buildbot.process.properties import Interpolate
from buildbot.process.properties import Property
from buildbot.steps.shell import ShellCommand
from buildbot.status import html, words
from buildbot.status.web import authz, auth
from buildbot.status.mail import MailNotifier
PORT_WEB = 80 # Buildbot webserver port
PORT_GITHUB = 8011 # Buildbot github hook port
PORT_MASTER = 9989 # Port where buildbot master listens for buildworkers
TEST_USER = 'buildbot' # Credential to authenticate build triggers
TEST_PWD = 'docker' # Credential to authenticate build triggers
GITHUB_DOCKER = 'github.com/dotcloud/docker'
BUILDBOT_PATH = '/data/buildbot'
DOCKER_PATH = '/go/src/github.com/dotcloud/docker'
DOCKER_CI_PATH = '/docker-ci'
def ENV(x):
    '''Promote an environment variable for global use returning its value'''
    retval = os.environ.get(x, '')
    globals()[x] = retval
    return retval
class TestCommand(ShellCommand):
    '''Extend ShellCommand with optional summary logs'''
    def __init__(self, *args, **kwargs):
        super(TestCommand, self).__init__(*args, **kwargs)

    def createSummary(self, log):
        exit_status = re.sub(r'.+\n\+ exit (\d+).+',
            r'\1', log.getText()[-100:], flags=re.DOTALL)
        if exit_status != '0':
            return
        # Infer coverage path from log
        if '+ COVERAGE_PATH' in log.getText():
            path = re.sub(r'.+\+ COVERAGE_PATH=((.+?)-\d+).+',
                r'\2/\1', log.getText(), flags=re.DOTALL)
            url = '{}coverage/{}/index.html'.format(c['buildbotURL'], path)
            self.addURL('coverage', url)
        elif 'COVERAGE_FILE' in log.getText():
            path = re.sub(r'.+\+ COVERAGE_FILE=((.+?)-\d+).+',
                r'\2/\1', log.getText(), flags=re.DOTALL)
            url = '{}coverage/{}/index.html'.format(c['buildbotURL'], path)
            self.addURL('coverage', url)
PORT_WEB = 8000 # Buildbot webserver port
PORT_GITHUB = 8011 # Buildbot github hook port
PORT_MASTER = 9989 # Port where buildbot master listens for buildworkers
BUILDBOT_URL = '//localhost:{}/'.format(PORT_WEB)
DOCKER_REPO = 'https://github.com/docker-test/docker'
DOCKER_TEST_ARGV = 'HEAD {}'.format(DOCKER_REPO)
REGISTRY_REPO = 'https://github.com/docker-test/docker-registry'
REGISTRY_TEST_ARGV = 'HEAD {}'.format(REGISTRY_REPO)
if ENV('DEPLOYMENT') == 'staging':
    BUILDBOT_URL = "//docker-ci-stage.docker.io/"

if ENV('DEPLOYMENT') == 'production':
    BUILDBOT_URL = '//docker-ci.docker.io/'
    DOCKER_REPO = 'https://github.com/dotcloud/docker'
    DOCKER_TEST_ARGV = ''
    REGISTRY_REPO = 'https://github.com/dotcloud/docker-registry'
    REGISTRY_TEST_ARGV = ''
# Credentials set by setup.sh from deployment.py
BUILDBOT_PWD = ''
IRC_PWD = ''
IRC_CHANNEL = ''
SMTP_USER = ''
SMTP_PWD = ''
EMAIL_RCP = ''
ENV('WEB_USER')
ENV('WEB_IRC_PWD')
ENV('BUILDBOT_PWD')
ENV('SMTP_USER')
ENV('SMTP_PWD')
ENV('EMAIL_RCP')
ENV('IRC_CHANNEL')
c = BuildmasterConfig = {}
c['title'] = "Docker"
c['title'] = "docker-ci"
c['titleURL'] = "waterfall"
c['buildbotURL'] = "http://docker-ci.dotcloud.com/"
c['buildbotURL'] = BUILDBOT_URL
c['db'] = {'db_url':"sqlite:///state.sqlite"}
c['slaves'] = [BuildSlave('buildworker', BUILDBOT_PWD)]
c['slavePortnum'] = PORT_MASTER
# Schedulers
c['schedulers'] = [ForceScheduler(name='trigger', builderNames=['docker',
'index','registry','docker-coverage','registry-coverage','nightlyrelease'])]
c['schedulers'] += [SingleBranchScheduler(name="all", treeStableTimer=None,
c['schedulers'] = [ForceScheduler(name='trigger', builderNames=[
'docker', 'docker-registry', 'nightlyrelease', 'backup'])]
c['schedulers'] += [SingleBranchScheduler(name="docker", treeStableTimer=None,
change_filter=filter.ChangeFilter(branch='master',
repository='https://github.com/dotcloud/docker'), builderNames=['docker'])]
c['schedulers'] += [SingleBranchScheduler(name='pullrequest',
change_filter=filter.ChangeFilter(category='github_pullrequest'), treeStableTimer=None,
builderNames=['pullrequest'])]
c['schedulers'] += [Nightly(name='daily', branch=None, builderNames=['nightlyrelease',
'docker-coverage','registry-coverage'], hour=7, minute=00)]
c['schedulers'] += [Nightly(name='every4hrs', branch=None, builderNames=['registry','index'],
hour=range(0,24,4), minute=15)]
repository=DOCKER_REPO), builderNames=['docker'])]
c['schedulers'] += [SingleBranchScheduler(name="registry", treeStableTimer=None,
change_filter=filter.ChangeFilter(branch='master',
repository=REGISTRY_REPO), builderNames=['docker-registry'])]
c['schedulers'] += [SingleBranchScheduler(name='docker-pr', treeStableTimer=None,
change_filter=filter.ChangeFilter(category='github_pullrequest',
project='docker'), builderNames=['docker-pr'])]
c['schedulers'] += [SingleBranchScheduler(name='docker-registry-pr', treeStableTimer=None,
change_filter=filter.ChangeFilter(category='github_pullrequest',
project='docker-registry'), builderNames=['docker-registry-pr'])]
c['schedulers'] += [Nightly(name='daily', branch=None, builderNames=[
'nightlyrelease', 'backup'], hour=7, minute=00)]
# Builders
# Docker commit test
test_cmd = ('docker run -privileged mzdaniel/test_docker hack/dind'
' test_docker.sh %(src::revision)s')
# Backup
factory = BuildFactory()
factory.addStep(ShellCommand(description='Docker', logEnviron=False,
usePTY=True, command=["sh", "-c", Interpolate(test_cmd)]))
c['builders'] = [BuilderConfig(name='docker',slavenames=['buildworker'],
factory.addStep(TestCommand(description='backup', logEnviron=False,
usePTY=True, command='/docker-ci/tool/backup.py'))
c['builders'] = [BuilderConfig(name='backup',slavenames=['buildworker'],
factory=factory)]
# Docker test
factory = BuildFactory()
factory.addStep(TestCommand(description='docker', logEnviron=False,
usePTY=True, command='/docker-ci/dockertest/docker {}'.format(DOCKER_TEST_ARGV)))
c['builders'] += [BuilderConfig(name='docker',slavenames=['buildworker'],
factory=factory)]
# Docker pull request test
test_cmd = ('docker run -privileged mzdaniel/test_docker hack/dind'
' test_docker.sh %(src::revision)s %(src::repository)s %(src::branch)s')
factory = BuildFactory()
factory.addStep(ShellCommand(description='pull_request', logEnviron=False,
usePTY=True, command=["sh", "-c", Interpolate(test_cmd)]))
c['builders'] += [BuilderConfig(name='pullrequest',slavenames=['buildworker'],
factory.addStep(TestCommand(description='docker-pr', logEnviron=False,
usePTY=True, command=['/docker-ci/dockertest/docker',
Property('revision'), Property('repository'), Property('branch')]))
c['builders'] += [BuilderConfig(name='docker-pr',slavenames=['buildworker'],
factory=factory)]
# Docker coverage test
# docker-registry test
factory = BuildFactory()
factory.addStep(ShellCommand(description='docker-coverage', logEnviron=False,
usePTY=True, command='{0}/docker-coverage/coverage-docker.sh'.format(
DOCKER_CI_PATH)))
c['builders'] += [BuilderConfig(name='docker-coverage',slavenames=['buildworker'],
factory.addStep(TestCommand(description='docker-registry', logEnviron=False,
usePTY=True, command='/docker-ci/dockertest/docker-registry {}'.format(REGISTRY_TEST_ARGV)))
c['builders'] += [BuilderConfig(name='docker-registry',slavenames=['buildworker'],
factory=factory)]
# Docker registry coverage test
# Docker registry pull request test
factory = BuildFactory()
factory.addStep(ShellCommand(description='registry-coverage', logEnviron=False,
usePTY=True, command='docker run registry_coverage'.format(
DOCKER_CI_PATH)))
c['builders'] += [BuilderConfig(name='registry-coverage',slavenames=['buildworker'],
factory=factory)]
# Registry functional test
factory = BuildFactory()
factory.addStep(ShellCommand(description='registry', logEnviron=False,
command='. {0}/master/credentials.cfg; '
'{1}/functionaltests/test_registry.sh'.format(BUILDBOT_PATH, DOCKER_CI_PATH),
usePTY=True))
c['builders'] += [BuilderConfig(name='registry',slavenames=['buildworker'],
factory=factory)]
# Index functional test
factory = BuildFactory()
factory.addStep(ShellCommand(description='index', logEnviron=False,
command='. {0}/master/credentials.cfg; '
'{1}/functionaltests/test_index.py'.format(BUILDBOT_PATH, DOCKER_CI_PATH),
usePTY=True))
c['builders'] += [BuilderConfig(name='index',slavenames=['buildworker'],
factory.addStep(TestCommand(description='docker-registry-pr', logEnviron=False,
usePTY=True, command=['/docker-ci/dockertest/docker-registry',
Property('revision'), Property('repository'), Property('branch')]))
c['builders'] += [BuilderConfig(name='docker-registry-pr',slavenames=['buildworker'],
factory=factory)]
# Docker nightly release
nightlyrelease_cmd = ('docker version; docker run -i -t -privileged -e AWS_S3_BUCKET='
'test.docker.io dockerbuilder hack/dind dockerbuild.sh')
factory = BuildFactory()
factory.addStep(ShellCommand(description='NightlyRelease',logEnviron=False,
usePTY=True, command=nightlyrelease_cmd))
usePTY=True, command=['/docker-ci/dockertest/nightlyrelease']))
c['builders'] += [BuilderConfig(name='nightlyrelease',slavenames=['buildworker'],
factory=factory)]
# Status
authz_cfg = authz.Authz(auth=auth.BasicAuth([(TEST_USER, TEST_PWD)]),
authz_cfg = authz.Authz(auth=auth.BasicAuth([(WEB_USER, WEB_IRC_PWD)]),
forceBuild='auth')
c['status'] = [html.WebStatus(http_port=PORT_WEB, authz=authz_cfg)]
c['status'].append(html.WebStatus(http_port=PORT_GITHUB, allowForce=True,
change_hook_dialects={ 'github': True }))
c['status'].append(MailNotifier(fromaddr='buildbot@docker.io',
c['status'].append(MailNotifier(fromaddr='docker-test@docker.io',
sendToInterestedUsers=False, extraRecipients=[EMAIL_RCP],
mode='failing', relayhost='smtp.mailgun.org', smtpPort=587, useTls=True,
smtpUser=SMTP_USER, smtpPassword=SMTP_PWD))
c['status'].append(words.IRC("irc.freenode.net", "dockerqabot",
channels=[IRC_CHANNEL], password=IRC_PWD, allowForce=True,
channels=[IRC_CHANNEL], password=WEB_IRC_PWD, allowForce=True,
notify_events={'exception':1, 'successToFailure':1, 'failureToSuccess':1}))
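Not part of this commit, but a quick way to sanity-check a master.cfg like this one
(assuming buildbot is installed) is its built-in config check, run from the master
directory:

buildbot checkconfig master.cfg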

View file

@ -1,9 +0,0 @@
sqlalchemy<=0.7.9
sqlalchemy-migrate>=0.7.2
buildbot==0.8.7p1
buildbot_slave==0.8.7p1
nose==1.2.1
requests==1.1.0
flask==0.10.1
simplejson==2.3.2
selenium==2.35.0


@ -1,59 +0,0 @@
#!/usr/bin/env bash
# Setup of buildbot configuration. Package installation is being done by
# Vagrantfile
# Dependencies: buildbot, buildbot-slave, supervisor
USER=$1
CFG_PATH=$2
DOCKER_PATH=$3
BUILDBOT_PWD=$4
IRC_PWD=$5
IRC_CHANNEL=$6
SMTP_USER=$7
SMTP_PWD=$8
EMAIL_RCP=$9
REGISTRY_USER=${10}
REGISTRY_PWD=${11}
REGISTRY_BUCKET=${12}
REGISTRY_ACCESS_KEY=${13}
REGISTRY_SECRET_KEY=${14}
BUILDBOT_PATH="/data/buildbot"
SLAVE_NAME="buildworker"
SLAVE_SOCKET="localhost:9989"
export PATH="/bin:sbin:/usr/bin:/usr/sbin:/usr/local/bin"
function run { su $USER -c "$1"; }
# Exit if buildbot has already been installed
[ -d "$BUILDBOT_PATH" ] && exit 0
# Setup buildbot
run "mkdir -p $BUILDBOT_PATH"
cd $BUILDBOT_PATH
run "buildbot create-master master"
run "cp $CFG_PATH/master.cfg master"
run "sed -i -E 's#(BUILDBOT_PWD = ).+#\1\"$BUILDBOT_PWD\"#' master/master.cfg"
run "sed -i -E 's#(IRC_PWD = ).+#\1\"$IRC_PWD\"#' master/master.cfg"
run "sed -i -E 's#(IRC_CHANNEL = ).+#\1\"$IRC_CHANNEL\"#' master/master.cfg"
run "sed -i -E 's#(SMTP_USER = ).+#\1\"$SMTP_USER\"#' master/master.cfg"
run "sed -i -E 's#(SMTP_PWD = ).+#\1\"$SMTP_PWD\"#' master/master.cfg"
run "sed -i -E 's#(EMAIL_RCP = ).+#\1\"$EMAIL_RCP\"#' master/master.cfg"
run "buildslave create-slave slave $SLAVE_SOCKET $SLAVE_NAME $BUILDBOT_PWD"
run "echo 'export DOCKER_CREDS=\"$REGISTRY_USER:$REGISTRY_PWD\"' > $BUILDBOT_PATH/master/credentials.cfg"
run "echo 'export S3_BUCKET=\"$REGISTRY_BUCKET\"' >> $BUILDBOT_PATH/master/credentials.cfg"
run "echo 'export S3_ACCESS_KEY=\"$REGISTRY_ACCESS_KEY\"' >> $BUILDBOT_PATH/master/credentials.cfg"
run "echo 'export S3_SECRET_KEY=\"$REGISTRY_SECRET_KEY\"' >> $BUILDBOT_PATH/master/credentials.cfg"
# Patch github webstatus to capture pull requests
cp $CFG_PATH/github.py /usr/local/lib/python2.7/dist-packages/buildbot/status/web/hooks
# Allow buildbot subprocesses (docker tests) to properly run in containers,
# in particular with docker -u
run "sed -i 's/^umask = None/umask = 000/' slave/buildbot.tac"
# Setup supervisor
cp $CFG_PATH/buildbot.conf /etc/supervisor/conf.d/buildbot.conf
sed -i -E "s/^chmod=0700.+/chmod=0770\nchown=root:$USER/" /etc/supervisor/supervisord.conf
kill -HUP $(pgrep -f "/usr/bin/python /usr/bin/supervisord")


@ -0,0 +1,22 @@
docker-ci:
image: "docker-ci/docker-ci"
release_name: "docker-ci-0.5.6"
ports: ["80","2222:22","8011:8011"]
register: "80"
volumes: ["/run:/var/socket","/home/docker-ci:/data/docker-ci"]
command: []
env:
- "DEPLOYMENT=production"
- "IRC_CHANNEL=docker-testing"
- "BACKUP_BUCKET=backup-ci"
- "$WEB_USER"
- "$WEB_IRC_PWD"
- "$BUILDBOT_PWD"
- "$AWS_ACCESS_KEY"
- "$AWS_SECRET_KEY"
- "$GPG_PASSPHRASE"
- "$BACKUP_AWS_ID"
- "$BACKUP_AWS_SECRET"
- "$SMTP_USER"
- "$SMTP_PWD"
- "$EMAIL_RCP"


@ -0,0 +1,5 @@
default:
hipaches: ['192.168.100.67:6379']
daemons: ['192.168.100.67:4243']
use_ssh: False


@ -0,0 +1,22 @@
docker-ci:
image: "docker-ci/docker-ci"
release_name: "docker-ci-stage"
ports: ["80","2222:22","8011:8011"]
register: "80"
volumes: ["/run:/var/socket","/home/docker-ci:/data/docker-ci"]
command: []
env:
- "DEPLOYMENT=staging"
- "IRC_CHANNEL=docker-testing-staging"
- "BACKUP_BUCKET=ci-backup-stage"
- "$BACKUP_AWS_ID"
- "$BACKUP_AWS_SECRET"
- "$WEB_USER"
- "$WEB_IRC_PWD"
- "$BUILDBOT_PWD"
- "$AWS_ACCESS_KEY"
- "$AWS_SECRET_KEY"
- "$GPG_PASSPHRASE"
- "$SMTP_USER"
- "$SMTP_PWD"
- "$EMAIL_RCP"


@ -0,0 +1,5 @@
default:
hipaches: ['192.168.100.65:6379']
daemons: ['192.168.100.65:4243']
use_ssh: False


@ -1,171 +0,0 @@
#!/usr/bin/env python
import os, sys, re, json, requests, base64
from subprocess import call
from fabric import api
from fabric.api import cd, run, put, sudo
from os import environ as env
from datetime import datetime
from time import sleep
# Remove SSH private key as it needs more processing
CONFIG = json.loads(re.sub(r'("DOCKER_CI_KEY".+?"(.+?)",)','',
env['CONFIG_JSON'], flags=re.DOTALL))
# Populate environment variables
for key in CONFIG:
    env[key] = CONFIG[key]
# Load SSH private key
env['DOCKER_CI_KEY'] = re.sub('^.+"DOCKER_CI_KEY".+?"(.+?)".+','\\1',
env['CONFIG_JSON'],flags=re.DOTALL)
DROPLET_NAME = env.get('DROPLET_NAME','docker-ci')
TIMEOUT = 120 # Seconds before timeout droplet creation
IMAGE_ID = 1004145 # Docker on Ubuntu 13.04
REGION_ID = 4 # New York 2
SIZE_ID = 62 # memory 2GB
DO_IMAGE_USER = 'root' # Image user on Digital Ocean
API_URL = 'https://api.digitalocean.com/'
DOCKER_PATH = '/go/src/github.com/dotcloud/docker'
DOCKER_CI_PATH = '/docker-ci'
CFG_PATH = '{}/buildbot'.format(DOCKER_CI_PATH)
class DigitalOcean():
    def __init__(self, key, client):
        '''Set default API parameters'''
        self.key = key
        self.client = client
        self.api_url = API_URL

    def api(self, cmd_path, api_arg={}):
        '''Make api call'''
        api_arg.update({'api_key':self.key, 'client_id':self.client})
        resp = requests.get(self.api_url + cmd_path, params=api_arg).text
        resp = json.loads(resp)
        if resp['status'] != 'OK':
            raise Exception(resp['error_message'])
        return resp

    def droplet_data(self, name):
        '''Get droplet data'''
        data = self.api('droplets')
        data = [droplet for droplet in data['droplets']
                if droplet['name'] == name]
        return data[0] if data else {}

def json_fmt(data):
    '''Format json output'''
    return json.dumps(data, sort_keys = True, indent = 2)
do = DigitalOcean(env['DO_API_KEY'], env['DO_CLIENT_ID'])
# Get DROPLET_NAME data
data = do.droplet_data(DROPLET_NAME)
# Stop processing if DROPLET_NAME exists on Digital Ocean
if data:
    print ('Droplet: {} already deployed. Not further processing.'
        .format(DROPLET_NAME))
    exit(1)
# Create droplet
do.api('droplets/new', {'name':DROPLET_NAME, 'region_id':REGION_ID,
'image_id':IMAGE_ID, 'size_id':SIZE_ID,
'ssh_key_ids':[env['DOCKER_KEY_ID']]})
# Wait for droplet to be created.
start_time = datetime.now()
while (data.get('status','') != 'active' and (
        datetime.now()-start_time).seconds < TIMEOUT):
    data = do.droplet_data(DROPLET_NAME)
    print data['status']
    sleep(3)
# Wait for the machine to boot
sleep(15)
# Get droplet IP
ip = str(data['ip_address'])
print 'droplet: {} ip: {}'.format(DROPLET_NAME, ip)
# Create docker-ci ssh private key so docker-ci docker container can communicate
# with its EC2 instance
os.makedirs('/root/.ssh')
open('/root/.ssh/id_rsa','w').write(env['DOCKER_CI_KEY'])
os.chmod('/root/.ssh/id_rsa',0600)
open('/root/.ssh/config','w').write('StrictHostKeyChecking no\n')
api.env.host_string = ip
api.env.user = DO_IMAGE_USER
api.env.key_filename = '/root/.ssh/id_rsa'
# Correct timezone
sudo('echo "America/Los_Angeles" >/etc/timezone')
sudo('dpkg-reconfigure --frontend noninteractive tzdata')
# Load public docker-ci key
sudo("echo '{}' >> /root/.ssh/authorized_keys".format(env['DOCKER_CI_PUB']))
# Create docker nightly release credentials file
credentials = {
'AWS_ACCESS_KEY': env['PKG_ACCESS_KEY'],
'AWS_SECRET_KEY': env['PKG_SECRET_KEY'],
'GPG_PASSPHRASE': env['PKG_GPG_PASSPHRASE']}
open(DOCKER_CI_PATH + '/nightlyrelease/release_credentials.json', 'w').write(
base64.b64encode(json.dumps(credentials)))
# Transfer docker
sudo('mkdir -p ' + DOCKER_CI_PATH)
sudo('chown {}.{} {}'.format(DO_IMAGE_USER, DO_IMAGE_USER, DOCKER_CI_PATH))
call('/usr/bin/rsync -aH {} {}@{}:{}'.format(DOCKER_CI_PATH, DO_IMAGE_USER, ip,
os.path.dirname(DOCKER_CI_PATH)), shell=True)
# Install Docker and Buildbot dependencies
sudo('mkdir /mnt/docker; ln -s /mnt/docker /var/lib/docker')
sudo('apt-key adv --keyserver keyserver.ubuntu.com --recv-keys 36A1D7869245C8950F966E92D8576A8BA88D21E9')
sudo('echo deb https://get.docker.io/ubuntu docker main >'
' /etc/apt/sources.list.d/docker.list')
sudo('echo -e "deb http://archive.ubuntu.com/ubuntu raring main universe\n'
'deb http://us.archive.ubuntu.com/ubuntu/ raring-security main universe\n"'
' > /etc/apt/sources.list; apt-get update')
sudo('DEBIAN_FRONTEND=noninteractive apt-get install -q -y wget python-dev'
' python-pip supervisor git mercurial linux-image-extra-$(uname -r)'
' aufs-tools make libfontconfig libevent-dev libsqlite3-dev libssl-dev')
sudo('wget -O - https://go.googlecode.com/files/go1.2.linux-amd64.tar.gz | '
'tar -v -C /usr/local -xz; ln -s /usr/local/go/bin/go /usr/bin/go')
sudo('GOPATH=/go go get -d github.com/dotcloud/docker')
sudo('pip install -r {}/requirements.txt'.format(CFG_PATH))
# Install docker and testing dependencies
sudo('apt-get install -y -q lxc-docker')
sudo('curl -s https://phantomjs.googlecode.com/files/'
'phantomjs-1.9.1-linux-x86_64.tar.bz2 | tar jx -C /usr/bin'
' --strip-components=2 phantomjs-1.9.1-linux-x86_64/bin/phantomjs')
# Build docker-ci containers
sudo('cd {}; docker build -t docker .'.format(DOCKER_PATH))
sudo('cd {}; docker build -t docker-ci .'.format(DOCKER_CI_PATH))
sudo('cd {}/nightlyrelease; docker build -t dockerbuilder .'.format(
DOCKER_CI_PATH))
sudo('cd {}/registry-coverage; docker build -t registry_coverage .'.format(
DOCKER_CI_PATH))
# Download docker-ci testing container
sudo('docker pull mzdaniel/test_docker')
# Setup buildbot
sudo('mkdir /data')
sudo('{0}/setup.sh root {0} {1} {2} {3} {4} {5} {6} {7} {8} {9} {10}'
' {11} {12}'.format(CFG_PATH, DOCKER_PATH, env['BUILDBOT_PWD'],
env['IRC_PWD'], env['IRC_CHANNEL'], env['SMTP_USER'],
env['SMTP_PWD'], env['EMAIL_RCP'], env['REGISTRY_USER'],
env['REGISTRY_PWD'], env['REGISTRY_BUCKET'], env['REGISTRY_ACCESS_KEY'],
env['REGISTRY_SECRET_KEY']))
# Preventively reboot docker-ci daily
sudo('ln -s /sbin/reboot /etc/cron.daily')


@ -1,32 +0,0 @@
#!/usr/bin/env bash
set -x
# Generate a random string of $1 characters
function random {
cat /dev/urandom | tr -cd 'a-f0-9' | head -c $1
}
# Compute test paths
BASE_PATH=`pwd`/test_docker_$(random 12)
DOCKER_PATH=$BASE_PATH/go/src/github.com/dotcloud/docker
export GOPATH=$BASE_PATH/go:$DOCKER_PATH/vendor
# Fetch latest master
mkdir -p $DOCKER_PATH
cd $DOCKER_PATH
git init .
git fetch -q http://github.com/dotcloud/docker master
git reset --hard FETCH_HEAD
# Fetch go coverage
cd $BASE_PATH/go
GOPATH=$BASE_PATH/go go get github.com/axw/gocov/gocov
sudo -E GOPATH=$GOPATH ./bin/gocov test -deps -exclude-goroot -v\
-exclude github.com/gorilla/context,github.com/gorilla/mux,github.com/kr/pty,\
code.google.com/p/go.net/websocket\
github.com/dotcloud/docker | ./bin/gocov report; exit_status=$?
# Cleanup testing directory
rm -rf $BASE_PATH
exit $exit_status


@ -1,25 +0,0 @@
# VERSION: 0.4
# DOCKER-VERSION 0.6.6
# AUTHOR: Daniel Mizyrycki <daniel@docker.com>
# DESCRIPTION: Testing docker PRs and commits on top of master using Docker in Docker
# REFERENCES: This code reuses the excellent implementation of
# Docker in Docker made by Jerome Petazzoni.
# https://github.com/jpetazzo/dind
# COMMENTS:
# This Dockerfile adapts /Dockerfile to enable docker PRs and commits testing
# Optional arguments:
# [commit] (default: 'HEAD')
# [repo] (default: 'http://github.com/dotcloud/docker')
# [branch] (default: 'master')
# TO_BUILD: docker build -t test_docker .
# TO_RUN: docker run -privileged test_docker hack/dind test_docker.sh [commit] [repo] [branch]
from docker
maintainer Daniel Mizyrycki <daniel@docker.com>
# Setup go in PATH. Extracted from /Dockerfile
env PATH /usr/local/go/bin:$PATH
# Add test_docker.sh
add test_docker.sh /usr/bin/test_docker.sh
run chmod +x /usr/bin/test_docker.sh


@ -1,33 +0,0 @@
#!/usr/bin/env bash
set -x
COMMIT=${1-HEAD}
REPO=${2-http://github.com/dotcloud/docker}
BRANCH=${3-master}
# Compute test paths
DOCKER_PATH=/go/src/github.com/dotcloud/docker
# Timestamp
echo
date; echo
# Fetch latest master
cd /
rm -rf /go
git clone -q -b master http://github.com/dotcloud/docker $DOCKER_PATH
cd $DOCKER_PATH
# Merge commit
git fetch -q "$REPO" "$BRANCH"
git merge --no-edit $COMMIT || exit 255
# Test commit
./hack/make.sh test; exit_status=$?
# Display load if test fails
if [ $exit_status -ne 0 ] ; then
uptime; echo; free
fi
exit $exit_status


@ -0,0 +1 @@
project


@ -0,0 +1 @@
project


@ -0,0 +1,13 @@
#!/usr/bin/env bash
if [ "$DEPLOYMENT" == "production" ]; then
AWS_S3_BUCKET='test.docker.io'
else
AWS_S3_BUCKET='get-staging.docker.io'
fi
docker run -rm -privileged -v /run:/var/socket \
-e AWS_S3_BUCKET=$AWS_S3_BUCKET -e AWS_ACCESS_KEY=$AWS_ACCESS_KEY \
-e AWS_SECRET_KEY=$AWS_SECRET_KEY -e GPG_PASSPHRASE=$GPG_PASSPHRASE \
-e DOCKER_RELEASE=1 -e DEPLOYMENT=$DEPLOYMENT docker-ci/testbuilder docker


@ -0,0 +1,8 @@
#!/usr/bin/env bash
set -x
PROJECT_NAME=$(basename $0)
docker run -rm -u sysadmin -e DEPLOYMENT=$DEPLOYMENT -v /run:/var/socket \
-v /home/docker-ci/coverage/$PROJECT_NAME:/data docker-ci/testbuilder $PROJECT_NAME $1 $2 $3


@ -0,0 +1,12 @@
server {
    listen 80;
    root /data/docker-ci;

    location / {
        proxy_pass http://localhost:8000/;
    }

    location /coverage {
        root /data/docker-ci;
    }
}


@ -1,30 +0,0 @@
# VERSION: 1.6
# DOCKER-VERSION 0.6.6
# AUTHOR: Daniel Mizyrycki <daniel@docker.com>
# DESCRIPTION: Build docker nightly release using Docker in Docker.
# REFERENCES: This code reuses the excellent implementation of docker in docker
# made by Jerome Petazzoni. https://github.com/jpetazzo/dind
# COMMENTS:
# release_credentials.json is a base64 json encoded file containing:
# { "AWS_ACCESS_KEY": "Test_docker_AWS_S3_bucket_id",
# "AWS_SECRET_KEY": "Test_docker_AWS_S3_bucket_key",
# "GPG_PASSPHRASE": "Test_docker_GPG_passphrase_signature" }
# TO_BUILD: docker build -t dockerbuilder .
# TO_RELEASE: docker run -i -t -privileged -e AWS_S3_BUCKET="test.docker.io" dockerbuilder hack/dind dockerbuild.sh
from docker
maintainer Daniel Mizyrycki <daniel@docker.com>
# Add docker dependencies and downloading packages
run echo 'deb http://archive.ubuntu.com/ubuntu precise main universe' > /etc/apt/sources.list
run apt-get update; apt-get install -y -q wget python2.7
# Add production docker binary
run wget -q -O /usr/bin/docker http://get.docker.io/builds/Linux/x86_64/docker-latest; chmod +x /usr/bin/docker
# Add proto docker builder
add ./dockerbuild.sh /usr/bin/dockerbuild.sh
run chmod +x /usr/bin/dockerbuild.sh
# Add release credentials
add ./release_credentials.json /root/release_credentials.json


@ -1,40 +0,0 @@
#!/usr/bin/env bash
# Variables AWS_ACCESS_KEY, AWS_SECRET_KEY and GPG_PASSPHRASE are decoded
# from /root/release_credentials.json
# Variable AWS_S3_BUCKET is passed to the environment from docker run -e
# Turn debug off to load credentials from the environment
set +x
eval $(cat /root/release_credentials.json | python -c '
import sys,json,base64;
d=json.loads(base64.b64decode(sys.stdin.read()));
exec("""for k in d: print "export {0}=\\"{1}\\"".format(k,d[k])""")')
# Fetch docker master branch
set -x
cd /
rm -rf /go
git clone -q -b master http://github.com/dotcloud/docker /go/src/github.com/dotcloud/docker
cd /go/src/github.com/dotcloud/docker
# Launch docker daemon using dind inside the container
/usr/bin/docker version
/usr/bin/docker -d &
sleep 5
# Build Docker release container
docker build -t docker .
# Test docker and if everything works well, release
echo docker run -i -t -privileged -e AWS_S3_BUCKET=$AWS_S3_BUCKET -e AWS_ACCESS_KEY=XXXXX -e AWS_SECRET_KEY=XXXXX -e GPG_PASSPHRASE=XXXXX docker hack/release.sh
set +x
docker run -privileged -i -t -e AWS_S3_BUCKET=$AWS_S3_BUCKET -e AWS_ACCESS_KEY=$AWS_ACCESS_KEY -e AWS_SECRET_KEY=$AWS_SECRET_KEY -e GPG_PASSPHRASE=$GPG_PASSPHRASE docker hack/release.sh
exit_status=$?
# Display load if test fails
set -x
if [ $exit_status -ne 0 ] ; then
uptime; echo; free
exit 1
fi


@ -1,18 +0,0 @@
# VERSION: 0.1
# DOCKER-VERSION 0.6.4
# AUTHOR: Daniel Mizyrycki <daniel@dotcloud.com>
# DESCRIPTION: Docker registry coverage
# COMMENTS: Add registry coverage into the docker-ci image
# TO_BUILD: docker build -t registry_coverage .
# TO_RUN: docker run registry_coverage
from docker-ci
maintainer Daniel Mizyrycki <daniel@dotcloud.com>
# Add registry_coverage.sh and dependencies
run pip install coverage flask pyyaml requests simplejson python-glanceclient \
blinker redis boto gevent rsa mock
add registry_coverage.sh /usr/bin/registry_coverage.sh
run chmod +x /usr/bin/registry_coverage.sh
cmd "/usr/bin/registry_coverage.sh"


@ -1,18 +0,0 @@
#!/usr/bin/env bash
set -x
# Setup the environment
REGISTRY_PATH=/data/docker-registry
export SETTINGS_FLAVOR=test
export DOCKER_REGISTRY_CONFIG=config_test.yml
export PYTHONPATH=$REGISTRY_PATH/test
# Fetch latest docker-registry master
rm -rf $REGISTRY_PATH
git clone https://github.com/dotcloud/docker-registry -b master $REGISTRY_PATH
cd $REGISTRY_PATH
# Generate coverage
coverage run -m unittest discover test || exit 1
coverage report --include='./*' --omit='./test/*'


@ -0,0 +1,54 @@
#!/usr/bin/env bash
# Set timezone
echo "GMT" >/etc/timezone
dpkg-reconfigure --frontend noninteractive tzdata
# Set ssh superuser
mkdir -p /data/buildbot /var/run/sshd /run
useradd -m -d /home/sysadmin -s /bin/bash -G sudo,docker -p '*' sysadmin
sed -Ei 's/(\%sudo.*) ALL/\1 NOPASSWD:ALL/' /etc/sudoers
cd /home/sysadmin
mkdir .ssh
chmod 700 .ssh
cat > .ssh/authorized_keys << 'EOF'
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQC7ALVhwQ68q1SjrKaAduOuOEAcWmb8kDZf5qA7T1fM8AP07EDC7nSKRJ8PXUBGTOQfxm89coJDuSJsTAZ+1PvglXhA0Mq6+knc6ZrZY+SuZlDIDAk4TOdVPoDZnmR1YW2McxHkhcGIOKeC8MMig5NeEjtgQwXzauUSPqeh8HMlLZRMooFYyyluIpn7NaCLzyWjwAQz2s3KyI7VE7hl+ncCrW86v+dciEdwqtzNoUMFb3iDpPxaiCl3rv+SB7co/5eUDTs1FZvUcYMXKQuf8R+2ZKzXOpwr0Zs8sKQXvXavCeWykwGgXLBjVkvrDcHuDD6UXCW63UKgmRECpLZaMBVIIRWLEEgTS5OSQTcxpMVe5zUW6sDvXHTcdPwWrcn1dE9F/0vLC0HJ4ADKelLX5zyTpmXGbuZuntIf1JO67D/K/P++uV1rmVIH+zgtOf23w5rX2zKb4BSTqP0sv61pmWV7MEVoEz6yXswcTjS92tb775v7XLU9vKAkt042ORFdE4/++hejhL/Lj52IRgjt1CJZHZsR9JywJZrz3kYuf8eU2J2FYh0Cpz5gmf0f+12Rt4HztnZxGPP4KuMa66e4+hpx1jynjMZ7D5QUnNYEmuvJByopn8HSluuY/kS5MMyZCZtJLEPGX4+yECX0Di/S0vCRl2NyqfCBqS+yXXT5SA1nFw== docker-test@docker.io
EOF
chmod 600 .ssh/authorized_keys
chown -R sysadmin .ssh
# Fix docker group id for use of host dockerd by sysadmin
sed -Ei 's/(docker:x:)[^:]+/\1999/' /etc/group
# Create buildbot configuration
cd /data/buildbot; buildbot create-master master
cp -a /data/buildbot/master/master.cfg.sample \
/data/buildbot/master/master.cfg
cd /data/buildbot; \
buildslave create-slave slave localhost:9989 buildworker pass
cp /docker-ci/buildbot/master.cfg /data/buildbot/master
# Patch github webstatus to capture pull requests
cp /docker-ci/buildbot/github.py /usr/local/lib/python2.7/dist-packages/buildbot/status/web/hooks
chown -R sysadmin.sysadmin /data
# Create nginx configuration
rm /etc/nginx/sites-enabled/default
cp /docker-ci/nginx/nginx.conf /etc/nginx/conf.d/buildbot.conf
/bin/echo -e '\ndaemon off;\n' >> /etc/nginx/nginx.conf
# Set supervisord buildbot, nginx and sshd processes
/bin/echo -e "\
[program:buildmaster]\n\
command=twistd --nodaemon --no_save -y buildbot.tac\n\
directory=/data/buildbot/master\n\
user=sysadmin\n\n\
[program:buildworker]\n\
command=twistd --nodaemon --no_save -y buildbot.tac\n\
directory=/data/buildbot/slave\n\
user=sysadmin\n" > \
/etc/supervisor/conf.d/buildbot.conf
/bin/echo -e "[program:nginx]\ncommand=/usr/sbin/nginx\n" > \
/etc/supervisor/conf.d/nginx.conf
/bin/echo -e "[program:sshd]\ncommand=/usr/sbin/sshd -D\n" > \
/etc/supervisor/conf.d/sshd.conf
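A hypothetical way to verify the supervised processes once the container is up, using
the 2222:22 ssh mapping and the passwordless sudo granted to sysadmin above (this
assumes you hold the private key matching the authorized_keys entry, and that the
container runs on <docker-ci-host>):

ssh -p 2222 sysadmin@<docker-ci-host> sudo supervisorctl status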


@ -0,0 +1,12 @@
# TO_BUILD: docker build -rm -no-cache -t docker-ci/testbuilder .
# TO_RUN: docker run -rm -u sysadmin \
# -v /run:/var/socket docker-ci/testbuilder docker-registry
#
FROM docker-ci/docker-ci
ENV HOME /home/sysadmin
RUN mkdir /testbuilder
ADD . /testbuilder
ENTRYPOINT ["/testbuilder/testbuilder.sh"]


@ -0,0 +1,12 @@
#!/usr/bin/env bash
set -x
set -e
PROJECT_PATH=$1
# Build the docker project
cd /data/$PROJECT_PATH
sg docker -c "docker build -q -rm -t registry ."
cd test; sg docker -c "docker build -q -rm -t docker-registry-test ."
# Run the tests
sg docker -c "docker run -rm -v /home/docker-ci/coverage/docker-registry:/data docker-registry-test"


@ -0,0 +1,18 @@
#!/usr/bin/env bash
set -x
set -e
PROJECT_PATH=$1
# Build the docker project
cd /data/$PROJECT_PATH
sg docker -c "docker build -q -rm -t docker ."
if [ "$DOCKER_RELEASE" == "1" ]; then
# Do nightly release
echo sg docker -c "docker run -rm -privileged -v /run:/var/socket -e AWS_S3_BUCKET=$AWS_S3_BUCKET -e AWS_ACCESS_KEY= -e AWS_SECRET_KEY= -e GPG_PASSPHRASE= docker hack/release.sh"
set +x
sg docker -c "docker run -rm -privileged -v /run:/var/socket -e AWS_S3_BUCKET=$AWS_S3_BUCKET -e AWS_ACCESS_KEY=$AWS_ACCESS_KEY -e AWS_SECRET_KEY=$AWS_SECRET_KEY -e GPG_PASSPHRASE=$GPG_PASSPHRASE docker hack/release.sh"
else
# Run the tests
sg docker -c "docker run -rm -privileged -v /home/docker-ci/coverage/docker:/data docker ./hack/infrastructure/docker-ci/docker-coverage/gocoverage.sh"
fi


@ -0,0 +1,40 @@
#!/usr/bin/env bash
# Download, build and run a docker project's tests
# Environment variables: DEPLOYMENT
cat $0
set -e
set -x
PROJECT=$1
COMMIT=${2-HEAD}
REPO=${3-https://github.com/dotcloud/$PROJECT}
BRANCH=${4-master}
REPO_PROJ="https://github.com/docker-test/$PROJECT"
if [ "$DEPLOYMENT" == "production" ]; then
REPO_PROJ="https://github.com/dotcloud/$PROJECT"
fi
set +x
# Generate a random string of $1 characters
function random {
cat /dev/urandom | tr -cd 'a-f0-9' | head -c $1
}
PROJECT_PATH="$PROJECT-tmp-$(random 12)"
# Set docker-test git user
set -x
git config --global user.email "docker-test@docker.io"
git config --global user.name "docker-test"
# Fetch project
git clone -q $REPO_PROJ -b master /data/$PROJECT_PATH
cd /data/$PROJECT_PATH
echo "Git commit: $(git rev-parse HEAD)"
git fetch -q $REPO $BRANCH
git merge --no-edit $COMMIT
# Build the project dockertest
/testbuilder/$PROJECT.sh $PROJECT_PATH
rm -rf /data/$PROJECT_PATH


@ -0,0 +1,47 @@
#!/usr/bin/env python
import os,sys,json
from datetime import datetime
from filecmp import cmp
from subprocess import check_call
from boto.s3.key import Key
from boto.s3.connection import S3Connection
def ENV(x):
    '''Promote an environment variable for global use returning its value'''
    retval = os.environ.get(x, '')
    globals()[x] = retval
    return retval
ROOT_PATH = '/data/backup/docker-ci'
TODAY = str(datetime.today())[:10]
BACKUP_FILE = '{}/docker-ci_{}.tgz'.format(ROOT_PATH, TODAY)
BACKUP_LINK = '{}/docker-ci.tgz'.format(ROOT_PATH)
ENV('BACKUP_BUCKET')
ENV('BACKUP_AWS_ID')
ENV('BACKUP_AWS_SECRET')
'''Create full master buildbot backup, avoiding duplicates'''
# Ensure backup path exist
if not os.path.exists(ROOT_PATH):
    os.makedirs(ROOT_PATH)
# Make actual backups
check_call('/bin/tar czf {} -C /data --exclude=backup --exclude=buildbot/slave'
' . 1>/dev/null 2>&1'.format(BACKUP_FILE),shell=True)
# remove previous dump if it is the same as the latest
if (os.path.exists(BACKUP_LINK) and cmp(BACKUP_FILE, BACKUP_LINK) and
        os.path._resolve_link(BACKUP_LINK) != BACKUP_FILE):
    os.unlink(os.path._resolve_link(BACKUP_LINK))
# Recreate backup link pointing to latest backup
try:
    os.unlink(BACKUP_LINK)
except:
    pass
os.symlink(BACKUP_FILE, BACKUP_LINK)
# Make backup on S3
bucket = S3Connection(BACKUP_AWS_ID,BACKUP_AWS_SECRET).get_bucket(BACKUP_BUCKET)
k = Key(bucket)
k.key = BACKUP_FILE
k.set_contents_from_filename(BACKUP_FILE)
bucket.copy_key(os.path.basename(BACKUP_LINK),BACKUP_BUCKET,BACKUP_FILE[1:])
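This script is what the nightly 'backup' builder invokes as /docker-ci/tool/backup.py
(see master.cfg above). It can also be run by hand inside the container, provided the
three credentials it promotes via ENV() are set; a rough sketch with placeholder values:

BACKUP_BUCKET=backup-ci BACKUP_AWS_ID=<aws_id> BACKUP_AWS_SECRET=<aws_secret> \
    /docker-ci/tool/backup.py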