Overview

Starphleet is a toolkit for turning virtual or physical machine infrastructure into a continuous deployment stack, running multiple Git-backed services on one or more nodes via Linux containers.

Starphleet borrows heavily from the concepts of the Twelve-Factor App, and uses an approach that avoids many of the problems inherent in existing autodeployment solutions:

  • Conventional virtualization, with multiple operating systems running on shared physical hardware, wastes resources, specifically RAM and CPU. This costs real money.
  • Autodeploy PaaS carries the same vendor lock-in risks as old proprietary software.
  • Continuous deployment is almost always a custom scripting exercise.
  • Multiple machine / clustered deployment is extra work.
  • Making many small services is more work than making monolithic services.
  • Deployment systems all seem to operate at the system level, not the service level.
  • Every available autodeploy system requires that you set up servers to deploy your servers, which themselves aren't autodeployed.

Features

  • Linux CGROUP and container isolation
  • Continuous Deployment
  • Centralized Configuration
  • Easily combine many micro-services into a large application

Installation

EC2 in AWS

Starphleet includes Amazon Web Services (AWS) support. To initialize your phleet, you need to have an AWS account.

  1. Provision an Ubuntu EC2 Instance (Currently 14.04)
  2. Login to your provisioned instance
  3. Run the following command:
bash -c "$(curl -s https://raw.githubusercontent.com/wballard/starphleet/master/webinstall)" 

VMWare Fusion (Mac)

To install on a Mac you will need supporting software installed (VMware Fusion, Vagrant, and Git).

Once the software is installed you can install Starphleet into VMware with the following steps:

  1. Clone the GitHub repo locally
  2. Change into the local directory
  3. Run the following command:
${PWD}/vmware 

Definitions

Phleet

A phleet is a grouping of starphleet ships. The entire phleet points to a single headquarters. These ships may be geographically distributed to reduce latency or maintained in a single region. A phleet is comprised of machines that are intended to be identical for easy scaling.

Ship

A ship is an instance running a base install of Ubuntu 14.04, such as an EC2 instance. The Starphleet installation will handle installing all needed software on the server.

Headquarters

The headquarters is a special Git repo that is the brains of your phleet. The headquarters stores all your environment and configuration options. This repo will also contain all your credentials and confidential information, so it is important to keep access to your headquarters restricted. An example and template headquarters can be found here.

Orders

The orders file is a special file inside a subdirectory off the root directory of the headquarters.

Each subdirectory off the root of the headquarters is checked for an orders file. Starphleet will look inside the orders file for deploy commands and environment variables for the service. There is a 1-1 mapping between the subdirectory that contains an orders file and the URL of the service deployed.
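
For example, given a hypothetical subdirectory named echo, the mapping looks like this:

/hq/echo/orders   ->   served at the /echo URL on every ship in the phleet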

Service

A service is an app that responds to web requests. The service should listen for web requests on the $PORT set in the Environment. The service lives within an LXC Container and will be handed requests by the NGINX front end on the ship.

Remote

Remotes are GitHub repos that are not intended to be deployed. These special repos are instead intended to be resources for applications. By specifying a remote inside a service endpoint directory, the GitHub repo will be automatically checked out and synced to the /var/data/$service_name directory.

Headquarters

The Starphleet Headquarters is a Git repo with all the master configuration for the entire phleet of servers. It will contain the deploy commands for every service, keys for SSH console access to the ships, SSL certificates for the domain(s) you assign to the ships, authentication configuration for each service (including how the machines reach LDAP servers), and any other major configuration item.

Aside from special directories, each directory in the headquarters is assumed to be a remote or service.

Reserved Files and Dirs

The headquarters has the following special directories and files:

Name Type Description
/hq/.starphleet file The .starphleet file is in the root of the headquarters and handles global environment configuration for services and starphleet. This is where you'd override default starphleet configs for the entire phleet. This is also where you'd set global environment settings for every service in an entire phleet.
/hq/authorized_keys dir The authorized_keys folder should contain all the OpenSSH public keys for users who want console access to the ships. The users will all login as "admiral" using their associated private key.
/hq/beta_groups dir The beta_groups directory contains a list of files containing usernames inside the file. The names of the files are used in the orders to set up beta groups.
/hq/ldap_servers dir The ldap_servers folder contains a list of files for all the LDAP configurations used by the security system to secure services. The names of the files are used inside the orders files to configure the security for an endpoint.
/hq/overlay dir The overlay directory contains files intended to be dropped on to the file system of each ship. There is a direct relationship between the file structure of the overlay directory and the file structure of the ship.
/hq/ssl dir The ssl directory contains any of the SSL keys associated with the domains pointing to the ships in the phleet.
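
Putting the reserved entries together, a headquarters checkout might look roughly like this (echo is a hypothetical service endpoint):

/hq
  .starphleet        # global environment for the whole phleet
  authorized_keys/   # OpenSSH public keys for admiral console access
  beta_groups/       # files of usernames used to define beta groups
  ldap_servers/      # named LDAP configurations
  overlay/           # files dropped onto each ship's filesystem
  ssl/               # SSL keys for your domains
  echo/              # a service endpoint
    orders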

Service Files

To create a service endpoint you create a new directory off the root of the headquarters. Inside the directory you create an orders file which contains configuration options and deploy commands for the service. This file will tell Starphleet which Git repo to deploy at this endpoint. The following is what the directory structure will look like along with a few of the files that might be associated with your service:

Name Type Description
/hq/$service dir Aside from the reserved files and directories in the headquarters each directory in your headquarters is considered a service endpoint. These directories will correspond to a service that gets exposed via NGINX and inside the directories you will place files that contain all the environment and details for the service.
/hq/service/orders file The orders file contains all the environment variables and Starphleet commands to get a service or remote deployed.
/hq/service/remote file There are times you may want to get data onto the server where the data is intended to be consumed by services. These are called "remotes". Just like a service, a remote is tracked for changes and automatically updated when the Git repo changes. The data is stored inside the container in "/var/data/$service", where $service corresponds to the service directory the remote file is located in.
/hq/service/on_containerize file The on_containerize script must be checked into the headquarters as executable to be executed by Starphleet. When Starphleet downloads a service it will run through the built-in Heroku buildpacks and try to determine the type of application. Before this process occurs, Starphleet will run the on_containerize script as root. This script can run commands that might help setup the container before the buildpack begins.
/hq/service/after_containerize file The after_containerize script must be checked into the headquarters as executable to be executed by Starphleet. When Starphleet downloads a service it will run through the built-in Heroku buildpacks and try to determine the type of application. After this process occurs, Starphleet will run the after_containerize script as root. This script can run commands that might help setup the container after the buildpack finishes.
/hq/service/$cron_file file Cron jobs must be checked into the headquarters as executable to be executed by Starphleet. See cron jobs for more information.
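
As a sketch, an on_containerize script for a service that needs an extra system package before the buildpack runs might look like the following (the package is purely illustrative); remember the file must be committed as executable:

#!/usr/bin/env bash
# Runs as root inside the new container before buildpack detection begins.
set -e

# Hypothetical example: install a system library the buildpack would not provide.
apt-get update
apt-get install -y imagemagick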

Orders Commands

There are special Starphleet commands expected to be run in the orders file. Commands are:

Command Description
autodeploy This command is passed a single argument which is a link to a GitHub repo. If the repo is private, the link must be an SSH link and Starphleet must have an SSH key installed with access to the repo. This command will build a container, run your application, and if successfully completed, mount the application at the path corresponding to the location of the orders file.
expose Some services that run in a container need special ports exposed outside of the machine. Expose will open a hole in the local machine's firewall and forward traffic destined for the exposed port directly into the container.
unpublished Starphleet will create a container at the location corresponding with the path to the orders file but will not expose the service for web traffic.
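
A minimal orders file might look like the following. The repo URL and port here are hypothetical; the only requirement is that your service listens on $PORT:

# /hq/echo/orders -- a hypothetical example
export PORT=3000
autodeploy https://github.com/example/echo-service.git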

Orders Variables

All variables need to be exported to be available to your application. For instance:

export FOO="BAR" 

Some variables are exposed to your application by Starphleet but can be overridden.

Variable Description
PORT The PORT your service will accept requests on. This is the port NGINX will pass traffic through to your service. This port is not exposed outside of the ship.
$ANY_OTHER Any other environment variable set in the orders file will be passed to the application

Overrides

Most of the configuration in Starphleet is handled by setting environment variables. Starphleet exposes several environment variables to the applications that run in the containers. You can set environment variables for a container in a few ways:

  • Globally Across the Phleet
  • Per-Ship
  • Per-Service

It is also important to note that the above list also represents the order in which the environment is overridden.

Globally

Environment Variables set globally will be available to all services on all ships in the phleet. You can set these global environment files in the HQ in the .starphleet reserved file.

Per-Ship

In some cases it might be necessary to set environment variables that are specific to each ship in a phleet. These variables might be unique to a ship like its location. These environment variables get set by adding files to a special directory on the ship. Any file located in the /etc/starphleet.d directory will be sourced and available to all environments in Starphleet.

Per-Service

The orders file used to deploy a container also acts as the final location to set environment variables and override any defaults set upstream from your service. Common settings in the orders file would include locations to resources, DB credentials, and the authentication mechanism you want Starphleet to use for your service.
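
As a sketch of how the three levels combine (the variable name and values here are illustrative), the value set closest to the service wins:

# /hq/.starphleet -- global default for every service on every ship
export LOG_LEVEL="warn"

# /etc/starphleet.d/local -- set on one particular ship
export LOG_LEVEL="info"

# /hq/echo/orders -- per-service override; this value wins for the echo service
export LOG_LEVEL="debug"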

Commands

The following commands are available from the console on each ship:

Command Description
starphleet-attach Pass the name of the service and select which container. Once inside the container you will be running as the ubuntu user, which is the same user your service runs as.
starphleet-dns-offline If you make use of AWS Route 53 and point a health check at your ships, you will want to reference the path /starphleet/nginx/status. The starphleet-dns-offline command will make the path unavailable, which will softly fail your ship in Route 53. If your ship is part of a record set in Route 53 this will take your machine out of the rotation or cause a failover to other DNS records.
starphleet-dns-online If you make use of AWS Route 53 and point a health check at your ships, you will want to reference the path /starphleet/nginx/status. The starphleet-dns-online command will make the path available, which will mark your ship healthy in Route 53. If your ship is part of a record set in Route 53 this will bring your machine back into the DNS rotation or heal from a failover.
starphleet-git This command can be substituted for git but will make use of the keys provided to Starphleet when connecting to your git repo.
starphleet-headquarters Without an argument this command will display the headquarters used by Starphleet. Provided an argument, this command will change the headquarters used by Starphleet.
starphleet-hup-nginx It is not safe to restart NGINX manually. The services responsible for configuring NGINX might be midway through their process and cause NGINX to fail a restart. This command allows you to signal to the NGINX configuration engine built into Starphleet to attempt to reconfigure NGINX and reload the configs.
starphleet-restart-nginx Much like the starphleet-hup-nginx command, this utility triggers the NGINX configuration engine built into Starphleet to attempt to reconfigure NGINX. When this process completes NGINX is restarted instead of reloading. Any services utilizing websockets will lose their connection. Restarting NGINX can only be done manually through this command. Starphleet never restarts NGINX automatically.
starphleet-orphan-reaper When orders vanish from the headquarters, Starphleet will not automatically kill containers that remain running. These containers are orphaned. There are not active orders for the containers but they still exist. starphleet-orphan-reaper will purge any containers without matching orders.
starphleet-redeploy This command accepts the name of a service. It will completely destroy any existing containers for the service, even if the service is currently deployed, and deploy it again.
starphleet-retry-deploy This command will create fake shas for a service and attempt a deploy of the service. What makes this command unique from starphleet-redeploy is that the current and active container will not be destroyed unless the newer container successfully deploys. If the starphleet-retry-deploy container has an issue, the active container will remain online.
starphleet-status Provides a quick overview of the status of a container.
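
For example, with a hypothetical service deployed at /echo (exact arguments may vary by version):

# open a shell inside the echo container, running as the ubuntu user
starphleet-attach echo

# destroy the echo containers and deploy the service again from scratch
starphleet-redeploy echo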

Buildpacks

How Buildpacks Work

Buildpacks autodetect and provision services in containers for you. We would like to give a huge thanks to Heroku for having open buildpacks, and to the open source community for making and extending them. The trick that makes the Starphleet orders file so simple is the use of buildpacks and platform package managers to install dynamic, service specific code, such as rubygems or npm and associated dependencies, that may vary with each push of your service. Note that Starphleet will only deploy one buildpack per Linux container - for services which are written in multiple languages, custom buildpacks may be required.

Starphleet currently includes support for Ruby, Python, NodeJS, and NGINX static buildpacks.

Default Buildpacks

Buildpack Description
Ruby This will run bundle install and make use of your Procfile.
Python TODO: Fill in
NodeJS TODO: Fill In
NGINX Detect an index.html file and serve static content.

Authentication

Starphleet supports four primary mechanisms for authentication.

The authentication mechanism for Starphleet controls how each service is protected by authentication. The default authentication mechanism is htpasswd. The default mechanism can be overridden using one of the override mechanisms and updating the configuration variables outlined below.

Security Settings

Variable Description
USER_IDENTITY_HEADER This is the name of the header used by NGINX to store the "user" authenticated to the service. The user will be set based on the SECURITY_MODE variable and will be dependent on the security mechanism used to authenticate the service. This header will be passed with the web request sent to the apps running in the containers. The apps can validate which user is making the request without building their own authentication mechanism.
USER_IDENTITY_COOKIE The name of the cookie assigned the username of an authenticated user. Works the same as USER_IDENTITY_HEADER
SECURITY_MODE This setting determines what authentication "mode" a service is deployed in. This setting is intended to be overridden in the headquarters or in the orders file. This mode can be one of four settings: public, ldap, jwt, or htpasswd, each described below.


Public Configuration

Assigning the SECURITY_MODE setting to public requires no additional configuration. Starphleet will serve the content behind a public endpoint openly.

LDAP Configuration

This security mode requires the following environment variable(s):

Variable Description
LDAP_SERVER The name assigned here corresponds with the name of the file containing the LDAP configuration you want to use to authenticate the service.

Assigning the SECURITY_MODE setting to ldap requires additional configuration in the headquarters. The following is required:

  • Create an ldap_servers folder in your headquarters
  • Create a file containing your LDAP settings. The name of the file will be used by your services to reference this configuration. The file should look like this:
    export LDAP_URL='ldap://guardian-gc.glgresearch.com:3268/dc=glgroup,dc=com?sAMAccountName?sub?(objectCategory=person)(objectClass=User)'
    export LDAP_USER='domain\\sampleServiceAccount'
    export LDAP_PASSWORD='****' 
  • In your orders file, set SECURITY_MODE to ldap and set LDAP_SERVER to the name of the file you just created, as sketched below
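
A sketch of the resulting orders entries, assuming the LDAP settings above were saved as ldap_servers/corporate:

export SECURITY_MODE="ldap"
export LDAP_SERVER="corporate"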

Your orders file will now have ldap enabled and point to the above configuration on which LDAP server to use for authentication. At this point, your service should be authenticating against LDAP.

JWT Configuration

JWT Authentication works by looking for a specific token when a request is made. If the token is invalid the request silently redirects to a service running on the same ship. The service is responsible for authenticating the user and providing a token. The token can then be used to retry the original request.

Details about utilizing JWT authentication can be found here.

HTPASSWD Configuration

This security mode requires the following environment variable(s):

Variable Description
HTPASSWD The appropriate HTPASSWD string associated with the htpasswd SECURITY_MODE.

Assigning the SECURITY_MODE setting to htpasswd requires additional environment settings in the headquarters. You can set this globally or per-service. To get the appropriate string you can use the Linux htpasswd command like the following:

$ htpasswd -n -b changeusername changepasswd
changeusername:$apr1$O7LcRpBk$Pv17p..kUwbcw5rxM4AEr0 

Copy the resulting string from above into a variable named HTPASSWD.
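
For example, in an orders file (using the sample hash generated above; single quotes keep the $ characters from being expanded):

export SECURITY_MODE="htpasswd"
export HTPASSWD='changeusername:$apr1$O7LcRpBk$Pv17p..kUwbcw5rxM4AEr0'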

Access Control Lists

If you enable ldap or htpasswd authentication you can also limit a service to certain users. To enable per-user access to a service add a file to your service endpoint directory with the extension .acl. The file should contain a list of users separated by newlines.

Example (example.acl):

jdoe
jjdoe
jsmith

The above service endpoint would only allow three users access - all others will be prompted to log in.

JWT Details

JWT Authentication works by looking for a specific JWT token when a request is received by Starphleet. If the JWT token is invalid the request silently redirects to a service running on the same ship. The service is responsible for authenticating the user and providing a token. The token can then be used to retry the original request.

There are three ways to pass a JWT token with your request:

  • Url Param
  • Cookie
  • Header

When Starphleet receives a valid JWT Token as part of a request to a JWT authenticated service the jwt token is used to create a global cookie. The cookie is used to maintain the "session" of the user. The cookie name can be configured by setting the JWT_COOKIE_NAME configuration variable. It is important that the JWT_AUTH_SITE and service protected by JWT all use the same JWT_COOKIE_NAME.

JWT Environment Settings

This security mode requires the following environment variable(s):

Variable Description
JWT_SECRET This setting is required. This value is used as the secret to sign and verify JWT tokens
JWT_AUTH_SITE This setting is required. The path to a login service that signs JWT tokens.
JWT_COOKIE_NAME This setting is required. The cookie name intended to be used to store the JWT Session Token.
JWT_MAX_TOKEN_AGE_IN_SECONDS This setting is required. This setting is checked against the iat claim in the token. Set globally, you can ensure a token older than your maximum threshold is flagged as invalid.
JWT_EXPIRATION_IN_SECONDS This setting is required. When Starphleet creates a JWT Token during a valid request the session will last as long as this setting. The session is extended each request by this setting.
JWT_COOKIE_DOMAIN This setting is optional.
  • if not provided, the JWT cookie will only be available to the full domain of the original target destination (e.g., myApp.mysite.com)
  • if it is provided, then the JWT cookie will be scoped accordingly.
  • Be sure to understand how cookie domains are applied. For example, this feature can allow implementers to grant access to peer sub-domains of the original target destination (e.g., JWT_COOKIE_DOMAIN=mysite.com will allow both myApp.mysite.com and someOtherHost.mysite.com to access the JWT cookie).
JWT_ACCESS_FLAGS This setting is optional.

  • Used in conjunction with the af claim bitmask in the JWT token
  • Any matching bit between the token's mask and this setting permits entry
  • If the af claim exists and no flags match, the request is rejected with a 403

More details can be found in the JWT Access Flags section.
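
A sketch of a JWT-protected orders file follows; the secret, auth site path, and timing values are illustrative assumptions, not defaults:

export SECURITY_MODE="jwt"
export JWT_SECRET="change-this-shared-secret"
export JWT_AUTH_SITE="/login"
export JWT_COOKIE_NAME="starphleet_jwt"
export JWT_MAX_TOKEN_AGE_IN_SECONDS=86400
export JWT_EXPIRATION_IN_SECONDS=3600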

JWT Url Parameter

One of the ways to pass a JWT token to Starphleet is through a URL Parameter. As an example:

https://starphleet.example.com/theservice/?jwt=$token

Passing a valid JWT token via the URL will take precedence over a token passed as the user's session in a cookie. If the JWT token passed as a URL parameter is valid it will replace the user's session JWT token.

When the token is passed via a URL parameter the request will be redirected to the same incoming URL but with the JWT token stripped from the URL. Sometimes this behavior is not desired. The caller can disable the redirect behavior by passing ?disablejwtredirect=true in the URL in addition to the JWT token. Including the disable parameter will still accept the URL JWT token but will not redirect and strip the token out of the URL.
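
For example, to pass a token without triggering the redirect-and-strip behavior:

https://starphleet.example.com/theservice/?jwt=$token&disablejwtredirect=true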

JWT Authorization Header

A client side app can pass a JWT token as part of a request through the Authorization header using the Bearer scheme. When Starphleet receives a request using this method the response will be different if the JWT token is invalid. Starphleet will respond with a status of 401 instead of a silent redirect to the authorization app. Passing a JWT token utilizing this method is most appropriate for background API calls. The calling app can handle the error response appropriately.

curl -H "Authorization: Bearer $token" https://starphleet.example.com

Bitmasks

Bitmasks can be used to store boolean type flags in an extremely small amount of space. This makes them an easy candidate for JWT Tokens where space is limited and each character can bloat the token a lot. Additionally, bitmasks are efficient for a computer to compare and manipulate.

This documentation does not intend to explain the entirety of concepts with bitmasks. Below is a brief description of how to quickly make use of them.

Bitmasks To Numbers

Probably the most confusing thing about bitmasks is that bitmasks are represented as integers. It is important to understand how to convert integers into a series of bits if you want to use them as flags. Any single number can represent a whole list of flags.

In order to use bitmasks you must first understand the following table:

Power   2^5  2^4  2^3  2^2  2^1  2^0
Decimal  32   16    8    4    2    1

Using the above table you can see that the rightmost column starts with 1 and each column to the left increases by a power of 2. To represent a number in "bits" you put a "1" in the columns that add up to the number.

Using the table above, let's convert the number 6 to a bitmask:

Power   2^5  2^4  2^3  2^2  2^1  2^0
Decimal  32   16    8    4    2    1
Bitmask   0    0    0    1    1    0

We have a bitmask that looks like 000110. Using the chart we can see that in Decimal form this represents 4 + 2 = 6.

Using Bitmasks as Flags

To understand bitmasks as flags, let's first propose a scenario. Let's assume that we have different types of users:

  • Plumber
  • Engineer
  • Mechanic

When our auth app creates the JWT token it will determine what kind of user is logging in. Some users might be both Mechanics AND Engineers. Some users may only fit one role. Now let's assume that we have a number of services running on Starphleet. Some of our services can only be accessed by Plumbers. Some of our services can be accessed by both Plumbers and Mechanics. Some can be accessed by all of these roles.

The first thing we must do is assign bit columns to our roles.

Assigning Meanings To Columns

A bitmask is nothing more than a series of 1's and 0's. An example of a bitmask might look something like 0101. In this example we have 4 bits. Each bit can be 1 for true or 0 for false.

To illustrate how to use a bitmask we are going to refer to our example list of roles above and assign arbitrary meanings to each one of these bits. We have four columns. Starting from the right side:

  • Assign the first column to Plumber (0001)
  • Assign the second column to Engineer (0010)
  • Assign the third column to Mechanic (0100)
  • Assign the fourth column to Future Use (1000)

Given the above example, if we wanted to set someone as a Plumber and a Mechanic then the bitmask would look like 0101. If my JWT authentication app assigned a bitmask of 0111 as the role of my user, I would know that the user was a Plumber, Engineer, and Mechanic.

If I wanted to only allow a Plumber and Engineer to use a service then I would set the environment variable JWT_ACCESS_FLAGS in my orders file to 0011.

Bitmasks: Putting It All Together

The final part of understanding bitmasks is understanding that we can't actually assign the bitmask in its pure form. In order to assign bitmasks we convert them to Decimal.

In our example above if we wanted to flag a user as a Plumber and Mechanic we would need to assign them a bitmask of 5 (0101).

If we want to restrict our service to only Engineers, we would set the JWT_ACCESS_FLAGS environment variable to 2 (0010).
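
You can verify these conversions from a bash shell, which accepts a 2# prefix for binary literals:

echo $(( 2#0101 ))   # prints 5 (Plumber + Mechanic)
echo $(( 2#0010 ))   # prints 2 (Engineer only)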

JWT Access Flags

To understand how this works it is imperative that you fully understand how bitmasks work and how the bitwise-and (&) operator works. If these concepts are foreign to you read the bitmask section. Familiarize yourself with the basic principles behind bitmasks and then continue reading.

Starphleet supports an optional feature to granularly control access to services based on FLAGS. This feature requires a configuration at the service level. Additionally, this feature must be implemented in the authentication application that generates your JWT tokens.

To make use of this feature you will set a bitmask in your orders using the Decimal representation. The variable is JWT_ACCESS_FLAGS.

The authentication app must also provide a claim named af.

Example: Access Granted

  • User jdoe logs in and is assigned af ("access flags") of:
    • Integer: 17 (bitmask: 010001)
  • In the orders file for a service running at /app1 the environment variable JWT_ACCESS_FLAGS is set like:
    • Integer: 24 (bitmask: 011000)
  • The flag at bit value 16 (bitmask: 010000) matches:
    • 17 (bitmask: 010001)
    • 24 (bitmask: 011000)
    • ------------------------------ &
    • 16 (bitmask: 010000)
  • User is granted access

Example: Access Denied

  • User jdoe logs in and is assigned af ("access flags") of:
    • Integer: 56 (bitmask: 111000)
  • In the orders file for a service running at /app1 the environment variable JWT_ACCESS_FLAGS is set like:
    • Integer: 03 (bitmask: 000011)
  • No flags match:
    • 56 (bitmask: 111000)
    • 03 (bitmask: 000011)
    • ------------------------------ &
    • 0 (bitmask: 000000)
  • User is denied access

You do not need to understand every part of bitwise operators to make use of flags. For an explanation of how bitmasks work as they pertain to Starphleet, you can read more details in the bitmasks section.
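
The bitwise-and check from the examples above can be reproduced in a bash shell:

# Access granted: 17 & 24 = 16 (non-zero, at least one flag matches)
echo $(( 2#010001 & 2#011000 ))   # prints 16

# Access denied: 56 & 3 = 0 (no flags match)
echo $(( 2#111000 & 2#000011 ))   # prints 0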

JWT Authentication App

The JWT Authentication app is responsible for assigning the profile and properties for a user into the globally used JWT Token. Before the authentication app performs this function it should validate the user in whatever mechanism is appropriate. The following are special considerations for the service:

  • Requests to the app arrive via silent redirects. It is best to forbid direct requests and verify that each request is a redirect
  • The app must be using the public security mode
  • The app must provide the login form for all paths and http methods.
  • The app must generate the jwt token using the same JWT_SECRET configured for Starphleet.

Healthchecks

The orders for a service can supply a $HEALTHCHECK like:

export HEALTHCHECK='/'

Upon deployment of a service update, Starphleet will issue a GET request to http://{container_ip}:{PORT}/{HEALTHCHECK}, and will expect an HTTP 200 response within 60 seconds. The {PORT} in the preceding URL will have the value specified in your orders.

If you fail the check, the service doesn't deploy.

Cron Jobs

Starphleet lets you specify #@ directives in shell scripts in order to schedule jobs in a container. These run within the containers alongside your services, so the most common thing to do is curl yourself:

#!/usr/bin/env bash
#@ * * * * 1

#This is a simple sh-at scheduled job example, it just hits the local
#service -- which is on the container itself and so isn't at /echo
curl http://localhost/on_container

#and you can always hit the ship, in which case you need to use the service
#url /echo
curl http://localship/echo/on_ship 

The #@ directive is just a cron scheduling expression captured inside the script itself.

Development Mode

When Starphleet is installed on your local machine through Vagrant the behavior of Starphleet changes in a way that facilitates local development. Starphleet becomes your automated build-and-test environment. Starphleet will manage the following tasks for you:

  • Checking out all Git repos and remote files associated with your headquarters
  • Mapping the above mentioned repos to your machine
  • Automated Container Deployment on file changes (saves) (optional)

Utilizing Starphleet as your build-and-test environment has the benefit of simplifying your workflow. This also allows you to test your code changes against a real Starphleet environment that mimics your production systems.

Installation

Installing the Devmode version of Starphleet doesn't require additional steps. Starphleet will automatically detect it has not been loaded on an Amazon instance and enable Devmode. To get Starphleet installed on your machine you can find installation instructions here.

Configuration

There are several ways you can configure the environment for a local Starphleet deployment:

  • Export your variables manually
  • Create an environment file located at ${HOME}/.starphleet and export the required variables.
  • Utilize web installation scripts (Examples can be found here)
  • Run the appropriate deployment script (e.g., vmware) and answer the prompts

Required Installation Variables

At a minimum you will need an SSH key without a passphrase associated with your Git account for any private repositories associated with your headquarters. You will need the following environment variables:

export STARPHLEET_HEADQUARTERS="https://github.com/wballard/starphleet.headquarters.git"
export STARPHLEET_PRIVATE_KEY="${HOME}/.ssh/id_rsa"
export STARPHLEET_PUBLIC_KEY="${HOME}/.ssh/id_rsa.pub"

Devmode Vs. Production

Starphleet normally handles the deployment of all services after any change to the Git repos and/or orders environment. When running Starphleet locally in development mode the behavior of Starphleet alters a bit. These changes are:

  • Git repos associated with your orders are checked out to a different location
  • Git repos are monitored for file changes rather than git commit changes for triggering deployments (optional)
  • Starphleet 'always' tries to start a dead container.
  • Starphleet uses date stamps instead of git hashes for container names
  • Your working git directory is linked directly to containers

Unbind Dev Dir From Container

The default behavior of Development mode links your working development directory all the way into the virtual containers. This is ideal if your build system typically uses something like gulp watch. You can run gulp watch and your changes will be realized immediately all the way inside the service container.

In some instances this behavior may be unfavorable. A few examples may be:

  • Your service doesn't use a build system
  • The buildpack for your service runs a 'rebuild' against your development directory and changes many files
  • You experience stability issues with HGFS in VMware

In these instances you may wish to unbind your development directory from the container. By adding a setting to your orders file you can change the behavior of Starphleet to instead detect changes as you save them in your development directory and run a full re-deploy of your containers automatically. To trigger this behavior you can add the following to your 'orders' inside your headquarters.

export DEVMODE_UNBIND_GIT_DIR="true" 

Authentication

Some services depend on authentication. Starphleet automatically supports LDAP and htpasswd authentication. Services are written in such a way that they depend on HTTP headers that are provided by the authentication method. If you need to simulate authenticated services in development mode you can add the following variable to your orders file:

export DEVMODE_FORCE_AUTH="[username]" 

Disable Dev Mode

There may be instances where you want to force Starphleet to behave like a production environment locally on your machine. In those instances you can simply touch a file to disable Development Mode:

$ touch /var/starphleet/live 

Starting Devmode Instance

Starting your vagrant instance after it has been halted requires a few additional steps. Once your Linux machine has booted you need to restart the Starphleet service. Run the following commands to start the vagrant instance:

$ vagrant up
$ vagrant ssh
$ sudo restart starphleet 

Stopping Devmode Instance

After installing your local development environment you may want to stop the local instance of Starphleet. It is important to use Vagrant to manage this process. Specifically, you will want to run the following command:

$ vagrant halt