diff --git a/about.html b/about.html
index e495de8..6c7a0f1 100755
--- a/about.html
+++ b/about.html
@@ -6,7 +6,7 @@ print_blurb: "résumé"
 ---
-<h2>About</h2>
+<h1>About</h1>

 I am a software engineer who enjoys pushing the limits of web technology and applying them to
@@ -16,7 +16,7 @@ print_blurb: "résumé"
 improved.

-<h3>Skill Set</h3>
+<h2>Skill Set</h2>

 I am especially familiar with 'cloud' concepts, e-commerce, server administration, and web applications. When
@@ -89,7 +89,7 @@ print_blurb: "résumé"
-<h3>Experience</h3>
+<h2>Experience</h2>

Software Developer  ·  August '13 – present, contractor
@@ -251,7 +251,7 @@ print_blurb: "résumé"

-<h3>Open-Source Contributions</h3>
+<h2>Open-Source Contributions</h2>

  • logstash/logstash – small enhancements
•
@@ -262,7 +262,7 @@ print_blurb: "résumé"
-<h3>Education</h3>
+<h2>Education</h2>

  • Taylor University – Bachelor of Science (2005 - 2009) – Computer Science, Systems (Business Information Systems)
•
diff --git a/blog/_posts/2013-01-07-secure-git-repositories.md b/blog/_posts/2013-01-07-secure-git-repositories.md
index 3c43c0a..2144678 100644
--- a/blog/_posts/2013-01-07-secure-git-repositories.md
+++ b/blog/_posts/2013-01-07-secure-git-repositories.md
@@ -15,7 +15,7 @@ files through `openssl` for decryption and encryption. The result is `git`'s ind
 contents in base64. Soon I found [`shadowhand/git-encrypt`][3].
 
-### Initial Setup
+## Initial Setup
 
 First, I did a one-time install of `shadowhand/git-encrypt` on my machine:
@@ -51,7 +51,7 @@ Now I just have to be sure to securely keep the salt and pass elsewhere for the
 that, it's ready for me to use like any other `git` repository.
 
-### A Practical Bit
+## A Practical Bit
 
 Since I won't frequently be setting up this repository, it'd probably be best if I could keep a reminder about what I'll need to do. So I update `.gitattributes` to exclude itself and `README` from encryption:
@@ -87,7 +87,7 @@ $ git commit -m 'initial commit'
 {% endhighlight %}
 
-### Under the Hood
+## Under the Hood
 
 Originally I was a bit curious and wanted to verify that it's doing what I thought. So I created a simple test file:
@@ -128,7 +128,7 @@ Mon Jan 7 15:11:22 MST 2013
 {% endhighlight %}
 
-### Summary
+## Summary
 
 With `gitcrypt` I can work with a repository and enjoy extra security on top of the redundancy and version control that `git` provides. The only difference from my regular repos is I can't really view my files from [github.com][1] (with the
diff --git a/blog/_posts/2013-01-14-terminating-gearman-workers-in-php.md b/blog/_posts/2013-01-14-terminating-gearman-workers-in-php.md
index 641dd97..5cc8991 100644
--- a/blog/_posts/2013-01-14-terminating-gearman-workers-in-php.md
+++ b/blog/_posts/2013-01-14-terminating-gearman-workers-in-php.md
@@ -25,7 +25,7 @@ developed solution. So, I took an afternoon to figure things out, with the worki
 and some of the background below.
 
-### Graceful Termination
+## Graceful Termination
 
 For the first part, it was simply a matter of handling a `SIGTERM` signal with PHP's [pcntl module][3] and setting a termination flag. The main worker loop could then check the flag every time it finished a job and cleanly exit. The
@@ -65,7 +65,7 @@ $ kill -s TERM 25244
 {% endraw %}{% endhighlight %}
 
-### Remote Termination
+## Remote Termination
 
 Sometimes it's easier to remotely terminate workers when they need new code or configuration (and allowing a process manager to restart them). Since Gearman doesn't support sending a job to every single worker, an alternative is to have
@@ -94,7 +94,7 @@ $ php queue.php _worker_test1 terminate
 {% endraw %}{% endhighlight %}
 
-### Batch Remote Termination
+## Batch Remote Termination
 
 So now I can remotely terminate workers as needed. However, during deploys it's much more common to ask all the workers to restart. Using Gearman's [protocol][4] to find running workers I can distribute the termination job and then wait
@@ -134,7 +134,7 @@ $ php terminate.php
 {% endraw %}{% endhighlight %}
 
-### Summary
+## Summary
 
 The result is an extra bit of code, but it makes automating tasks, especially around deploys, much easier. This really just demonstrates one method of creating an internal workers API - termination is just one possibility.
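
For illustration, a minimal sketch of the termination-flag pattern this post describes might look like the following. This is a reconstruction, not the post's actual code: it assumes the `pcntl` and pecl `gearman` extensions, and the server address, function name, and job logic are all hypothetical.

{% highlight php %}
<?php
// A SIGTERM handler flips a flag; the loop re-checks the flag between jobs,
// so any in-progress job finishes before the worker exits cleanly.
declare(ticks=1);

$terminate = false;

pcntl_signal(SIGTERM, function () use (&$terminate) {
    $terminate = true;
});

$worker = new GearmanWorker();
$worker->addServer('127.0.0.1', 4730);
$worker->addFunction('example_job', function (GearmanJob $job) {
    return strrev($job->workload()); // placeholder job logic
});

// A timeout lets an idle worker wake up periodically and notice the flag.
$worker->setTimeout(5000);

while (!$terminate) {
    $worker->work(); // returns after each job (or timeout), re-checking the flag
}
{% endhighlight %}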
 Other more
diff --git a/blog/_posts/2013-01-21-opengrok-cli.md b/blog/_posts/2013-01-21-opengrok-cli.md
index 092c24e..857bb91 100644
--- a/blog/_posts/2013-01-21-opengrok-cli.md
+++ b/blog/_posts/2013-01-21-opengrok-cli.md
@@ -12,7 +12,7 @@ search results from the command line (particularly for automated tasks). Since I
 command to load and parse results using [symfony/console][3] and [xpath][4].
 
-### Usage
+## Usage
 
 It's straightforward to use, just provide the OpenGrok server, project to search, and the query. Mimicking grep, the output format should look familiar:
@@ -39,7 +39,7 @@ $ vim $(opengrok-cli --list refs:PHP_MODE_PROCESS_STDIN)
 {% endhighlight %}
 
-### Open Source
+## Open Source
 
 I published the code to [dpb587/opengrok-cli][5]. Check the `README`, but it's easy to get started:
diff --git a/blog/_posts/2013-01-28-scripting-endicia-to-purchase-postage.md b/blog/_posts/2013-01-28-scripting-endicia-to-purchase-postage.md
index 0db3fab..32355cb 100644
--- a/blog/_posts/2013-01-28-scripting-endicia-to-purchase-postage.md
+++ b/blog/_posts/2013-01-28-scripting-endicia-to-purchase-postage.md
@@ -11,7 +11,7 @@ it, but one annoyance had been to regularly open it up and add postage since it
 happen to forget, it ends up blocking things until we notice. I finally got around to scripting that, too.
 
-### Scripted
+## Scripted
 
 In real life, whenever the balance gets too low it throws up an alert and you need to click through a few menus, select a purchase amount, and confirm the selection before the application will continue. Using [System Events][2], it can all
@@ -26,7 +26,7 @@ With that step automated, it can be tied in with the `endiciatool` output -- whe
 automatically kick off the script to buy more postage.
 
-### Summary
+## Summary
 
 So now that's one less manual step everybody has to worry about, saving some time and hassle. If you happen to be new to [Endicia][3], you should check them out (and use the promotional code 599888). Their software has been a
diff --git a/blog/_posts/2013-02-08-automating-backups-to-the-cloud.md b/blog/_posts/2013-02-08-automating-backups-to-the-cloud.md
index 3416a8e..3005f2d 100644
--- a/blog/_posts/2013-02-08-automating-backups-to-the-cloud.md
+++ b/blog/_posts/2013-02-08-automating-backups-to-the-cloud.md
@@ -10,7 +10,7 @@ on maintaining data integrity, security, and availability. One of my current met
 secure storage and object versioning to ensure backup data can't undesirably be overwritten.
 
-### Encryption Keys
+## Encryption Keys
 
 For encryption and decryption I'm using asymmetric keys via [`gpg`][1]. This way, any server can generate and encrypt the data, but only administrators who have the private key could actually decrypt the data. Generating the
@@ -80,7 +80,7 @@ Command> quit
 {% endraw %}{% endhighlight %}
 
-### Amazon S3
+## Amazon S3
 
 In my case, I wanted to regularly send the encrypted backups offsite and [S3][2] seemed like a flexible, effective storage place. This involved a couple of steps:
@@ -131,7 +131,7 @@ policy via the [sample][3] policy builder for a particular backup type. My simpl
 {% endhighlight %}
 
-### All Together
+## All Together
 
 Putting everything together, a single command could be used to back up the database, compress, encrypt, and upload:
@@ -162,7 +162,7 @@ The only task remaining is creating a cleanup script using the S3 API to monitor
 delete them as they expire.
-### Summary
+## Summary
 
 While it has a bit of overhead to get things set up properly, using `gpg` makes secure backups trivial and S3 provides the flexible storage strategy to ensure data is safe.
diff --git a/blog/_posts/2013-02-19-using-facter-in-ant-scripts.md b/blog/_posts/2013-02-19-using-facter-in-ant-scripts.md
index 1ffc534..25cec72 100644
--- a/blog/_posts/2013-02-19-using-facter-in-ant-scripts.md
+++ b/blog/_posts/2013-02-19-using-facter-in-ant-scripts.md
@@ -9,7 +9,7 @@ After using [puppet][1] for a while I have become used to some of the facts that
 working with [ant][3] build scripts, I started wishing I didn't have to generate similar facts myself through various `exec` calls.
 
-### One Fact
+## One Fact
 
 Instead of fragile lookups like...
@@ -29,7 +29,7 @@ I can simplify it with...
 {% endhighlight %}
 
-### In Bulk
+## In Bulk
 
 Or I can load all facts with...
@@ -51,7 +51,7 @@ And reference a fact in my task...
 {% endhighlight %}
 
-### Summary
+## Summary
 
 So now it's much easier to reference environment information from property files (via interpolation), make targets more conditional, and, of course, within actual tasks.
diff --git a/blog/_posts/2013-03-01-a-generic-storage-interface.md b/blog/_posts/2013-03-01-a-generic-storage-interface.md
index d5177b7..4d92236 100644
--- a/blog/_posts/2013-03-01-a-generic-storage-interface.md
+++ b/blog/_posts/2013-03-01-a-generic-storage-interface.md
@@ -14,7 +14,7 @@ some extensions don't yet support custom wrappers for file access. An alternativ
 service-oriented approach to keep my application code independent from the storage configuration.
 
-### Interface
+## Interface
 
 At the core of my design is the asset storage interface, which looks something like:
@@ -37,7 +37,7 @@ The storage engine is responsible for generating a reusable token that can be us
 simply have it generate a UUID as the token, however tokens could have storage-specific meaning.
 
-### Sample Storage Engines
+## Sample Storage Engines
 
 I've used several base implementations:
@@ -63,7 +63,7 @@ And since `CachedStorageEngine` is just another implementation of `StorageEngine
 interchangeably within the application with performance being the only difference.
 
-### Application Usage
+## Application Usage
 
 Using dependency injection, each of the storage backends becomes an independent service, configured depending on the application requirements. The application then has no storage-specific calls like `copy`, `file_get_contents`, `fopen`,
@@ -95,7 +95,7 @@ Since `retrieve` will always return a [`SplFileInfo`][5] instance, it can be ref
 (as demonstrated by the `open` call in the example).
 
-### Complicating Things
+## Complicating Things
 
 The asset storage interface itself is fairly primitive, but it allows for some more complex configurations:
@@ -109,7 +109,7 @@ The asset storage interface itself is fairly primitive, but it allows for some m
 upload time, write it locally and create a job to upload it in the background)
 
-### Summary
+## Summary
 
 By abstracting storage logic outside of my application code, it makes my life much easier as a developer and as a systems administrator when trying to manage where files are located and any relocations, as necessary.
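
The interface code itself falls outside these hunks, so, as a hedged sketch only: based on the surrounding prose (`store` yielding a reusable token, `retrieve` always returning `SplFileInfo`), the contract might look roughly like this. The interface and method names are guesses, not the post's actual definitions.

{% highlight php %}
<?php
// Hypothetical reconstruction of the asset storage contract described above.
interface StorageEngine
{
    /**
     * Persist a local file and return a reusable token (e.g. a UUID).
     *
     * @return string
     */
    public function store(\SplFileInfo $file);

    /**
     * Exchange a token for a locally readable handle on the stored data,
     * regardless of which backend (filesystem, S3, cache) holds it.
     *
     * @return \SplFileInfo
     */
    public function retrieve($token);
}
{% endhighlight %}

Because application code would depend only on such an interface, a filesystem engine, an S3-backed engine, or a `CachedStorageEngine` wrapping another engine could be swapped via dependency injection without touching call sites.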
diff --git a/blog/_posts/2013-03-07-comparing-php-application-definitions.md b/blog/_posts/2013-03-07-comparing-php-application-definitions.md
index 10dcda5..6d57437 100644
--- a/blog/_posts/2013-03-07-comparing-php-application-definitions.md
+++ b/blog/_posts/2013-03-07-comparing-php-application-definitions.md
@@ -22,7 +22,7 @@ more detailed review...
 
 Screenshot: symfony/console example
 
-### Usage
+## Usage
 
 If I were upgrading my application with a [`symfony/Console`][1] dependency from `v2.0.22` to `v2.2.0`, I could generate the diff of definitions with:
@@ -42,7 +42,7 @@ Take a look at several other reports using the default stylesheet:
 
 * [`zendframework/zf2`][5] (`release-2.0.0` → `release-2.1.3`)
 
-### Behind the Scenes
+## Behind the Scenes
 
 The logic behind the command looks like:
@@ -134,7 +134,7 @@ And after the initial and final commit are compared, the resulting structured di
 {% endhighlight %}
 
-### Going Further
+## Going Further
 
 Being able to parse files and have their differences stored in a static, semi-agnostic format allows for some interesting usages:
@@ -159,7 +159,7 @@ And unlike some of the other tools I ran into, the static representation is not
 stylesheet to make it human-friendly. This makes the results potentially reusable for multiple different reports.
 
-### Summary
+## Summary
 
 I've published this work-in-progress code to [dpb587/diff-defn.php][13] in case you want to try it out with your own PHP repositories. It's certainly not a replacement of reading changelogs and understanding what upstream changes are being
diff --git a/blog/_posts/2013-04-27-new-website-for-the-loopy-ewe.md b/blog/_posts/2013-04-27-new-website-for-the-loopy-ewe.md
index c72fc3d..20de924 100644
--- a/blog/_posts/2013-04-27-new-website-for-the-loopy-ewe.md
+++ b/blog/_posts/2013-04-27-new-website-for-the-loopy-ewe.md
@@ -9,12 +9,12 @@ I've spent the past several months working on some website changes for [The Loop
 push many of those frontend changes out. I thought I'd briefly discuss some of those changes here.
 
-### Before and After
+## Before and After
 
 First off, it's fun to show before and after screenshots of many key areas...
 
-#### Home Page
+### Home Page
 
 Screenshot: before
 Screenshot: after
@@ -35,7 +35,7 @@ Instead of a simple, almost-non-existent footer on the old site, I took advantag
 information, social links, payment options, and numerous other credentials that customers can appreciate.
 
-#### Contact Us
+### Contact Us
 
 Screenshot: before
 Screenshot: after
@@ -44,7 +44,7 @@ Contact information is important for customers. In addition to the information n
 cleaner page with a new interactive map to help people visually realize where exactly the shop is located.
 
-#### Wonderful Customers
+### Wonderful Customers
 
 Screenshot: before
 Screenshot: after
@@ -53,7 +53,7 @@ It's always nice to be able to show feedback customers send in. The new site reo
 readable way, and on separate pages. It's also much simpler to submit a testimonial through the on-screen form.
 
-#### Shop
+### Shop
 
 Screenshot: before
 Screenshot: after
@@ -99,7 +99,7 @@ properly indexed and searched via [elasticsearch][2]. I'm looking forward to add
 the site in the future.
 
-#### Help
+### Help
 
 Screenshot: before
 Screenshot: after
@@ -109,13 +109,13 @@ new site breaks things down into different topics and adds creative pictures to
 a new inline form where customers can ask for help instead of bothering to open an email client and compose an email.
-### New Stuff
+## New Stuff
 
 Although I disabled a number of things for later release and chatter, it's always fun to include some completely new functionality...
 
-#### Local
+### Local
 
 Screenshot: web page
@@ -124,7 +124,7 @@ publicize some of the local activities that Fort Collins people would be interes
 customers see how we exist and work in real life to create more of a connection.
 
-#### About
+### About
 
 Screenshot: web page
@@ -132,7 +132,7 @@ Along with a local page, I also wanted a better page for showing our real world
 connected and understand both who and where they're purchasing from.
 
-#### Shop Attributes
+### Shop Attributes
 
 Screenshot: web page
@@ -141,7 +141,7 @@ way. If somebody is interested in "Fingering Weight" they can easily see all the
 they need more complicated searches, there's an Advanced Search link at the bottom of each page.
 
-#### Site Feedback
+### Site Feedback
 
 Screenshot: web page
@@ -150,7 +150,7 @@ feedback. Links at the footer of every page include information like what page t
 authenticated username information, and whatever notes they want to add.
 
-#### humans.txt
+### humans.txt
 
 Screenshot: web page
@@ -158,7 +158,7 @@ Whenever possible, I like discussing and linking to technical resources that I h
 created the `humans.txt` file to document many of the resources that have helped make the website possible.
 
-### Conclusion
+## Conclusion
 
 So there's the basic overview about some of the less-technical changes. I'm looking forward to several additional features to roll out over time and help keep things fresh over the next few months. Later blog posts can discuss some of
diff --git a/blog/_posts/2013-05-07-embeddable-and-context-aware-web-pages.md b/blog/_posts/2013-05-07-embeddable-and-context-aware-web-pages.md
index e688a83..3c8c3b8 100644
--- a/blog/_posts/2013-05-07-embeddable-and-context-aware-web-pages.md
+++ b/blog/_posts/2013-05-07-embeddable-and-context-aware-web-pages.md
@@ -21,7 +21,7 @@ the main results content is taking advantage of the request design I implemented
 
 * any page should be capable of being a self-contained subrequest.
 
-### Steps
+## Steps
 
 When a subrequest is self-contained, I call it a *subcontext*. These subcontext requests have an additional requirement of being publicly accessible. In the product search, the [results][3] page is publicly routed and all the pagination and
@@ -102,7 +102,7 @@ With those simple customizations I no longer have to worry about knowing what pa
 template subrequests. It also paves the way for some more fancy behavior...
 
-### Adding Some Magic
+## Adding Some Magic
 
 Since the subcontext pages are publicly accessible, it should be easy to let Ajax reload individual subcontexts without having to reload the whole page. To enable that, I went ahead and configured subcontext requests to always end up in a
@@ -139,7 +139,7 @@ Something easily processable with an Ajax request. And since the clicked anchor
 the new window URL location by using the [HTML5 History API][6].
 
-### Conclusion
+## Conclusion
 
 Once I implemented the code snippets for tying all the ideas together, it became much quicker and simpler for me to embed other dynamic controllers within my requests.
 So far it has been working out quite well and I no longer have to
diff --git a/blog/_posts/2013-05-13-structured-data-with-schema-org.md b/blog/_posts/2013-05-13-structured-data-with-schema-org.md
index 3b7cb89..9355bd5 100644
--- a/blog/_posts/2013-05-13-structured-data-with-schema-org.md
+++ b/blog/_posts/2013-05-13-structured-data-with-schema-org.md
@@ -11,7 +11,7 @@ standards and metadata so the content could be programmatically useful. I chose
 due to its fairly comprehensive data types and broad adoption by search engines.
 
-### Introduction
+## Introduction
 
 I think the importance of structured data is growing. Not only does it make things easier for search engines to consistently interpret content, it can also help encourage properly designed website architecture. For example, if I
@@ -29,7 +29,7 @@ of how robots would interpret data. For example, I could view the [home page][1]
 was a robot and browse it in a [formatted HTML][11] page where links are rewritten for followup.
 
-### Basic Pages
+## Basic Pages
 
 Even basic pages can provide some useful structured data. For example, the page describing the [Loopy Groupies][12] doesn't have complicated content, but it still uses the basic [`WebPage`][14] type to identify breadcrumbs, titles, main
@@ -57,7 +57,7 @@ Of course it's not limited to [`schema.org`][2] data types. The robot data also
 [raw JSON][13] structure.
 
-### Products
+## Products
 
 One of the most useful types in an e-commerce environment is [`SomeProducts`][3]. It lets robots see things like pricing, inventory, availability, company, model, and various product attributes. For example, here's what our
@@ -139,7 +139,7 @@ relationships that specific page (marked as a product) has with other product co
 graph.
 
-### Product Listings
+## Product Listings
 
 For the main product types, pages also support listings that reference the individual products. The main [Solid Series][17] listing has the following data:
@@ -197,7 +197,7 @@ For the main product types, pages also support listings that reference the individ
 {% endhighlight %}
 
-### Rationale
+## Rationale
 
 Nearly all pages on the new [website][1] have at least some structured data present, if only the breadcrumb data. All this markup isn't simply an academic exercise though. For example, [Ravelry][18] supports checking the pricing and
diff --git a/blog/_posts/2013-05-16-ti-debug-a-browser-debugger-for-server-code.md b/blog/_posts/2013-05-16-ti-debug-a-browser-debugger-for-server-code.md
index 098fe86..4d521ac 100644
--- a/blog/_posts/2013-05-16-ti-debug-a-browser-debugger-for-server-code.md
+++ b/blog/_posts/2013-05-16-ti-debug-a-browser-debugger-for-server-code.md
@@ -17,7 +17,7 @@ ago when [David][9] from [CityIndex][10] expressed interest in the project. I've
 in order to finish some of the features, update dependencies, and create a more stable project.
 
-### Functionality
+## Functionality
 
 If you're familiar with the WebKit developer tools (also found in [Google Chrome][11]), the interface should look extremely familiar. The core of `ti-debug` is written in [node.js][12] and when started up, it creates a simple web
@@ -54,7 +54,7 @@ communication can also use `ti-debug`.
 For example, Python scripts can currently
 
 Screenshot: breakpoint exploration
 
-### Workflow
+## Workflow
 
 One of the ways that `ti-debug` can be run is locally for a single developer, but in the case of DBGp, `ti-debug` can also act as a proxy to support multiple developers, or a combination of developers wanting to use both the browser-based
diff --git a/blog/_posts/2013-06-01-search-engine-based-on-structured-data.md b/blog/_posts/2013-06-01-search-engine-based-on-structured-data.md
index 23194ba..46ed822 100644
--- a/blog/_posts/2013-06-01-search-engine-based-on-structured-data.md
+++ b/blog/_posts/2013-06-01-search-engine-based-on-structured-data.md
@@ -13,7 +13,7 @@ I wanted to try to create a simple search engine for our needs which took advant
 existing open standards.
 
-### Introduction
+## Introduction
 
 In my mind, there are four basic processes when creating a search engine:
@@ -32,13 +32,13 @@ The next two processes are more what I want to focus on here:
 
 * **Maintenance** - keeping the documents updated when they are updated or removed.
 
-### Indexing
+## Indexing
 
 We were already using [elasticsearch][8], so I was hoping to use it for full-text searching as well. I decided to maintain two types in the search index.
 
-#### Discovered Documents (`resource`)
+### Discovered Documents (`resource`)
 
 The `resource` type has all our indexed URLs and a cache of their contents. Since we're not going to be searching it directly, it's more of a basic key-based storage based on the URL. The mapping looks something like:
@@ -95,7 +95,7 @@ By default, if an `Expires` response header isn't provided, I set the `date_expi
 future. The field is used to find stale documents later on.
 
-#### Parsed Documents (`result`)
+### Parsed Documents (`result`)
 
 The `result` type has all our indexed URLs which were parsed and found to be useful. The documents contain some structured fields which are generated by the parsing step. The mapping looks like:
@@ -212,7 +212,7 @@ For example, this parsed [product model][17] looks like:
 {% endhighlight %}
 
-#### Searching
+### Searching
 
 Once some documents are indexed, I can create simple searches with the [`ruflin/Elastica`][11] library:
@@ -258,7 +258,7 @@ $query->setHighlight(
 {% endhighlight %}
 
-### Maintenance
+## Maintenance
 
 A search engine is no good if it's using outdated or no-longer-existent information. To help keep content up to date, I take two approaches:
@@ -275,7 +275,7 @@ In either case, when a URL is discovered to be gone, the records from both `reso
 URL.
 
-#### Utilities
+### Utilities
 
 Sometimes there are deploys where specific pages are definitely changing, or when a whole new sitemap is getting registered with new URLs. Instead of waiting for the time-based updates or cron jobs to run, I have these commands
@@ -287,7 +287,7 @@ available for scripting:
 
 * `search:sitemap-generate` - regenerate all registered sitemaps
 
-### Conclusion
+## Conclusion
 
 Starting with structured data and elasticsearch makes building a search engine significantly easier. Data and indexing makes it faster to show smarter [search results][16].
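
As a hedged sketch of the maintenance step just described (finding cached `resource` documents whose `date_expires` has passed), a query via `ruflin/Elastica`, using the library's API roughly as it existed at the time, might look like this; the index name, client setup, and `enqueueRecrawl()` helper are assumptions, not the post's code:

{% highlight php %}
<?php
// Find resource documents whose date_expires is in the past so the
// maintenance job can re-fetch or purge them.
$client = new \Elastica\Client();

$query = new \Elastica\Query(
    new \Elastica\Query\Range('date_expires', array('lte' => date('c')))
);

$resultSet = $client->getIndex('search')->getType('resource')->search($query);

foreach ($resultSet as $result) {
    enqueueRecrawl($result->getId()); // hypothetical helper; the id is the URL
}
{% endhighlight %}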
 Existing standards like [OpenSearch][12] make it easy to extend
diff --git a/blog/_posts/2014-01-13-barcoding-inventory-with-qr-codes.md b/blog/_posts/2014-01-13-barcoding-inventory-with-qr-codes.md
index 74c646c..47c1cc4 100644
--- a/blog/_posts/2014-01-13-barcoding-inventory-with-qr-codes.md
+++ b/blog/_posts/2014-01-13-barcoding-inventory-with-qr-codes.md
@@ -11,7 +11,7 @@ inventory scannable at the [shop][1], and I really wanted to do it in a more mea
 support.
 
-### Barcodes: 1D vs 2D
+## Barcodes: 1D vs 2D
 
 There are two different kinds of barcodes: 1-dimensional and 2-dimensional. The 1D kind allows for a purely linear scan of simple, [UPC][2]-like barcodes. While 1D barcodes are extremely commonplace on many products, I dislike them because
@@ -37,7 +37,7 @@ many previously-used 2D scanners can be found on [eBay][6] for very reasonable p
 some of the used ones would quickly turn unreliable after a period of time.
 
-### Mapping URLs to retail "things"
+## Mapping URLs to retail "things"
 
 While inventory was the primary target of barcoding, I really wanted to barcode most things involved with retail workflows (like order receipts). With that in mind I figured I needed to store three properties:
@@ -69,7 +69,7 @@ Further, the QR code can be used with a redirecting short domain for even simple
 
 > [`http://tle.io/EyV3chYax`](http://www.theloopyewe.com/io/EyV3chYax)
 
-### Adding More Context
+## Adding More Context
 
 One of the reasons I wanted to use QR codes was context. Aside from scans now landing on the shop's website, they can be even more context-aware through security roles. For example, if a customer scans the QR code above, they'll end up
@@ -88,7 +88,7 @@ view showing which orders were cut on that specific bolt and how much yardage th
 
-### Integrated Context
+## Integrated Context
 
 At this point, the barcodes were extremely accessible for one-off scans, but I also wanted to integrate the barcodes into specific points of the system. For the computers we're using USB 2D barcode scanners which are capable of acting
@@ -162,7 +162,7 @@ following in a browser...
 
-### Conclusion
+## Conclusion
 
 I feel like the shop is able to better grow both technically and logistically by having used QR codes as opposed to a classic barcode system. A few techy customers have tried the QR codes, but it's not really something we've been
diff --git a/blog/_posts/2014-02-28-distributed-docker-containers.md b/blog/_posts/2014-02-28-distributed-docker-containers.md
index a51bb20..71d3421 100644
--- a/blog/_posts/2014-02-28-distributed-docker-containers.md
+++ b/blog/_posts/2014-02-28-distributed-docker-containers.md
@@ -123,7 +123,7 @@ The next step of an idea is to prototype it, and that's where I am today. There
 working on, but three general topics...
 
-### Service Discovery
+## Service Discovery
 
 One of the most interesting concepts is service discovery. I wanted containers to be able to connect with each other across multiple hosts and data centers. I've been using DNS for host discovery and, while it works great, it doesn't seem
@@ -170,7 +170,7 @@ The disco protocol has a few more features (like using a single server for more
 filtering services by arbitrary tags like availability zones to improve load balancing), but that's the general idea.
 
-### Configuration Files
+## Configuration Files
 
 I'm using YAML files to describe images and containers. They get compiled to a static version, and then cached based on the image configuration. For example, take a look at this example [scs-wordpress][16] image manifest.
 It describes the
@@ -180,7 +180,7 @@ enumerates all the configuration options which affect how the service will run.
 will be connected to the world.
 
-### Self-Provisioning
+## Self-Provisioning
 
 For each of the four dependency/connection types (volumes, service provider, service dependent, network), I'm trying to make them suitable for local development and AWS EC2 deployment. For example:
diff --git a/blog/index.html b/blog/index.html
index d4772ba..7fabb81 100644
--- a/blog/index.html
+++ b/blog/index.html
@@ -9,7 +9,7 @@ layout: default
-<h2>{{ post.title }}</h2>
+<h1>{{ post.title }}</h1>

    {% if post.description %}
    {{ post.description }}
 {% endif %}
diff --git a/include/content/header-simple.html b/include/content/header-simple.html
index 0e4c4a8..91552fa 100644
--- a/include/content/header-simple.html
+++ b/include/content/header-simple.html
@@ -23,7 +23,7 @@
    DPB
    {doctitle}
    page {page} of {topage}
-<h1><strong>Danny Berger</strong> <small>http://dpb587.me</small></h1>
+<div class="site-title"><strong>Danny Berger</strong> <small>http://dpb587.me</small></div>
diff --git a/include/site/default.css b/include/site/default.css
index 8aa1a53..12c51e5 100644
--- a/include/site/default.css
+++ b/include/site/default.css
@@ -98,26 +98,26 @@ small {
   color: #666666;
 }
 
-h1 {
+div.site-title {
   font-size: 20px;
   font-weight: 400;
   margin: 4px 0 0;
 }
 
-  h1 strong {
+  div.site-title strong {
     font-weight: 700;
   }
 
-  h1 strong a {
+  div.site-title strong a {
     color: inherit;
     text-decoration: none;
   }
 
-  h1 small {
+  div.site-title small {
    color: #999999;
  }
 
-h2 {
+h1 {
   color: #333333;
   font-size: 18px;
   font-weight: 700;
@@ -125,13 +125,13 @@ h2 {
   padding: 1px 0;
 }
 
-h2 small {
+h1 small {
   display: block;
   font-size: 12px;
   font-weight: normal;
 }
 
-h3 {
+h2 {
   border-bottom: #DEDEDE solid 1px;
   color: #393939;
   font-size: 17px;
@@ -139,7 +139,7 @@
   padding: 0 4px 2px;
 }
 
-h4 {
+h3 {
   border-bottom: #DEDEDE dotted 1px;
   font-size: 14px;
   margin: 22px -4px 10px;
diff --git a/include/site/print.css b/include/site/print.css
index 073da8d..22463de 100644
--- a/include/site/print.css
+++ b/include/site/print.css
@@ -61,10 +61,10 @@ p, ul, ol, dl {
   margin: 7px 0;
 }
 
-h1 {
+div.site-title {
   margin-top: 6px;
 }
 
-h3 {
+h2 {
   margin-top: 24px;
 }
diff --git a/index.html b/index.html
index b0cfe8f..b7a656c 100644
--- a/index.html
+++ b/index.html
@@ -18,7 +18,7 @@ tags: danny daniel berger dpb587 dbchip2000
-<h2>{{ post.title }}</h2>
+<h1>{{ post.title }}</h1>

    {% if post.description %}
    {{ post.description }}
 {% endif %}
@@ -29,7 +29,7 @@ tags: danny daniel berger dpb587 dbchip2000
 {% if paginator.next_page %}
-<h2>more posts →</h2>
+<h1>more posts →</h1>

    See the full list of {{ site.posts.size }} posts.
diff --git a/projects.html b/projects.html
index 5d9d63e..221631a 100755
--- a/projects.html
+++ b/projects.html
@@ -5,14 +5,14 @@ description: I like tinkering with ideas and seeing where they end up. These are
 ---
-<h2>Projects</h2>
+<h1>Projects</h1>

    I like tinkering with ideas and seeing where they end up. Some of my work ends up on GitHub and these are a few of my personal favorites…

-<h3>ti-debug</h3>
+<h2>ti-debug</h2>

 A browser client for debugging server-side code (like PHP) without an IDE.
@@ -26,7 +26,7 @@ description: I like tinkering with ideas and seeing where they end up. These are

    ti-debug: For Debugging Server Code in the Browser (2013-05-16)
-<h3>PHP Diff Engine</h3>
+<h2>PHP Diff Engine</h2>

 A language-aware diff engine for changes in PHP classes/interfaces/functions over time.
@@ -40,7 +40,7 @@ description: I like tinkering with ideas and seeing where they end up. These are

    Comparing PHP Application Definitions (2013-03-07)
-<h3>CLI for OpenGrok</h3>
+<h2>CLI for OpenGrok</h2>

 Command line interface (à la grep) for getting results from an OpenGrok server.
@@ -55,7 +55,7 @@ description: I like tinkering with ideas and seeing where they end up. These are
-<h3>Contributions</h3>
+<h2>Contributions</h2>

    I would not and could not be the developer I am without the time others have invested in publishing open source