Gabe Westmaas

Engineering Your Organization

As businesses grow and their products evolve, the teams building those products must evolve as well. As a leader, it is important to remember that how you design your organization is just as important as how you design your product and your architecture. Your team is how you win in any space, and organizing to help them execute is absolutely vital.

At Tilt, we recently made an organizational change designed to increase ownership, make communication more efficient, and improve our ability to prioritize the product changes that are best for our customers. Organizational changes can be hard to implement, but the alternative of leaving inefficiencies in place can be far worse. Below I’ll explain how our teams have evolved at Tilt, and how the recent changes have impacted our work so far.

Simple Beginnings

From the very beginning, we built Tilt as a platform, not just a simple web application. From day 1, we had an API application, and a web app that talked to it. We knew we’d want to be able to quickly build more apps (and experiments) on top of our API and leverage it as a solid core for our business logic. Given this, the early team structure was obvious: first it was 1 engineer doing everything. Then it was 1 engineer doing API work and 1 engineer doing web/front-end work.

The teams continued to ‘scale up’ with this structure, and it actually worked pretty well—to a point. Eventually, we ended up with 5 or 6 ‘API’ engineers, and 5 or 6 ‘front-end’ engineers, and it started to become clear that maybe we could do better.

But…the Product!

From a product perspective, whenever we had a new feature to build, such as search, the team structure began to work against the feature’s goals. As a company, we wanted search, but we just had these front-end and backend teams, each with its own set of priorities driven by the needs of several other teams (especially in the case of the backend team). So, depending on each team’s backlog and current priorities, we’d have to first figure out whether both teams could work on search at the same time, or whether we’d need the backend team to design and implement first so that by the time the front-end team was ready, the backend was built.

There are obviously a lot of moving parts and plenty of complexity in this picture, and lots of places for things to go wrong or become delayed. If the backend team misses the deadline, the front-end team can’t fully dive in just yet. If the front-end team finishes early, they may have to pull something else from their backlog while waiting for the API to be ready.

We have a great team of engineers here at Tilt, so we were still able to deliver a great product despite some of these inefficiencies. For a long time we simply relied upon strong communication and a lot of heroic effort to deliver on as much as possible, even if it meant reworking things as the various teams caught up to each other.

Most recently with search, we experimented with mini-teams made up of engineers from various teams that would come together to deliver a feature – they would have separate standups, and be in constant contact until the feature was delivered. But it still had its issues.

Product Teams

We started thinking that maybe these ‘mini’ project teams should actually be what our ‘normal’ teams looked like. If we had more product-focused teams that were ‘full-stack’ in expertise, those teams could work together to more quickly deliver product features and improvements that fit into their product specialty. Looking around the industry, we saw that other smart folks, like Spotify, had come to similar conclusions. It took us a little while to come up with the right product ‘buckets’ to structure our teams around, but once we did, we made the change and noticed results right away.

The teams we ended up with are as follows:

  • Tilters: Focuses on aspects of the product that revolve around individual users, such as friend invites, Facebook login, profile pages, rewards programs, etc.
  • Tilts: Focuses on all aspects of a tilt, such as the tilt page, organizer tools, tilt creation, tilt invites/tagging, etc.
  • Payments: Focuses on our payments + payouts infrastructure, contribution flows, refunds, chargebacks, and generally all things related to money.
  • Notifications: Focuses on managing communications and notifications with our users. This includes things like push notifications, emails, the Notification Center, etc.

Takeaways

Ownership. It was incredibly clear to everyone how much ownership was created and improved by the product-focused teams. Almost instantly engineers started changing their thinking about how they could improve the new product verticals that they owned and had increased responsibility for. While we had decent ownership before, this was of a new breed. Engineers are now more engaged in actively improving the user experience and the codebase, and they can do so with more focus than before.

Communication has improved greatly. It’s become much easier for support, product, and design team members to know which engineers to go to when they have issues or questions about different aspects of the product. There is much less ‘hunting’ to find the right person for a specific feature, and far fewer questions bouncing around from person to person. It remains to be seen whether we’ll face new and different communication issues that we didn’t have before, but none have become readily apparent thus far.

Prioritization is refreshingly easier. With the old teams, prioritization was based more on what the “backend” or “frontend” needed to do to their respective codebases. Often each team would have lots of separate product features on the roadmap, and prioritizing was tricky and involved the cross-team coordination mentioned above. With the new product-focused teams, it has become much clearer how each team should prioritize work within its vertical. There’s no contention between user features and tilt features, for example, as they can be prioritized separately by their respective teams.

Problem Solved… Right?

The goal of this reorganization was the goal of all reorganizations – to trade our existing inefficiencies for a smaller, more acceptable set. It was very important that we acknowledge which new inefficiencies we were taking on in order to remove the existing ones. Specifically, some widespread changes become more difficult to implement, because engineers must change code that is “owned” by another team. There is also a risk of technology and style drift between teams that we aren’t ready for yet. We decided that we would rather rely on our engineers and team leads to watch for this than lay down a bunch of policies to try to prevent it.

Conclusion

At Tilt, we have a culture of experimentation. We really do believe that failures are as important as successes, and, more importantly, a fear of failure will limit one’s success. That’s why we were willing to make these organizational changes, but we’re also constantly evaluating what’s working and what’s not. We’ll be watching to make sure we haven’t traded one set of communication hurdles for another. If we have, we’ll have learned something valuable in the process.

While re-orgs aren’t easy, they really don’t have to be that bad. If anything, after seeing the positive results, we feel like we took too long to finally pull the trigger. You can judge whether your own organization needs a change by answering these questions:

  • Do product managers need to talk to several teams to release almost every feature for which they are responsible?
  • Do you find that teams are often waiting for another team to finish a requirement before moving forward, and that this requirement is delayed because of other priorities?
  • Do you see signs of inefficient communication between teams? This can manifest as missed requirements, a misunderstanding of the goals of the feature, etc.
  • Do you see signs that your teams don’t exhibit strong ownership over their code because they know they will soon move on to other unrelated projects/features?

If you think a change may be needed, be willing to move fast, experiment, admit when you fail, and keep getting better. We’ll check back in over the coming months and see how we’re doing.

Want to help us build the world’s best crowdfunding platform? We’re hiring in San Francisco, Austin, and Toronto! Check out our openings.

Al Newkirk

10 Reasons to Never Use Perl, Ever

The title of this article is somewhat troll-bait, albeit a sentiment posited by many Perl critics. Perl is an awesome and relevant programming language with a huge repository of modules, expressive syntax, quirky semantics (which I love), and many other benefits that derive from an uncommon approach to software development. Having said that, the following are ten reasons to love or hate the Perl programming language.

1. Expressiveness. It’s Multiple Choice All The Way Down.

The Perl programming language and its community are centered around the TIMTOWTDI philosophy (there is more than one way to do it), and it’s multiple-choice all the way down, i.e. Perl is a multi-paradigm and context-sensitive language, with support for compiler directives, that can be configured to require implicit or explicit syntax. A simple Perl 5 script feels like a superset of shell scripting. Enable the strict and warnings pragmas and Perl starts to feel like a dynamic high-level programming language. Leverage any of the many object systems available, e.g. mop, Moo, Moose, et al., and it starts feeling like you’re implementing a structured and tightly coupled architecture. What’s nice is that none of this is forced on you; you opt in to additional features where desired. The ability to scale/morph Perl into stricter, more formal, and more powerful variants as needed is one of the main reasons I enjoy developing with it.
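Here’s a rough sketch of that opt-in spectrum in a single file – Moo is one of the object systems named above, and the Counter class is just an invented example:

    # Bare-bones Perl: no pragmas required, shell-script feel.
    print "hello, world\n";

    # Opt in to stricter semantics where you want them.
    use strict;
    use warnings;

    my $greeting = 'hello again';    # 'my' is now required by strict
    print "$greeting\n";

    # Opt in to an object system from the CPAN for more structure.
    package Counter;
    use Moo;

    has count => (is => 'rw', default => 0);

    sub increment {
        my ($self) = @_;
        $self->count($self->count + 1);
    }

    package main;

    my $counter = Counter->new;
    $counter->increment;
    print $counter->count, "\n";     # prints 1

Each layer here is something you chose to turn on, not something the language forced on you.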

2. The Comprehensive Perl Archive Network. The CPAN.

The CPAN is a distributed code repository of Perl modules and a historical representation of the Perl community, its standards, ideas, and experience. It’s not uncommon to hear others remark that the CPAN is the sole reason to consider using Perl; while I disagree, I will concede that it’s a powerful incentive. The Perl community has always endeavored to maintain a minimalist core, deferring to the CPAN to extend application development. The CPAN is a grand social demonstration of open-source collaboration and a one-stop shop for searching modules; reading documentation and source code; reviewing module ratings and smoke-test results (provided by a collective of testers and a network of testing servers known as CPAN Testers); as well as downloading and experimenting with distributions. The collection of modules is vast and updated daily. Looking closely at the Perl community, you’ll see a mature operation at work with active and experienced collaborators.

3. Unix Integration and Text Processing.

Perl is a programming language inspired by various others, e.g. C, Awk, Sed, and a handful of Unix command-line tools, and as such has support for many Unix OS operations. Many of Perl’s core functions are representations of Unix concepts and tools. Perl also has very powerful text-processing support; strings and the regex (regular expression) engine are first-class citizens, i.e. part of the language syntax, not provided by a library, and because of this most Perl programmers are better than average at string manipulation (opinion, not fact). Perl is a go-to tool for many system administrators because of the aforementioned benefits and the fact that it ships with almost all Linux distributions. Perl is still very much the glue holding many systems together at both high and low levels: OS utilities, cron scripts, CGI scripts, data-mining tools, console applications, etc.
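As a small illustration, here’s the sort of ad-hoc log munging Perl is routinely reached for – the log format is hypothetical, and input comes from STDIN or files named on the command line:

    #!/usr/bin/env perl
    use strict;
    use warnings;

    # Tally requests per client IP from an access-log-style input.
    my %hits;
    while (my $line = <>) {
        $hits{$1}++ if $line =~ /^(\d{1,3}(?:\.\d{1,3}){3})/;
    }

    # Print IPs in descending order of request count.
    print "$_ => $hits{$_}\n"
        for sort { $hits{$b} <=> $hits{$a} } keys %hits;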

4. Performance. Damn Fast for an Interpreted Language.

In many benchmarks found around the Internet, Perl 5 demonstrates amazing performance and memory utilization. It has proven itself to be among the most optimized, mature, and stable languages of its peer group. While some people claim it to be the most advanced and popular programming language in the world, we can all concede that it’s clearly a very good choice for getting stuff done quickly. Unfortunately, Perl is often misunderstood because of widespread slander misrepresenting its capabilities. The story of Perl is much like a Hollywood drama about a once unattractive youth (Perl) who has grown to possess considerable style, beauty, and sophistication. Additionally, there are a few initiatives underway to make Perl even faster (e.g. RPerl, MCE, Coro, or any of the many Perl JVM projects).

5. Object-Systems. Pick Your Poison.

Perl is not an object-oriented language, but it does support object-oriented programming by blessing references into classes. This was not part of Perl’s original design; it was bolted on circa ’95 in a manner that preserved backwards compatibility (which has since become a long-held tradition). I don’t mind this: as mentioned previously, many of Perl’s features are added on or enabled as needed, and have variants. The Perl fellowship of object systems is no exception (except that they’re exceptional). The following are just a few of the object systems available on the CPAN, any of which can be used to structure Perl applications depending on your needs:

  • Mo (minimal)
  • Moo (minimal, does roles/traits)
  • Mouse (complete, does roles/traits, does type-checking)
  • Moose (advanced, does roles/traits, does type-checking-and-declaration, does meta-programming)
  • Moops (experimental, based on Perl 6, does roles/traits, does type-checking-and-declaration, does meta-programming)
  • Class::Accessor (old-guard, quick-and-dirty construction and accessors)
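For contrast with those systems, here’s what the raw, bless-based style looks like – a minimal sketch using an invented Point class:

    package Point;

    # Classic Perl 5 OO: bless a plain hash reference into a class.
    sub new {
        my ($class, %args) = @_;
        my $self = { x => $args{x} || 0, y => $args{y} || 0 };
        return bless $self, $class;
    }

    sub to_string {
        my ($self) = @_;
        return "($self->{x}, $self->{y})";
    }

    package main;

    my $p = Point->new(x => 3, y => 4);
    print $p->to_string, "\n";    # prints (3, 4)

Everything an object system like Moo or Moose gives you – constructors, accessors, defaults – is sugar layered over this mechanism.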

6. Sigils. Annotations are an Acquired Taste.

Either you love ’em or you hate ’em. Sigils are a type of annotation embedded in a variable name, much like Hungarian notation (an old naming convention), except that sigils are generally enforced. In Perl, a sigil is a variable-name prefix using the “*”, “&”, “$”, “@”, or “%” symbol. Many programming languages use sigils to denote type or scope (e.g. Perl, Ruby, PHP, etc.) and many do not (e.g. Python, Java, etc.). Whether sigils are worthwhile is mostly a matter of taste, but I find them incredibly useful and miss them when working in languages that don’t have or require them. Additionally, sigils make the use of parentheses optional in many cases. Imagine a script in your sigil-less language of choice: one line requires a user-defined module, and the next prints a sequence of characters without parentheses. How would a developer who did not write that code determine whether the sequence of characters is a variable, a function, a literal, or a constant? It’s definitely not obvious. Moreover, though arguably a lame thing to want to do, sigils allow you to create two variables with the same name in the same scope that serve different purposes, so you could say that sigils allow homonyms in code.
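A quick sketch of that homonym property – three variables sharing one name, disambiguated entirely by sigil:

    use strict;
    use warnings;

    my $count = 42;                # a scalar
    my @count = (1, 2, 3);         # an array
    my %count = (apples => 2);     # a hash

    print "$count\n";              # 42
    print scalar(@count), "\n";    # 3
    print "$count[0]\n";           # 1 -- an element of @count
    print "$count{apples}\n";      # 2 -- a value from %count

Note that element access uses the $ sigil too: the sigil describes what you’re getting back (a single scalar), while the subscript says which container it lives in.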

7. Parallel Processing. The Future of Computing.

Perl makes easy things easy and hard things possible. Parallel processing is hard even if the language supports it natively. Parallel processing is necessary where large problems can be broken into smaller operations and executed concurrently; I believe parallel-processing implementation and scaling patterns will be common knowledge for the developers of the future. Currently, Perl has very well-rounded support for parallel processing via threads (e.g. threads, Coro), forking (e.g. Proclet, Child, Parallel::Prefork), and event loops (e.g. POE, AnyEvent, Reflex).
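As a minimal sketch using the core threads module (one of the options listed above) – real workloads would more likely reach for the higher-level CPAN modules:

    use strict;
    use warnings;
    use threads;

    # Fan a trivial computation out across four threads,
    # then collect the results with join.
    my @workers = map {
        my $n = $_;
        threads->create(sub { return $n * $n });
    } 1 .. 4;

    print $_->join, "\n" for @workers;    # 1, 4, 9, 16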

8. Regular Expressions. The Little Engine that Could.

The regular-expression engine has always been part of the Perl language and syntax; it has arguably been the source of Perl’s popularity. Like Perl itself, regular expressions draw the same kind of adoration or disdain because of their terse expressiveness, power, and flexibility. Many other programming languages provide regular-expression support through a standard library heavily influenced by Perl’s implementation. The really cool thing is that Perl’s regular expressions are built into the language’s syntax, i.e. the regex bazooka is always armed and loaded. But why stop there? The CPAN has scores of modules that make using regular expressions even more awesome (e.g. Regexp::Assemble, Regexp::Common, Regexp::Debugger, etc.).
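Because the engine is language syntax, matching and capturing need no library calls at all. A small sketch, using an invented log line:

    use strict;
    use warnings;

    my $line = 'ERROR 2014-03-01 payment declined';

    # =~ binds a regex directly to a string; named captures land in %+.
    if ($line =~ /^(?<level>\w+)\s+(?<date>\S+)\s+(?<msg>.+)$/) {
        print "$+{level} on $+{date}: $+{msg}\n";
    }

    # Substitution is equally built in.
    (my $masked = $line) =~ s/\d{4}-\d{2}-\d{2}/XXXX-XX-XX/;
    print "$masked\n";    # ERROR XXXX-XX-XX payment declined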

9. Portability. Native on Most OS’.

Did you know that Perl is everywhere? It’s in almost every modern Linux distribution, which means it’s being used by major financial institutions and other organizations that require reliable technologies. It’s being used by NASA, in planes, rocket ships, defense systems, consoles, etc. Perl is probably in your OS right now. Perl just might be in your house chatting up your girlfriend as you’re reading this. Perl is commonly referred to as a glue or duct-tape language because it is a go-to resource for getting $#!+ done quickly (without all the ceremony other languages require). There are ports of Perl for all major operating systems, including Mac OS and Windows, and, of course, there are likely to be multiple flavors (e.g. on Windows you have Strawberry Perl and ActiveState Perl).

10. Meta-programming. Hacking Perl with Perl at Runtime.

Meta-programming is the writing, re-writing, and introspection of programs at runtime. In typical Perl fashion, there is more than one implementation of this functionality (e.g. Moose, Class::MOP, mop). As of this writing, Perl has a very good MOP (meta-object protocol); it is also worth saying that there is an initiative underway to include a more robust default object system and meta-object protocol in the Perl 5 core, written in Perl, which means that Perl will have reflection, and many of us are very optimistic. If you’re not bursting yet, consider this: Perl developers will soon be able to legitimately extend and alter the functionality of objects at runtime without resorting to hacking. Sounds scary, yes, but this is yet another reason Perl is an amazing tool to have in your toolbox. Additionally, for more on hacking Perl using Perl, see source-filtering resources (e.g. Filter::Simple, Devel::Declare, and more) for ways to alter/extend the language’s syntax.
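Even without a formal MOP, the symbol table exposes the do-it-yourself end of this. A contrived sketch – the Greeter class is invented purely for illustration:

    use strict;
    use warnings;

    package Greeter;
    sub new { return bless {}, shift }

    package main;

    # Install a method into an existing class at runtime by writing
    # a code ref into the package's symbol table.
    {
        no strict 'refs';
        *{'Greeter::hello'} = sub { return 'hello from a runtime method' };
    }

    my $g = Greeter->new;
    print $g->hello, "\n";

Modules like Moose and Class::MOP wrap this kind of machinery in a safer, introspectable API.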

Final Ramblings.

Programming languages are culture; they are nationalities; they are religions; they are expressions of ideas and philosophies shared by groups of like-minded individuals… and so, in conclusion… all programming languages suck. Actually, all spoken languages suck as well, in that both programming and spoken languages are exclusive and lack precision. Math is the only true universal language. Math is written in Perl; therefore, Perl is the only true universal language. Learn Perl and become a persnickety freedom-loving zealot like me.

footnote: You don’t have to profess hate for one thing in order to express love for another. -Naveed Massjouni (friend and co-worker)

Ali Anari

Lessons Learned Publishing My First CPAN Module

So you want to join the ranks of thousands of other #perl hackers and release something to the community? I just developed my first standalone Perl module at Crowdtilt called WebService::NationBuilder, and the process was actually a lot more straightforward than I thought it would be. Keep reading to find out how simple it really is to write your own module and become a published CPAN (Comprehensive Perl Archive Network) author!

PAUSE then play

Head on over to PAUSE, the Perl Authors Upload SErver, and register your very own account using this form. Your account will get its own directory (mine turned out to be authors/id/A/AA/AANARI) where your uploaded distributions will be indexed and rapidly propagated across CPAN and its mirrors, thanks to some of the fast rsync tooling the PAUSE workers have developed. Don’t let the old-school feel of the PAUSE site fool you: it’s a serious platform that was created by Andreas Koenig in 1995 and has been closely maintained since. For the ever curious, you can dive into the source GitHub repo here.

My::Module v0.0001

You may want to follow some naming conventions to make sure your module fits in with everything else already on CPAN. Mine was simple because WebService is a standard namespace for Perl HTTP client libraries, but you can always ask the good folks at modules@perl.org if you’re ever unsure. Versioning can be tricky because Perl honors backwards compatibility, so I recommend using the first digit for the major version, two digits for the minor version, and two final digits for the bugfix or incrementing version number: 1.2501 means v1.25.1. Most importantly, don’t switch version-numbering schemes for a published module, or your users won’t know in advance the right format to request.
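In module form, that convention looks like this – WebService::Example is a made-up name, for illustration only:

    package WebService::Example;    # hypothetical module

    use strict;
    use warnings;

    # x.yyzz scheme: 1.2501 reads as major 1, minor 25, bugfix 01.
    our $VERSION = '1.2501';

    1;    # modules must end by returning a true value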

dzil to the rescue

Dist::Zilla automates away common distribution tasks, and we love using it here at Crowdtilt. The dzil website is chock full of useful tutorials, plugins, and other goodies that helped me get up and running in no time. Run dzil setup and enter your PAUSE credentials, and you’ll never have to manually upload your releases again! Now invoke dzil new and do what you do best – write some killer Perl code.

PODs you can trust

Perl’s documentation system is POD (Plain Old Documentation), and it’s easy to pick up if you’ve ever used another human-readable format like Markdown or YAML. You can even generate your repo’s README.md from POD using a dzil plugin, though for WebService::NationBuilder I created separate documentation files for now. You should definitely spend the extra few minutes writing POD so that your beautiful docs show up on CPAN, and in the terminal when one of your users runs perldoc against your module.
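Here’s a minimal sketch of what POD looks like inside a module – again using an invented module name:

    package WebService::Example;

    =head1 NAME

    WebService::Example - a hypothetical client, purely for illustration

    =head1 SYNOPSIS

        use WebService::Example;
        my $client = WebService::Example->new;

    =head1 DESCRIPTION

    Sections like these are what MetaCPAN renders as your module's
    documentation page, and what perldoc displays in the terminal.

    =cut

    1;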

D (deploy) day

Run dzil release and your shiny new module is on its way for the world to see! You can view your distribution’s page on MetaCPAN within an hour (usually 30 minutes), as this site updates very quickly from PAUSE. Now you can install your distribution from CPAN using your favorite CPAN client like cpanm, although you might need to wait until your distribution reaches your configured CPAN mirror.

Perl IRL?

Many of the talks at Perl conferences involve authors talking about their own work, because, after all, who is better qualified to help others use your module than you? For the non-public-speaking-inclined, there’s always the small, casual setting of your local Perl user group (you can find one near you on the Perl Mongers website).

Matt Williams

Your First Hire Should Be a Sysadmin

From the very beginning of Crowdtilt, we envisioned an architecture where the only limitation to how fast we deployed code was the speed at which we could actually develop. Our CTO saw the value in hiring a Sysadmin to focus on this objective from day one. That’s where I come in and here’s how I’ve helped us achieve the goal…

Configuration Management

The first order of business was managing our systems at a higher level. It took very little deliberation to decide on Chef as our configuration management (CM) tool. Our prior experience, the massive collection of available application “cookbooks”, and a very active community were enough to convince us that it was the right tool.

The nice thing about integrating with a CM framework from the beginning is being able to tie in each piece of a web stack as it becomes necessary, instead of facing the monolithic task of converting an entire architecture. The very start of our app involved a Postgres database, the web app, and Nginx out front – fairly typical. We were able to use community Chef code for Postgres and Nginx, and all of our site-specific modifications were implemented in our Crowdtilt “cookbook”. We rolled our own Chef resources for app deployment and voilà – running chef-client now deploys code from our git branches. Combined with great plugins like knife-ec2, Chef is now building our cloud servers and then configuring our software on top of them!

Consistency From Development to Production

To ensure that the developers are developing in the same environment as our Linux servers in AWS, we introduced Vagrant very early on. For anyone unfamiliar with Vagrant, it’s a wrapper around VirtualBox (and other providers) that allows integration with CM frameworks like Puppet and Chef. After provisioning a Vagrant VM, a developer can visit an instance of the site on their local workstation and see their code as it will appear in staging and production.

With Chef managing all of our servers and deploying code from our git application branches, we rely on merges to dev to deploy code to staging; likewise, merging to master deploys to production. Combined with code reviews and Jenkins test-suite automation, this really allows the developers to focus on features without getting bogged down in implementation details or manual processes.

We use a very similar system for the Chef code itself. We test Chef code against the same Vagrant instance, but with a local bare-bones Chef server called chef-zero. This tool has the same core functionality as the real Chef server with minimal setup cost. We write our infrastructure code locally against this chef-zero server and build our local Vagrant boxes with it. Once we’re satisfied with our Chef code, we merge to dev, and a Jenkins job syncs the dev branch with the staging Chef server. Rinse and repeat for production/master.

The Payoff

The end result of this system really speaks for itself as we have been able to move incredibly fast! A great example of this was when we decided to move from Ubuntu 10.04 to 12.04. I was able to run a simple set of Chef commands that provisioned the new servers, removed the old servers, and automatically populated the load-balancer with the new server data. All of a sudden we’d replaced 90% of our stack before lunch! This kind of flexibility and speed has allowed us to spend less time doing busy-work and more time innovating!