iToto's Blog

A Montreal-based full-stack web developer who loves learning and trying out new things. This blog is my attempt to document my work, as well as a place to discuss ideas and topics that I find interesting. Feel free to follow me on the linked social networks.


Fork or Die

For those who use git as their SCM, you're probably familiar with the term forking. For those who aren't using git yet, do yourself a favour and learn it; you'll thank me later.

Fork, what?

If you are unfamiliar with the term, all it means is making a "copy" of the repository for your own development. This is very common in the open-source community, as contributors come and go frequently on projects.


Say you're browsing a project on GitHub. You realize that this project is missing a good encryption function. All you need to do is fork the project, develop your kick-ass encryption function and then submit a Pull Request to the project for the owners to review and merge.

Why should I fork?

When people first come to git from other SCM/VCS alternatives, they instinctively think that forking is an unnecessary process. This is common among those coming from a centralized SCM such as SVN or CVS.

There are tons of reasons why you should fork, to name a few:

  • Keep your main repository clean, with only the main branches containing stable code.
  • Avoid endless merge conflicts from contributors stepping on each other's feet.
  • Contributors feel less intimidated working on their own copies.
  • Set up hooks on the main repository for deployment to environments.
  • Added security, as you can set tight permissions on the main repository.

As git is a distributed SCM, the notion of creating a copy of a repository fits well into its design. The idea is simple: as a contributor, I work on my changes locally, then push them up to my own private remote. Once I'm done with my feature and feel it's ready to be added to the main project – the one all other contributors have access to – I create a Pull Request from my fork to the main project.
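That workflow can be sketched with plain git commands. In this sketch, local bare repositories stand in for the GitHub remotes, and the repo paths, file name and commit message are made up for illustration – on GitHub you'd click "Fork" and clone over HTTPS/SSH instead:

```shell
# Simulated fork workflow: local bare repos stand in for the GitHub remotes.
set -e
tmp=$(mktemp -d)
git init --bare --quiet "$tmp/upstream.git"    # the main project
git init --bare --quiet "$tmp/fork.git"        # your fork of it
git clone --quiet "$tmp/fork.git" "$tmp/work" 2>/dev/null
cd "$tmp/work"
git remote add upstream "$tmp/upstream.git"    # keep a link to the main project
echo "my kick-ass encryption function" > crypto.txt
git add crypto.txt
git -c user.name=dev -c user.email=dev@example.com commit --quiet -m "Add encryption function"
git push --quiet origin HEAD                   # push to your own fork...
# ...then open a Pull Request from your fork to the main project on GitHub.
```

Note that nothing ever gets pushed straight to the main repository – it only receives reviewed merges, which is exactly what keeps it clean and stable.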

What if I'm the only contributor?

As long as your project needs to be deployed to an environment (production/staging/etc.), you should be forking. The idea is to keep your main repository clean and stable (for your deployments) and use your fork for development. With this setup, it doesn't matter whether you have 1 or 100 contributors.

Sweet! When can I start forking?

There's no better time than now! Head over to GitHub and fork to your heart's content. Once you're comfortable with it, be sure to start doing it at your workplace. Your colleagues will probably bring you cake!

cake is a lie

Quick update

Hey all. Just wanted to post a quick update on my progress.

After my last post, Amazon announced that it now supports Node.js with an official AWS SDK. I was relieved to see that, as I wasn't too thrilled about using a library that may or may not be maintained in the short term.

The GitHub project for the SDK is pretty good. I've been able to painlessly upload test files to my S3 storage from my application and have already begun creating my module. To all those who were looking to integrate AWS with their Node.js application: it's now an npm command away.

Aside from API development, I have decided to begin working on a front-end website that will give a preview of the app, so that I can start announcing it to the world and see if there is any demand for it. As of now, I'm going to be using Twitter's Bootstrap along with LaunchRock for the initial sign-ups from people who want to get into the beta when it becomes available in 2013. I will post the link to the website once it's ready, so keep an eye out.

Starting with Amazon AWS

So this weekend I started working with Amazon's AWS - more specifically, S3. I will be using S3 with my app as a file storage solution.

My experience with AWS thus far has been stellar. Their automated phone verification system blew my mind at how quick and seamless it was. It just feels like a high-quality service that's done right. As long as the experience continues like this, Amazon can have my money.

That being said, this experience reinforces how important user experience is for a service or application. The better the user feels using your product, the easier it is for them to open their wallets to you. So my main priority when developing the client application will be simplicity and ease of use. I believe that if an app is easy to use and clear to the user from the get-go, it will lead to an overall pleasant user experience.

My next step is to tie the S3 storage into my Node.js API. Design-wise, there are two ways for me to implement this.

1- Client --> S3:

This method requires the client application to send the file directly to the Amazon S3 server.

Pros:

  1. Less stress on the API (bandwidth, processing, request handling, etc.)

Cons:

  1. Any processing on the file must be done on the client side
  2. Meta-data still needs to be sent to the API for database entries
  3. Creation of unique identifiers is done by the API, so this will result in more requests per file upload.

2- Client --> API --> S3

This method requires the client application to send the file and all data to the API, which then does any processing and uploads the file to the Amazon S3 server.

Pros:

  1. No processing needed on the client side
  2. API can do post-processing on all files before they are uploaded to S3
  3. All meta-data can be stored simultaneously with the file upload, to ensure synchronization.

Cons:

  1. More strenuous on the API (bandwidth, processing, request handling, etc.)


For my application, I believe the second solution to be the best choice. I'm going to go with that for now and see how much stress the API will undergo with the file uploads. If it really is strenuous, I may be forced to switch to the first solution. Only time will tell.

Also, for anyone else looking to use Amazon S3 with Node.js, I have found these repositories that you might find helpful: