
MongoDB: What Is It?

August 27th, 2010 by Narshlob

Put simply, MongoDB is a document store database. Things are written to the database in BSON (Binary JSON) and displayed to the user in JSON. The power of MongoDB is that it can handle tons of data. We ran a benchmark between MySQL and MongoDB. The dataset was huge: 50 million records. We did a search by email address to find everyone that had an email domain of yahoo.com.

The query in MySQL looked like this,
SELECT * FROM user_table WHERE email_address LIKE '%yahoo.com';

The results looked like this:

+--------------+
|     count(*) |
+--------------+
|         8354 |
+--------------+
1 row in set (11 min 41.79 sec)

The same query run in Mongo looked like this:

> db.user_collection.find({email_address : /neo\.rr\.com$/}).explain();
      ....
          "n" : 123904,
          "millis" : 126008,
      ....

As you can see, the query in MySQL took just under 12 minutes while the one in Mongo took barely over 2 minutes. That’s a ton of time saved.
Note that the MySQL table is MyISAM and indexed on email_address. The MongoDB collection is also indexed on email_address.
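For reference, that index on the Mongo side can be created from the shell with something like:

db.user_collection.ensureIndex({email_address : 1})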

———————————————————————————————————————————–

MongoDB is written in C++. From the MongoDB website (http://www.mongodb.org/) we get this synopsis:

“MongoDB bridges the gap between key-value stores (which are fast and highly scalable) and traditional RDBMS systems (which provide rich queries and deep functionality).
MongoDB (from “humongous”) is a scalable, high-performance, open source, document-oriented database.”

MongoDB is a document store database featuring full index support, replication and high availability, auto sharding, querying, fast in-place updates, map/reduce functionality, GridFS, and commercial support.

When searching a relational database where a record holds a foreign key into a separate table, two queries (or a join) must be performed to pull all the data pertaining to that record. In MongoDB there are no server-side joins. You will generally want one collection for each top-level object, so instead of storing related data in two separate tables, you just embed it in the document itself.

Let’s see this with an example:
Say you have a Peeps table and a Favs table. Favs is a collection of different things such as “Pepsi”, “Mt. Dew”, “Dr Pepper” and Peeps is a collection of different people we’ve interviewed.
In MySQL, Peeps might be built like so,

Peeps
  :id
  :name,
  :email_address,
  :phone
  ........
  :favs_id

And Favs would look like this

Favs
  :id
  :what

In MongoDB, we wouldn’t worry about trying to link two collections together using ids. We would simply embed the favs into the Peeps collection. It would look something like this:

{
  peeps: [
    {name: "yourmom", email_address: "blah@arhar.com", favs: [
      {what: "Pepsi"}]
    }
  ]
}

Thus when we query looking for “yourmom” we can easily find yourmom’s favs as well, without an additional query. You might be saying to yourself, “But that adds a lot of unnecessary data! Using a foreign key takes up a lot less space! Thou Fool!!”. I’d say, “space is cheap”. 100 million records might take up roughly 100 gigs of data in MongoDB, which is nothing. How many people out there really have that much data anyway?
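For example, assuming each person is stored as its own document in a peeps collection (as sketched above), a single query pulls back the person along with their embedded favs, and you can even reach into the embedded array directly:

db.peeps.find({name : "yourmom"})
db.peeps.find({"favs.what" : "Pepsi"})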

We’re contemplating using MongoDB for our server logs. The advantage would be that we can query the logs much more easily than with grep or similar tools. All we’d have to do is

db.logs.find({error: "RuntimeError"}).limit(20)

to find the first twenty instances of RuntimeError in our logs.
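Writing the log entries would be just as simple. A hypothetical log document (the field names here are made up) could be saved like this:

db.logs.save({error : "RuntimeError", message : "something broke", time : new Date()})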

As you can see, there’s a lot of benefit to using MongoDB, and a lot of different ways it can be used. My advice is to check it out for yourself (http://www.mongodb.org/). Set up a server and start messing around with it. It even supports JavaScript in the client console. Simply.. Amazing..


Making SUPERFAST THINGS in Ruby (Using C Extensions)

July 29th, 2010 by fugufish

I will address one of the primary uses for a C extension in Ruby: speed. Due to its very nature, Ruby is slow (as compared to compiled languages like C). It gets the job done, but sometimes it takes its sweet time doing it. Sometimes it is necessary to speed things up a bit, and this is where C extensions come in. There are several methods of implementing extensions, from the generic C extension to ruby-inline. In this particular article I will focus on the generic C extension.

In this example, I am going to use a fairly inefficient piece of Ruby code I created a while ago for Project Euler (Problem 10), which finds the sum of all primes under 2,000,000:

class Integer
  def prime?
    return true if self == 2
    return false if (self & 1) == 0
    square = Math.sqrt(self).round + 1
    i = 3
    while i < square
      return false if self % i == 0
      i += 2
    end
    true
  end
end

# sum of all primes under 2,000,000 (Project Euler problem 10)
puts((2...2_000_000).select { |n| n.prime? }.inject(:+))

At the time that I wrote this, I was relatively unaware of more efficient ways of finding prime numbers (such as an Euler sieve), but the code still ran under the allotted two-minute window (52 seconds), so I went with it. Now to speed it up. To write a C extension you need, at a bare minimum, two things:

  1. an extconf.rb file - this file is used by ruby to generate the Makefile that is used to compile the extension
  2. the source file for the extension (in this case primed.c)

Here is a look at these two files for my new version of problem 10:
primed.c

#include "ruby.h"
#include 
#include 
 
VALUE Primed;

VALUE method_prime(VALUE obj, VALUE args)
{
	register uint64_t n;
	n = NUM2INT(obj);
	if (n == 2)
		return Qtrue;
	if ((n & 1) == 0)
		return Qfalse;

	register uint64_t sqrt_n = ((uint64_t)sqrt(n)) + 1;
	register uint64_t i=3;
	for (i; i

extconf.rb

# Loads mkmf which is used to make makefiles for Ruby extensions
require 'mkmf'

# Give it a name
extension_name = 'primed'

# The destination
dir_config(extension_name)

# Do the work
create_makefile(extension_name)

First let me explain primed.c. The objective of this extension is to determine whether or not a number is prime, so that an integer can call x.prime? and get back true or false. It is essentially identical to the method used in the pure Ruby script above. One of the first things you may notice is this line:

VALUE Primed

VALUE is a data type defined by Ruby that represents a Ruby object in memory. It is basically a handle to the struct that contains the data related to the object. In this case, the object will represent the "Primed" module in Ruby, so it will contain data about the instance methods, variables, etc. for that module. All Ruby objects are represented in C by VALUE, regardless of their type within the Ruby VM; using anything else will likely result in a segfault.

Next we define the actual method to calculate whether the value is prime. Note that because we need to return a Ruby object, we set the return type as VALUE as well. Qtrue and Qfalse are directly representative of true and false in Ruby, and also behave correctly within C (Qtrue will evaluate as true, Qfalse will evaluate as false).

Finally we see the Init_primed function. When the extension is loaded (required), Ruby calls Init_name. It is here we actually create the Primed module and bind the method_prime function to the Ruby method prime?. Both functions used are pretty self-explanatory, except for the last argument to rb_define_method, which is essentially the arity, or number of arguments to expect in the Ruby method. In this case, -2 makes Ruby send self as the first argument to the method_prime function, and an array of any other arguments as the second.

Now we have all of our code. The last thing to put in place is extconf.rb:

# Loads mkmf which is used to make makefiles for Ruby extensions
require 'mkmf'

# Give it a name
extension_name = 'primed'

# The destination
dir_config(extension_name)

# Do the work
create_makefile(extension_name)

Pretty simple, right? Now when you run ruby extconf.rb it will generate a Makefile that you can use to build the extension. And the final result? Using the C extension, the code runs in just under 3 seconds. Still not really efficient, but it demonstrates the point: when Ruby's speed is the bottleneck, using C is a viable and easy option.
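For reference, building and trying it out looks something like this (the last line assumes you wire prime? onto Integer by mixing in the Primed module, which is one way to do it):

ruby extconf.rb    # generates the Makefile
make               # compiles primed.c into a loadable extension
# assumes the module gets mixed into Integer from Ruby
ruby -I. -rprimed -e 'Integer.send(:include, Primed); puts 13.prime?'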

Lambdas

July 27th, 2010 by Narshlob

What are these Lambdas you speak of?

This article is focused on Lambdas as used in the Ruby language.
What are Lambdas? They’re given several names in other languages.

  • Lambda
  • Anonymous Function
  • Closure

In Ruby, we just call it a Lambda function. It’s defined as such:

x = lambda { return "ar har har har" }

Calling x.call will return “ar har har har”. If that were put inside a method, like so,

def foo
  x = lambda { return "ar har har har" }
  x.call
  return "yo ho ho ho"
end

This will actually return “yo ho ho ho”. If you were to puts x.call, however, you would see “ar har har har”. Very interesting. So a return inside a lambda returns from the lambda itself, just as it would from a function, hence the name anonymous function.

Lambdas have an interesting quirk in that if you declare one as such:

x = lambda { |x, y| puts x + y }

And then call it like this:

x.call(1, 2, 3)

It will throw an ArgumentError, because lambdas are strict about the number of arguments they receive.
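Here's a quick sketch of that strictness, contrasted with a plain Proc, which quietly ignores extra arguments:

strict = lambda { |x, y| x + y }
loose  = Proc.new { |x, y| x.to_i + y.to_i }

loose.call(1, 2, 3)   # => 3, the extra argument is silently dropped
strict.call(1, 2, 3)  # raises ArgumentError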

Database synchronization woes

November 3rd, 2009 by Narshlob

Database resynchronization depends on what went wrong, but the steps below will solve most issues.
Run these commands on the slave database

  1. STOP SLAVE; # stop the Slave I/O threads
  2. RESET SLAVE; # forget about all the relay log files

Then go to the master database and run these

  1. RESET MASTER; # reset the bin log counter and wipe out bin log files
  2. FLUSH TABLES WITH READ LOCK; # flush buffers and LOCK tables
  3. show master status\G

Note what the show master status command returns. You’ll need to know the file name and the position.
You can do one of two things here, make a dump of the entire master database (in which case I suggest you follow this)
or you can just update the tables.
Usually we just need to update the tables, so release the lock on the master database tables (UNLOCK TABLES;) and then run this command on the slave database (download the maatkit tools here),

  • cd ~/maatkit-5014/bin && sudo ./mk-table-sync --[print|execute] u=[user],p=[pass],h=[master_host_name] --databases [database_name(s)] localhost

I suggest you run --print before you run --execute. If you run --execute first, you have no idea what just happened. --print will let you know what it'll do without actually doing anything.
Back to the slave database mysql client, issue these commands,

  1. CHANGE MASTER TO MASTER_LOG_FILE='[file name from show master status command]', MASTER_LOG_POS=[pos]; (see the example below)
  2. START SLAVE;
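For example, if show master status had reported File: mysql-bin.000042 and Position: 107 (made-up values), the first command would be:

CHANGE MASTER TO MASTER_LOG_FILE='mysql-bin.000042', MASTER_LOG_POS=107;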

Run this command,

  • show slave status\G

And check that these aren’t NO or NULL,

Slave_IO_Running: Yes
Slave_SQL_Running: Yes
….
Seconds_Behind_Master: 1634

If things aren’t back to normal, follow the instructions on this website.

Don’t Call it “Case Equality”

July 30th, 2009 by Brett Rasmussen

I’ve recently learned to love Ruby’s “triple equals” operator, sometimes referred to as the “case equality operator”. But I stand with Hal Fulton, author of The Ruby Way, in disliking the latter term, since there’s no real equality going on with its usage. It’s also not really an operator–it’s a method–but I’m not going to complain too loudly about that one, considering that I prefer the term “relationship operator”. I’m also not opposed to “trequals”, which has a certain jeunesse doree about it. You could say “trequals” at a trendy restaurant with post-modern decor and everyone wearing black.

With one equals sign you assign a value to a variable:

composer = "Beethoven"

With two equals signs you see if two things are the same thing:

puts "9th Symphony" if melody == "Ode to Joy"

With three equal signs you get, well, essentially you get a placeholder that you can use to define arbitrary relationships between objects which you will mostly never call by hand yourself but which Ruby will call for you when you run case statements:

class Composer
  attr_accessor :works
  def initialize(*works)
    @works = works
  end

  def ===(work)
    @works.include?(work)
  end
end

The trequals operator (ok, method) returns true or false depending on a condition I’ve defined. Now I can test a given work against a bunch of composer objects using a case statement:

beethoven = Composer.new("Fur Elise", "Missa Solemnis", "9th Symphony")
mozart = Composer.new("The Magic Flute", "C Minor Mass", "Requiem")
bach = Composer.new("St. Matthew Passion", "Jesu, Joy of Man's Desiring")

case "Requiem"
  when beethoven
    process_beethoven_work
  when mozart
    process_mozart_work
  when bach
    process_bach_work
end

The trequals is called behind the scenes by Ruby. Since I’ve defined it on the Composer class to look for a matching entry in that composer’s list of works, the case statement becomes a way of running different code based on which composer wrote the work in question.

This example is contrived, of course, because if it was this simple a need you’d probably just check “some_composer.works.include?(‘Requiem’)” by hand. But the example demonstrates the crucial point, that there’s no equality being checked for. A work in no way is the composer. It’s a relationship that the case statement is checking for–the given work was written by the given composer–and it’s a relationship that I’ve defined explicitly for my own music-categorizing purposes.

That case statements work this way is yet another example of the magical and powerful stuff that characterizes Ruby. Instead of simply a strict equality match, we can now switch against multiple types, all with different definitions of what qualifies as a relationship:

class String
  def ===(other_str)
    self.strip[0, other_str.length].downcase == other_str.downcase
  end
end

class Array
  def ===(str)
    self.any? {|elem| elem.include?(str)}
  end
end

class Fixnum
  def ===(str)
    self == str.to_i
  end
end

string_to_test = "99 Monkeys"
case string_to_test
  when "99 monkeys jumping on the bed"
    do_monkey_stuff
  when ["77 Rhinos Jumped", "88 Giraffes Danced", "99 Monkeys Sang"]
    do_animal_behavior_stuff
  when 99
    do_quantity_stuff
  when /^\d+\s+\w+/
     do_regex_stuff
end

Here, if the string to be tested is the first portion of the larger string (case-insensitively speaking), if it is part of any of the elements in the specified array, if it starts out with 99 (string.to_i returns only leading integers), or if it matches the given regular expression, the respective code will be run. In this case, it matches all of them, so only the code for the first case–the string match–will be run (in Ruby, switches automatically stop at the first match, so you don’t need to give each case its own “end” line).

Note that I didn’t need to define (actually, override) the trequals on the regular expression. The relationship operator is a method on Object, so all Ruby objects inherit it. If not overridden, it defaults to a simple double-equals equality check (thus contributing to the momentum of the misnomer “case equality”). But some standard Ruby classes already come with their own definition for trequals. Regexp and Range are the notable examples: Regexp defines it to mean a match on that regular expression, and Range defines it to mean a number that falls somewhere within that range, as such:

num = 77
case num
  when 1..50
    puts "found a lower number"
  when 51..100
    puts "found a higher number"
end

Note that since === is really a method, it is not commutative, meaning you can’t swap sides on the call; “a === b” is not the same as “b === a”. If you think through it, it makes sense. You’re really calling “a.===(b)”. If a is an array, you’re calling a method on Array, which will be defined for Array’s own purposes. If b is a string, and you swapped the order, you’d be calling a String method, which would have a different purpose for its trequals operator, so “b.===(a)” would most likely be something quite different. This concept also means that the variable you’re testing in a case statement is being passed as a parameter to the trequals methods of the various case objects, not the other way around. These two snippets are equivalent:

case "St. Matthew Passion"
  when mozart
    process_mozart_work
end

process_mozart_work if mozart === "St. Matthew Passion"

Note that the second snippet was not

process_mozart_work if "St. Matthew Passion" === mozart

It’s also good (although I’m not sure how useful) to know that the relationship operator is used implicitly by Ruby when rescuing errors in a begin-rescue block.

begin
  do_some_stuff
rescue ArgumentError, SyntaxError
  handle_arg_or_syn_error
rescue IOError
  handle_io_error
rescue NoMemoryError
  handle_mem_error
end

In this example, Ruby runs ArgumentError.===, passing it the global variable $!, which holds the most recent error. If that returns false, it moves along, doing the same with SyntaxError, IOError, and NoMemoryError, each in turn. With errors, the trequals is defined to just compare the class of the error that occurred with that of each candidate class (in this case, ArgumentError, etc.) and its ancestors.
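You can run the same check by hand to see it in action:

begin
  raise ArgumentError, "bad input"
rescue => e
  puts ArgumentError === e   # => true
  puts IOError === e         # => false
end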

It took me a long time before I cared about this little Ruby feature, which I think is sad. I think I just saw the phrase “case equality” and thought something like “Hmm, another subtle variation on what it means for two objects to be equal. I’m sure I’ll have occasion to use this someday. I’ll figure it out then.” But it’s more useful than that, and I think it would get better traction without the specious nomenclature.

Ruby file trimming app

July 17th, 2009 by hals

We recently had an interesting experience with very large files. These were comma delimited files (.csv) containing hundreds of thousands of records, each with a dozen or so fields.

e.g.

rec1,field2,,,,,,xxx,fieldn,,,1,2,3,,,fieldx
rec2,field22,,,a,s,d,fieldmore,,,,etc
...
recn,field2n,,,,ring,,,,ring,1,2,,,hello?,,etc

While testing the setup, we had smaller files to work with. The goal was to create a new file containing only the first field from each record.

e.g.

rec1
rec2
...
recn

During testing this was easily done by opening the file in a spreadsheet program (such as OpenOffice), which would split the records on the comma delimiter and place each field in a different column. Then, it was easy to select the first column and write it out to the new file.

On switching to production files, we discovered that OpenOffice has a limit of 65k rows – a fraction of what we needed. We then tried some other spreadsheet programs, which produced the same results. We knew there was at least one spreadsheet program that would work, but it was not open source.

At this point the comment was made: “well, we ARE ruby developers …”

And that lead to the following simple solution to the problem at hand.

With a few lines of ruby code, the source files could be read in, line by line, split on the comma delimiter, and the first entry written out to the destination file.

So, when the usual tools just don’t work – remember that a new ruby tool might be just around the corner.

#!/usr/bin/ruby
#
#  trimfile.rb
#

require "rubygems"
require "ruby-debug"

class Trimfile

  attr_accessor :fileName, :newFile

  def initialize(fileName, newFile)
    puts "\nSplit off first comma delimited item of each line."

    @fnam = fileName
    if @fnam == nil then @fnam = "trimin.txt" end

    @newfnam = newFile
    if @newfnam == nil then @newfnam = "trimout.txt" end

    linecount = 0
    puts "\nFilenames - input: #{@fnam}, output: #{@newfnam}"

    aFile = File.new(@newfnam, "w")
    IO.foreach(@fnam) do |line|
      aFile.puts line.split(',')[0]
      linecount += 1
    end
    aFile.close

    puts "\nTotal lines: #{linecount}"
  end

end

test = Trimfile.new(ARGV[0], ARGV[1])


Working with Git

June 19th, 2009 by Narshlob

This tutorial covers all the commands (hopefully) we’ll need for the projects we build here at PMA. If there’s anything that needs to be added to it, feel free to comment.

Starting with the basics, we’ll first cover retrieving a project:

git clone [repository]

This will get the currently active branch from the repository

Obviously you’ll want to do something with this newly retrieved working copy of the project. Let’s first create a branch for the new features/bug fixes we’ll be coding.

git checkout -b [newbranchname] origin

Ok, so we got a new branch. While we’re coding, it’s a good idea to commit tons of times to preserve the changes we’ve just made. Don’t worry about the log, we’ll make it pretty later. Just commit often.

git add [filename(s)]
git commit

If it’s a small change that doesn’t require much explanation, you can use these commands

git add *
git commit -m "The commit message"

Or, even shorter

git commit -a -m "The commit message"

You finished that feature, so now it's time to merge that branch with the master branch (or some other branch, depending on what VCS (Version Control System) paradigm you/your team chose). First, you should make those hard-to-read commits less hard-to-read. Let's rebase!
Please note that if you rebase after pushing to the repository, you will create problems for those pulling from that repository. Rebase changes the history of the project, and your teammates' merges will not be fast-forward[able]. It won't be pretty, trust me.

git rebase -i HEAD~[number of commits back]

You’ll now be looking at something similar to this:

pick ce86448 A random commit
pick a8564a9 Another random commit

How you order things in this editor will affect the order of the commits. Note that merge commits are not shown here. They aren’t editable.
Replacing “pick” with “edit” will allow you to edit the changes you made as well as the commit message.
After you’ve edited the files you wanted to edit, you can now

git add *

then

git commit --amend

and move on.

As I mentioned before, you have the opportunity to clean up the mess you made with all those many commits using the rebase option. Here’s how:

  1. Run the
    git rebase -i HEAD~[x]

    command from earlier

  2. Replace “pick” with “squash” on the commit you want combined with the one immediately before it.
    pick ace72dd I squashed these commits. I'm cool.
    squash e99fd59 This commit will be squashed with the one above it
    pick d0770e8 committed again
    pick af845d0 I'm really committed
    

Pretty straightforward and easy. Everyone loves rebasing!
If you want to know more about git-rebase, I recommend checking it out here.

Now, you’ve made all these changes and everything looks great. What are you gonna call this pretty new feature? Are you gonna tag it? I would..

git tag -a [fancy_feature] -m "A fancy message for a fancy feature"

If you happen to leave out the -m, git will open an editor and you’ll be able to add your fancy message there, just like with commit!
Read more about tags here.
One scenario for using tags could be that, within a project, one wants to keep track of versions. Each release could be tagged with a version number, much like bug-fix releases are in some VCSs. If something goes wrong, it's really simple to go back to a previous version by checking out its tag.
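For example, with a made-up tag name:

git tag                 # list existing tags
git checkout v1.0.2     # check out the tagged version (detached HEAD)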

You’ve squashed those ugly commits, changed the commit message(s), and tagged everything. Time to merge. Switch to the master branch (or whatever branch you’re wanting to merge with) and type this command which will merge [branchname] with the current branch:

git merge [branchname]

You’ll probably want to fix any conflicts and continue with the merge.
It’s now safe to delete [branchname] because all the changes from that branch are now on the current one.

git branch -d [branchname]

This next feature is pretty neat. Say you’ve done a bunch of changes that haven’t been committed yet and you realize you aren’t on a feature branch. Here’s what you do:

git stash
git stash branch [newbranchname]

This will stash away all uncommitted changes, create a new branch and check it out, then unstash all your stashed changes into that new branch. Awesome

Stash is also useful in scenarios where you don’t want to commit yet but you need to switch to a different branch and do something. You could stash the current changes using the above git stash command, do the needed changes, then switch back to the branch you were working on and use this command to unstash the changes:

git stash apply

It’s a good idea to check out the other things offered by git stash (git-stash)

To get this new branch into the origin repository, do:

git push origin [branchname]

To delete a branch from the origin repository, do:

git push origin :[branchname]

Don’t forget the colon!

Another scenario: your co-worker does some work on a feature and gets stuck. You don't want to type on their computer because yours is set up just the way you want it. Is there a solution to this quandary? Yeah. There is…
Tell them to push their changes, then do this:

git fetch origin [remotebranchname]:[localbranchname]

You now have the branch they were working on locally and can modify it to your heart's content.


Noteworthy Notes

There’s a difference between

git pull

and

git fetch

The difference is that “git pull” will run a “git fetch” then a “git merge” to merge the retrieved head into the current branch.
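In other words, assuming you're on master and tracking origin, these are roughly equivalent:

git pull origin master

git fetch origin master
git merge FETCH_HEAD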

git log -p path/to/file.rb

This command will show the history of a specific file;

git blame path/to/file.rb

will go line by line in a file and give a short description + the name of the person that changed the line last (brilliant, actually)


From the Git manual

A few configuration variables (see git-config(1)) can make it easy to push both branches to your public tree. (See the section called “Setting up a public repository”.)

$ cat >> .git/config <<EOF
      [remote "mytree"]
           url =  master.kernel.org:/pub/scm/linux/kernel/git/aegl/linux-2.6.git
           push = release
           push = test
EOF

Then you can push both the test and release trees using git-push(1):

git push mytree

or push just one of the test and release branches using:

git push mytree test

or

git push mytree release

To rebase your current working tree to obtain the changes from the master tree, suppose you create a branch “mywork” on a remote-tracking branch “origin” and create some commits on top of it:

$ git checkout -b mywork origin
$ vi file.txt
$ git commit
$ vi otherfile.txt
$ git commit


You have performed no merges into mywork, so it is just a simple linear sequence of patches on top of “origin”:
 o--o--o

$ git checkout mywork
$ git rebase origin

This will remove each of your commits from mywork, temporarily saving them as patches (in a directory named “.git/rebase-apply”), update mywork to point at the latest version of origin, then apply each of the saved patches to the new mywork. The result will look like:

 o--o--O--o--o--o

If a patch doesn't apply cleanly, fix up the conflict, git add the resolved files, and then run

$ git rebase --continue

and git will continue applying the rest of the patches.
At any point you may run git rebase --abort to abort this process and return mywork to the state it had before you started the rebase.


The commands

Here’s a list of all commands covered in this tutorial:
git-clone
git-checkout
git-add
git-commit
git-rebase
git-tag
git-merge
git-branch
git-stash
git-push
git-pull
git-fetch

Asynchronous Processing with Workling and Starling

June 18th, 2009 by fugufish

When working with applications whose actions may take some time to complete, it may be better to handle the request asynchronously. A quick and easy way to do this is using Starling and Workling. Starling is a lightweight message queue that speaks the memcached protocol, and Workling is a simple, lightweight consumer. Setup is dead simple:

First, install Starling:

 sudo gem install starling 

This will install Starling and its dependencies (memcache-client and eventmachine) if you don't already have them.

Now install Workling. This doesn’t have a gemspec so we will install it as a plugin:

cd ~/path_to_your_project
script/plugin install git://github.com/purzelrakete/workling.git

Finally, tell Workling, which will want to use Spawn by default if it is installed on your machine, to use Starling by placing this in your environment.rb:

Workling::Remote.dispatcher = Workling::Remote::Runners::StarlingRunner.new

That is it for the installation process! Easy. Now for actually handling requests. Believe it or not, it is just as simple as the installation. Say you have a controller that has to do several long-running tasks, something like this (the method names in these sketches are made up):

class SkinnyController < ApplicationController
  # hypothetical action that runs several long tasks inline,
  # blocking the request until they finish
  def create
    do_long_running_task_one
    do_long_running_task_two
    render :text => "done"
  end
end

Now typically, you should avoid doing things in a request that take longer than a few seconds to complete, and that is fine for most application requirements. In some cases, however, it is inevitable that a few tasks will take much longer, as above. That is where Workling comes in. Simply refactor the code into a worker (conveniently located in app/workers); again, a sketch:

# app/workers/fat_worker.rb
class FatWorker < Workling::Base
  # Workling workers subclass Workling::Base; each public method becomes
  # a job that can be run in the background (method name is made up)
  def do_heavy_lifting(options = {})
    do_long_running_task_one
    do_long_running_task_two
  end
end

Now, in your controller, call the worker; the asynch_ prefix is Workling's convention for dispatching the call to the background:

class SkinnyController < ApplicationController
  def create
    # queues do_heavy_lifting on Starling and returns immediately
    # instead of running it inline
    FatWorker.asynch_do_heavy_lifting(:user_id => params[:id])
    render :text => "queued"
  end
end

Just start up starling and workling (starling start and script/workling_client start, respectively) and that is all. You can now handle large tasks asynchronously, and because the tasks are queued with Starling, the action can be called multiple times; it will queue up the worker and process each job as soon as the previous tasks are complete.

The Scan() Method for Regular Expressions

June 16th, 2009 by Chris Gunnels

As I was writing a simple script to show off the education I received from reading The Ruby Way, 2nd Edition, chapters 2 and 3, I found a really neat method that helped me complete my task. If you didn't read the title of this blog post then you're out of luck, but if you did, you'll know that I am talking about the scan() method.

Back to my script: since I wanted to find the number of times white space showed up in a given string, I had to come up with a way to count white space. My first thought was to do some regular expression matching. Well, after a little thought and a lot of reading, I found just the thing.
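Counting whitespace with scan() comes down to a one-liner, something like:

"one  two three".scan(/\s/).size   # => 3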

Advanced Routing Wireless To Your LAN With Cisco WRT400N

June 10th, 2009 by Aaron Murphy

Advanced routing wireless to LAN with the Cisco WRT400N router is simple, once you know how. Just do the basic setup on the router as usual, only disable the DHCP server. Then go to the Advanced Routing page, disable NAT, and enable Dynamic Routing. Now you can connect a local network to one of the four LAN ports. Don't use the WAN port, as the Internet connection will go through your main LAN.

Copyright © 2005-2016 PMA Media Group. All Rights Reserved.