I recently set up a MySQL master and slave.

The main reason I did this was so I could run some queries and do light data mining, looking for patterns in our datasets. The issue is that the live database is optimized for inserts and updates, not for views or summaries.

By creating a replicated database, I was able to index my slave differently, making it better suited for pulling these report views.

I am NO MySQL database expert, but I was able to follow the simple directions on the mysql.com website and get this up and running relatively quickly and painlessly.

This is how I did it:

  • I configured my master by adding the following to my.cnf and restarting the mysql instance:
  • server-id=1276024641
    log-bin=mysql-bin
    log-error=mysql-bin.err
    binlog_do_db=<my_target_db_name>
  • the binlog_do_db line restricts replication to that one database; omitting it will replicate all of the databases on the master to the slave
  • I added a unique, random number as the server-id in my slave machine’s my.cnf file. The line I added looked like this: server-id=1281421047
  • I then restarted the slave mysql instance
  • I then set up the replication user on the master. I used phpMyAdmin, by clicking on the “Replication” tab, then the “add slave replication user” link. I saved this info
  • Because I am running a live database, I needed to locate the master’s binary log coordinates. To do this, you need to ensure no one is changing the data; the manual recommends locking the tables, but on our transaction-heavy system I just closed the db for the next step.
  • I then proceeded to output a mysqldump: mysqldump -p my_database_name > db_script.sql
  • While this was saving to file, I went into phpMyAdmin under Replication and clicked “show master status”, which displayed:
  • Variable            Value
    File                mysql-bin.000120
    Position            83336594
    Binlog_Do_DB        my_database_name
    Binlog_Ignore_DB
  • Once the data dump was complete, I re-enabled traffic to the master db.
  • I then copied the .sql file to the slave machine and imported it using: mysql -p my_database_name < db_script.sql
  • Once the db snapshot was imported, I needed to point the slave at the master and start the replication thread, by running the query:
  • CHANGE MASTER TO
    MASTER_HOST='master_host_name',
    MASTER_USER='replication_user_name',
    MASTER_PASSWORD='replication_password',
    MASTER_LOG_FILE='recorded_log_file_name',
    MASTER_LOG_POS=recorded_log_position;
  • then start replication with: START SLAVE;
  • From there, the slave should keep in sync with the master
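Once the slave is running, replication health can be checked from the mysql client on the slave machine. A quick sketch (these field names come from MySQL's SHOW SLAVE STATUS output):

```shell
# run on the slave: confirm both replication threads are alive
mysql -p -e "SHOW SLAVE STATUS\G" | grep -E 'Slave_IO_Running|Slave_SQL_Running|Seconds_Behind_Master'
# Slave_IO_Running and Slave_SQL_Running should both read "Yes",
# and Seconds_Behind_Master should trend toward 0
```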

We recently started looking into working with the TrialPay system
TrialPay Referral Program

There is a movement away from the typical offerwalls into networks that provide “quality” offers. TrialPay specializes in shopping and some survey offers. There aren’t any app install or toolbar download offers.

What really opened our eyes to this provider is that Facebook exclusively (so far) uses them for their Facebook Credits system. We are in the early stages of testing. More to come.

I’ve set up MRTG a few times now, and I always get stuck when trying to use custom scripts.

The caveat is that when you target a custom script, the notation is:

Target[localhost.mem]: `/etc/mrtg/scripts/mem.pl`

or something along those lines. The catch is that the quotes are not apostrophes (') but backticks (`, the character at the top left of the keyboard). This took me a little while to figure out.
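For context, a fuller mrtg.cfg stanza around that Target line might look like this. This is a sketch: the WorkDir, MaxBytes (here, kB on a 4 GB box), and titles are assumptions, not values from my actual config:

```
WorkDir: /var/www/mrtg
Target[localhost.mem]: `/etc/mrtg/scripts/mem.pl`
MaxBytes[localhost.mem]: 4194304
Options[localhost.mem]: gauge,nopercent,growright
Title[localhost.mem]: Memory Usage
PageTop[localhost.mem]: <h1>Memory Usage</h1>
YLegend[localhost.mem]: kB
```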

The perl script I am using is:

#!/usr/bin/perl
# MRTG custom-script probe: prints the four lines MRTG expects
# (value 1, value 2, uptime string, hostname).

$machine = `/bin/hostname`;
chomp($machine);
$mem = `/usr/bin/free | grep Mem`;
$uptime = `/usr/bin/uptime`;

# "Mem:  total  used  free ..." -- capture the first three numeric columns
if ($mem =~ /^Mem:\s*(\d+)\s*(\d+)\s*(\d+)/) {
 $tot = $1;
 $used = $2;
 $free = $3;
}

# pull the human-readable uptime out of the uptime(1) output
if ($uptime =~ /up (.*),\s+\d+ users?,/) {
 $up = $1;
}
print "$used\n";
print "$free\n";
print "$up\n";
print "$machine\n";

I took this from another site. I will probably write my shell scripts in PHP instead; they are just as easy, and just as powerful.
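Until then, the same probe can be sketched in plain shell. This is a rough equivalent, assuming a Linux box with /proc/meminfo and /proc/uptime (values in kB and seconds); MRTG only cares about the four output lines:

```shell
#!/bin/sh
# shell version of the MRTG memory probe: used, free, uptime, hostname
used_free=$(awk '/^MemTotal:/ {tot=$2} /^MemFree:/ {free=$2} END {print tot-free, free}' /proc/meminfo)
used=${used_free% *}
free=${used_free#* }
up=$(awk '{printf "%d days", $1/86400}' /proc/uptime)
echo "$used"   # value 1 for MRTG
echo "$free"   # value 2 for MRTG
echo "$up"     # uptime string
hostname       # machine name
```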

I just wrestled with getting SNMP working for monitoring memory and TCP. What is needed is:

apt-get install libnet-snmp-perl libcrypt-des-perl libcrypt-rijndael-perl libdigest-sha1-perl libdigest-hmac-perl

Case: I need to loop over an array in PHP and remove elements based on some condition.

Solution:

print_r($arr);
foreach($arr as $key => $val)
{
  if($val == 'xyz')
  {
    unset($arr[$key]);
  }
}
print_r($arr);

Works great! Unsetting elements inside foreach is safe here because PHP's foreach iterates over a copy of the array; for simple conditions, array_filter() can do the same job in one call.

So after f8 2010, Facebook opened up its new APIs. I started to tinker with the new Facebook Social Graph API.

Every object in the social graph has a unique ID. You can fetch the data associated with an object by fetching https://graph.facebook.com/ID.
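For example, from the command line (the ID here is a placeholder for any public object):

```shell
# fetch the JSON for a graph object; replace <ID> with a real object id
curl https://graph.facebook.com/<ID>
```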

Facebook also updated their policies to allow caching of their data to reduce API hits to their system. That makes sense, as people were probably storing the data anyway, and it reduces Facebook’s server load. A new service they now offer is Real-time Graph subscriptions. This service lets you subscribe to graph objects, and in near real time (approx. 1 minute or less) your callbacks get pinged when data changes in their graph. It’s pretty cool, and I have set up a basic prototype for testing.

To make this work I had to do a few things.

  1. I turned on error_log in my php.ini, so I had a place to capture the callback and follow the real-time updates
  2. I installed the pecl_http extension: pecl install pecl_http
  3. On Ubuntu I first needed to run sudo apt-get install libcurl4-openssl-dev
  4. I added extension=http.so to /etc/php5/apache2/php.ini
  5. I restarted apache
  6. I checked phpinfo() to confirm the extension loaded
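The subscription itself is created with an authenticated POST to the Graph API. A rough sketch with curl; the app id, app access token, object/fields, and callback URL are all placeholders, and Facebook will immediately hit the callback with a hub.challenge value that your endpoint must echo back:

```shell
# subscribe to changes on user objects (placeholders throughout)
curl -d "object=user" \
     -d "fields=name,picture" \
     -d "callback_url=http://example.com/fb_callback.php" \
     -d "verify_token=some_secret_string" \
     "https://graph.facebook.com/<APP_ID>/subscriptions?access_token=<APP_ACCESS_TOKEN>"
```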

My basic prototype is working :), next I will be integrating it into our game

Scaling with Cassandra

April 7, 2010

http://arin.me/blog/wtf-is-a-supercolumn-cassandra-data-model