Dreaming Of Beetles

A Misanthropic Anthropoid With Something to Say

Archive for the ‘Technology’ Category

Firefox.next (Namoroka) Is Coming

Posted by Chris Latko On June - 30 - 2009
Minefield

I’ve built a pre-Alpha 1 version of Firefox 3.6, also known as Namoroka (continuing the park theme; Namoroka is a national park in Madagascar).

As with previous Shiretoko builds I’ve done, I’m using the same .mozconfig file for optimization as well as fiddling with the application defaults for speed. I am no longer changing the User-Agent on these builds as some people are reporting problems when accessing sites such as Facebook that don’t recognize the UA. When are people going to stop browser sniffing?
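
For the curious, that kind of .mozconfig looks roughly like this. Treat it as a sketch of the flags involved, not my exact file — the optimization options below are illustrative:

```shell
# Illustrative .mozconfig for an optimized Intel build.
# The specific -O/-march flags are examples, not necessarily the ones I use.
. $topsrcdir/browser/config/mozconfig
mk_add_options MOZ_OBJDIR=@TOPSRCDIR@/obj-firefox-opt
ac_add_options --enable-optimize="-O3 -march=prescott"
ac_add_options --disable-debug
ac_add_options --disable-tests
```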

In this version, the “parse HTML 5” preference was off by default; I’ve turned it on. One other thing: the browser’s name is “Minefield,” as this is a pre-Alpha and things are supposed to blow up (so be careful).

You can find the Namoroka build on my downloads page.

Twitter is Dying

Posted by Chris Latko On June - 26 - 2009
Follow Like Crazy

Update: I have confirmed that Twitter is using MySQL (unless the mentioned upgrades are a move to a different DB).

With all due respect, Michael Jackson’s death may have given Twitter the final knockout punch. I’m sure you guys are sick of hearing about Twitter’s problems, and frankly I’m sick of writing about them. This new failure is worse than any failwhale; this is a failure of concurrency. Here is a brief timeline of what has happened so far:

  1. June 3 – Twitter realizes there is a half-hour lag on follow/unfollow that should be resolved WITHIN THE NEXT HOUR OR SO.
  2. June 18 – Twitter states it is making infrastructure upgrades to fix the follow/unfollow delay OVER THE NEXT 24 HOURS.
  3. June 23 – Twitter again states there is a lag on follow/unfollow and offers additional info on areas affected – device notification changes and favoriting, BUT WE ALREADY KNEW THIS.
  4. June 23 – Twitter announces additional upgrades for the 24th to fix the problem and says the problem will persist UNTIL LATER IN THE DAY TOMORROW.
  5. June 24 – Twitter says the upgrades were successful and that the catch up period will last FOR AT LEAST ANOTHER 6-12 HOURS.
  6. June 25, 11:26 am – A Twitter employee states “We’re still working on the fix and this is currently the top priority of the services team. It’s a pretty extensive code deployment so it is taking some time.”
  7. June 26 – I’m following fewer people today than I was yesterday and my 1,000 FOLLOWING/DAY LIMIT HAS BEEN HIT.

I can’t speak to Twitter’s infrastructure, but I’ve seen this problem before at one of my own companies. With MySQL replication across multiple servers and tons of activity going on, it is almost impossible for the slaves to catch up. In addition, each replica generates enormous log files so it can recover if replication fails. These log files can quickly fill up a server, especially if you don’t know what you’re doing and have MySQL on a tiny /var partition. Once the log files overrun the server, you cease to replicate until the situation is rectified. I suspect this is the problem, and with each server added (in the above-mentioned upgrades), those log files get nastier and nastier. There is a fix for this replication problem, but it involves taking all systems offline, rsyncing from a master (if there is one), and clearing all logs.
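
You can spot this failure mode early with nothing fancier than df and du. The path below is a placeholder; on a real replica it would be wherever MySQL keeps its data and logs (often /var/lib/mysql):

```shell
# Watch for a log directory about to overrun its partition.
# LOGDIR is a placeholder for your MySQL data/log directory.
LOGDIR=${LOGDIR:-/var/log}
df -h "$LOGDIR" | tail -1                                        # free space left on that partition
du -sh "$LOGDIR" 2>/dev/null | awk '{print "log dir size:", $1}' # how much the logs have eaten
```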

Now MJ steps into the picture and blows the infrastructure away. The search sidebar was removed and later re-added, but this just keeps the failwhale at bay and does nothing but compound the follow/unfollow delays. Now we’re at critical mass with this problem, as follow/unfollow basically does not work, or works inconsistently at best. This is going to turn people off in droves, as the system is not working as expected. The “I just don’t get it” of Twitter has just been amplified. The image above right shows a single person who REALLY wants to follow me; each mail highlighted in blue says “This person has just followed you.” The sad thing is, after all that effort they still aren’t following me.

With the failwhale, people got upset but figured there was so much cool stuff going on that they could hang tight until the system came back. It’s sort of like the logic behind the beta invite. This is entirely different: Twitter isn’t acting as expected.

Also, I’m only talking about one aspect of Twitter in this post. There are the search problems, which still aren’t fixed despite the update provided in that link. There was the “all posts coming from the web” problem, which occurred over a weekend when, apparently, they take a holiday. This may not sound like a big deal, but it was for a lot of developers, and one business even had to SHUT DOWN until the problem was fixed. There are many, many other issues that I’m just not going to bring up.

I’m not giving up on Twitter, but you can find me on FriendFeed.

To Twitter’s credit, they have been fairly open on their status blog, and their employees are pretty active on the mailing lists.

Twitter, We Have A Problem

Posted by Chris Latko On June - 17 - 2009

Spatter, Speet, Spitter, Spam!

Looks like the reCAPTCHA got jacked (or more likely the Mechanical Turk got involved) and the floodgates have opened for the spitters (or whatever you wanna call them). The account shown on the right gained several hundred followers in the time I watched it. Page after page after page of these useless, non-profile-imaged, non-bio’d, non-locationed, never-status-updated freaks.

Back to watching Scotty Got An Office Job or earning another banjo on Hunch. No wait, I’m extremely busy with a crazy deadline. Scratch that.

Off topic: I updated the Intel-optimized Firefox build. It seems an official RC2 slipped out after the announcement of the RC availability. I’ve built the RC2 version and you can download it on the downloads page.

Evil Is As Evil Does

Posted by Chris Latko On June - 4 - 2009

Don’t Be Evil

The famous motto, attributed to Paul Buchheit, belongs to a company that seems to no longer have any choice in the matter. Once you go public, you serve new masters, no matter what you write in the prospectus. Google is not above the market. Nobody is.

Over this past week or so, we’ve seen plenty of evil. Enough to really start to scare me. Here are some examples:

1) Wolfram|Alpha
Sergey Brin interned at Wolfram Research and, as a friend of Stephen Wolfram, was able to see the “computational knowledge engine” months before it was released. Soon after Alpha’s launch, we see Google Squared. This looks a little “cobbled-together-over-two-months” to me, no?

2) Microsoft Bing
Bing has a neato feature that lets you explore your search in more depth via a dynamic left-hand sidebar. This mostly works with the four verticals they are attacking out of the gate: Travel, Shopping, Health, Local. Google preempted this with their own left-hand sidebar that lets you narrow down by media, time, etc. Again, a bit of a hack job if you ask me; these features pretty much already existed. Microsoft slams this Google feature in their Bing promo video (along with some other zingers).

Strike two against Bing is Google Wave. Announced almost simultaneously. Who got more press?

3) Google Wave
This is a slam against FriendFeed, which was created by none other than Paul Buchheit. Actually, it’s a slam against Twitter, Tumblr, Facebook, and a host of others, but I see it most resembling FriendFeed. Why is this evil? Well, it isn’t really…. competition is good and all… but CHROME IS OPTIMIZED FOR WAVE. That is evil.

4) Yahoo Announcements
These were a bit abstract and lame, so I don’t blame the press for ignoring them, but let’s look at Google. Google’s “Searchology” happened the day after Yahoo’s event, with similar announcements: user intent, microformats, and mobile search. Check out coverage of Google’s event and Yahoo’s event for the fine details.


The #fixreplies Kerfuffle

Posted by Chris Latko On May - 23 - 2009

Update: After almost a week of testing this “| ” @ reply concept, I heard from a large group of my followers that this is NOT what they want. The consensus seems to be that individual users want to be able to choose for themselves whether they see these replies or not. Makes sense.

Apologies for being slow on the draw, but this is insane. I didn’t really realize what was going on because I was always on the inside looking in – I was in the “power user” 3%, but following some 40k or so people meant that I always saw some @replies flying around. I thought people were taking this update a little too seriously and kind of brushed it off.

Also, going in, I thought “show all @ replies” meant that I would see BOTH sides of the conversation (which was incorrect). This further clouded my vision of what was going on, plus there were a few blogs that told the wrong story – I guess I was confused. ReadWriteWeb, as always, has a great analysis of this, which I did read, but I should have paid more attention.

I thought, well, Twitter should just flip the switch to make the default “show all @ replies” for everyone: take the 3% route instead of the 97% route. This would show everyone BOTH sides of the conversation (which is not correct) and everyone would be happy. But if everyone saw both sides of the conversation, people could inject spam into the million-follower club and all hell would break loose! (This is when I realized the BOTH sides thing was totally incorrect, which led me to shoot up in bed and come down here to write this post.)

I logged into my test account and looked at my regular account’s stream, only to find that it was BORING AS HELL! I have a ton of conversations on my regular account, and that is what puts context around my tweets. I tweet some lame stuff sometimes and people ask me about it, so I do my best to follow up with those users, not realizing no one else is seeing these tweets.

Twitter waffled on the decision a bit and made a totally lame argument for why this was being done (lame technical and lame UX arguments), made a lame attempt to remedy the situation in the short term, and made a lame promise for the future:

Second, we’ve started designing a new feature which will give folks far more control over what they see from the accounts they follow. This will be a per-user setting and it will take a bit longer to put together but not too long and we’re already working on it.

This future “per-user setting” invalidates their previous technical argument (and the UX argument). What is going on?

From here on out, please prepend all @ replies with a pipe (“| ”) so we aren’t forced to live in Twitter’s fantasy land. This doesn’t seem to work on all platforms; just make sure your post is not an “in reply to.”

Pulling Content Out Of OS X Cache.db Files

Posted by Chris Latko On May - 19 - 2009

I’m not sure exactly when (most likely when Leopard was released), but applications started storing their cache files as SQLite databases (usually named Cache.db). For example, Safari has its cache at:

~/Library/Caches/com.apple.Safari/Cache.db

Apps that haven’t caught up yet are still using the less efficient .cache files. Though not as efficient, these files are easier to access: just open one in BBEdit and you can see the contents. Try doing that with a 100+ MB .db file and prepare to wait.

There probably is a GUI app to extract data from .db cache files, but I’m too lazy for that. OS X has everything you need already built in so fire up Terminal.app (I’ve been playing with Visor lately) and dig into your cache:

# cd ~/Library/Caches/com.apple.Safari/
# sqlite3 Cache.db

You’ll be in the sqlite interactive mode:

sqlite> select * from cfurl_cache_response;
sqlite> select receiver_data from cfurl_cache_blob_data where entry_ID = [1234];

To output the data to a file use the following:

sqlite> .output test.html
sqlite> select receiver_data from cfurl_cache_blob_data where entry_ID = [1234];
sqlite> .exit
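
The same extraction works non-interactively, since sqlite3 accepts a query as an argument and you can redirect the output. Here’s a self-contained demo against a throwaway database; only the table and column names mirror Safari’s Cache.db schema, the data is made up:

```shell
# Build a stand-in database on the spot, then pull a blob column
# straight to a file without entering interactive mode.
DB=/tmp/demo_cache.db
rm -f "$DB" /tmp/test.html
sqlite3 "$DB" "CREATE TABLE cfurl_cache_blob_data (entry_ID INTEGER, receiver_data BLOB);"
sqlite3 "$DB" "INSERT INTO cfurl_cache_blob_data VALUES (1234, '<html>cached page</html>');"
sqlite3 "$DB" "SELECT receiver_data FROM cfurl_cache_blob_data WHERE entry_ID = 1234;" > /tmp/test.html
```

Against the real Cache.db you’d just point DB at ~/Library/Caches/com.apple.Safari/Cache.db and use a real entry_ID.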

That should do it. Any questions? Leave a comment.

Firefox 3.5 beta 4 (Shiretoko) Intel Optimized Build

Posted by Chris Latko On April - 28 - 2009

Is now on the downloads page. I noticed there is now default support for Mozilla Geode and gestures aimed at Win7. A couple of people have advised me on how to make this build even faster. Once I’m out of my sickened stupor (hope it’s not swine flu), I’ll post an update.

Sign in with Twitter

Posted by Chris Latko On April - 20 - 2009

Recently Twitter updated their API wiki with a new “Sign in with Twitter” page that explains OAuth in more detail and provides several “Sign in” buttons. This created a big buzz, with ReadWriteWeb, TechCrunch, Mashable, and others all calling it a new entrant in the portable-ID sector (OpenID, Facebook Connect, Google Friend Connect, etc.). I called BS on this, as the authors were premature in their predictions (as were all the commenters on these stories).

One author, whom I highly respect, contacted me directly asking what my take on the story was. Here is my response (with slight modifications):

Not sure of your technical level, but I’m going to breeze through this.

There are two fundamental open credential mechanisms: OpenID and OAuth. Most “single sign on” is based on OpenID or a variant (both Google and Facebook are embracing and extending here). The problem with OpenID is that it is HTTP-based and actually requires you to visit the issuing site to supply your credentials. This won’t work for every case, such as mobile apps or basically any non-web app. This is what I refer to as the OpenID dilemma.

With OAuth, the login process is decoupled further. So if you are on a mobile app and attempt to sign in with Twitter, the app will tell you to visit twitter.com to complete the process. You visit twitter.com and are presented with a dialog saying “so-and-so app is requesting authorization.” At that point you approve or deny. Once approved, the mobile app forevermore has the ability to access your Twitter account. As far as I know, the first large adopter of this model was Flickr. It is sort of ironic that Twitter actually began its OAuth efforts years ago.

In the Twitter API, the OAuth calls have been available ever since I started developing my own Twitter tools, so I always wondered why OAuth was never forced on third-party developers (I think this was just a smart business decision). So now we have thousands of third-party Twitter apps that request your username/password, and you have no idea how reliable the apps are or the people behind them.

In an effort to increase OAuth usage, Twitter added the “Sign in with Twitter” buttons (and also gave the OAuth calls more prominent placement on the main API page). There really isn’t anything new here except a few graphics and a little more documentation on OAuth from Twitter. You can see an example of how it actually works at twittermass.com.

So the bottom line is that OpenID is used more often for “single sign on” and OAuth is used as a security measure for API calls. This doesn’t mean OAuth CAN’T be used for “single sign on,” but I highly doubt that it will be.
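
For the curious, the crypto underneath an OAuth 1.0a signed request is just HMAC-SHA1 over a “signature base string,” keyed by the consumer secret and token secret joined with an ampersand. A sketch with made-up keys and a simplified base string (a client library normally builds all of this for you):

```shell
# OAuth 1.0a request signing, minus a real consumer key/token.
# All secrets and the base string below are made up for illustration.
CONSUMER_SECRET="kd94hf93k423kf44"
TOKEN_SECRET="pfkkdhi9sl3r4s00"
# Base string = METHOD & percent-encoded URL & percent-encoded sorted params.
BASE_STRING="GET&https%3A%2F%2Fapi.twitter.com%2F1%2Fverify_credentials.json&oauth_consumer_key%3Dabc123"
SIGNING_KEY="${CONSUMER_SECRET}&${TOKEN_SECRET}"
SIGNATURE=$(printf '%s' "$BASE_STRING" | openssl dgst -sha1 -hmac "$SIGNING_KEY" -binary | base64)
echo "$SIGNATURE"
```

The resulting base64 value is what travels as the oauth_signature parameter on the request.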

Twitter is being extremely cautious with their model right now so throwing down the gauntlet of a new “single sign on” really doesn’t make sense. I have no inside information, so I could be totally wrong here.

If you have any insights on this, I would love to hear them.

Firefox 3.1 Intel Optimized Build

Posted by Chris Latko On March - 13 - 2009
Shiretoko

Update: Shiretoko 3.1b4pre is now available, with some new numbers and a slightly updated FAQ.

BeatnikPad has been offering G4/G5/Intel optimized builds of Firefox 3.0.x and earlier for a number of years now and I’ve grown somewhat reliant on them. This has been a great service to the Mac community and I really appreciate all of Neil’s efforts. He is not only timely with the builds, but is very good with user support as you can see in his comments.

I’ve been using WebKit, Minefield, and increasingly Opera as my main browsers for a while now (along with Bon Echo (Firefox 2)), and have recently been running Shiretoko (Firefox 3.1) to take advantage of TraceMonkey. But I’ve been longing for an Intel-optimized build and haven’t found one, so I’ve made one.

Shiretoko 3.1b3pre posted a SunSpider JavaScript Benchmark time of 1333 ms, and 3.1b4pre clocks in at 1449 ms (lower is better). The regex engine is vastly improved, while 3d/access/math took a hit. I think I can optimize further with the browser config, but don’t have time at the moment.

I’ve also made a few adjustments to the default config, namely turning on TraceMonkey and other minor tweaks to eke some additional speed out.

Go To Downloads Page

Mini FAQ

What’s the deal with all these weird names?
Non-official builds cannot use Firefox branding. I guess I could call it something else, but everyone in the dev community knows this particular version as Shiretoko.

Is Shiretoko Japanese for something?
Yes. Dev builds are named after parks, and this one is named after Shiretoko National Park in northern Japan (thanks, Mike).

Is this going to break my existing Firefox?
No. You just cannot run them simultaneously.

Will my add-ons work?
Maybe. Firebug works and that’s all that matters to me.

Will you be doing nightly builds?
Yes. Since there is the demand for it, I will start nightlies once my current data crunching project is finished (I cannot interrupt this project every night). I expect to have this done by the end of March.

Will you build for different architectures?
No. Intel is where it’s at.

SSH Login Without Password

Posted by Chris Latko On March - 4 - 2009

This is the old public/private SSH key switcharoo that allows clients to log into servers without being challenged for a password. This is one of the least secure SSH setups, but it still beats FTP security by a long shot. Here are the steps:

  1. Make sure you have added the server’s RSA key fingerprint to the client’s “known_hosts” file. This is as easy as attempting to ssh to the server and answering YES at the prompt. The key will then automatically be registered in the “~/.ssh/known_hosts” file. You don’t even need to successfully SSH to the server at this point to get the key registered. This step can actually be skipped, as you will register the key in step 3 when you scp.

  2. Generate the client’s SSH key. Just type

    # ssh-keygen -t rsa

    at the prompt (you want an RSA key type), then just hit enter to accept defaults for everything, including leaving the passphrase empty.

  3. Move the client’s public key – “~/.ssh/id_rsa.pub” to the server. You can do something like this

    # scp ~/.ssh/id_rsa.pub hostname:/Users/clatko/

    Where you put the key on the server at this point is irrelevant.

  4. Add the client’s public key to the server user’s “authorized_keys” file. On the server you can “cat” this key to the existing file by doing

    # cat id_rsa.pub >> .ssh/authorized_keys

    Also, you can add keys across users if you want, but this opens up the possibility of abuse (adding a regular user’s key to root’s authorized_keys file, etc.).

That should do it. If this doesn’t work, you probably have a permissions problem somewhere; SSH is very picky if the wrong permissions exist on the .ssh directory or its contents. .ssh needs to be 700, and authorized_keys should be 600 (or stricter).
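
The steps above, condensed into a sequence you can dry-run safely: it uses a scratch directory standing in for the server’s ~/.ssh, so nothing here touches a real host.

```shell
# Dry run of the keyless-login setup. $DEMO/server_ssh stands in
# for ~/.ssh on the server; paths are placeholders.
DEMO=$(mktemp -d)
ssh-keygen -t rsa -N "" -f "$DEMO/id_rsa" -q                 # step 2: RSA key, empty passphrase
mkdir -p "$DEMO/server_ssh"
cat "$DEMO/id_rsa.pub" >> "$DEMO/server_ssh/authorized_keys" # step 4: append the public key
chmod 700 "$DEMO/server_ssh"                                 # the permissions SSH insists on
chmod 600 "$DEMO/server_ssh/authorized_keys"
```

Against a real server, steps 3 and 4 collapse into a single `ssh-copy-id hostname` on systems that ship that helper.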

About Me

Interested in all things tech. Apple, iPhone, OSX, Xcode, LAMP, Obj-C, Cappuccino, Atlas, Sproutcore, JavaScript, Ruby, Python, GNU/Linux.
