Page 2 of 5
Social Groups Home page optimised

Date Posted: 12th Dec 2010 at 10:52 PM

I've optimised the code on the Social Groups Home page, replacing the giant list of public groups with an AJAX call. I've also cached some objects and generally made the page load faster.
Comments 1
Interesting stuff with caches.

Date Posted: 13th Oct 2010 at 9:29 PM

Note, this post is highly technical. If you aren't interested in server-level optimisations, click away now!

So Membase came out very recently, and after looking at it at work I figured "what the heck" and installed it in place of memcached on static. It's a lot easier to track what's going on, as you can see here:

Lots of pretty, realtime graphs. But you'll notice that the most-hit items are the templates (the things that make up the bits and pieces of the site, like the navbar or the header or footer) and stuff like that. So later on I figured: well, I have XCache running (to cache the PHP opcodes), so why not make use of its variable caching mechanism too?

So, what I did was to adjust the caching technique for the templates so that instead of getting any uncached templates from memcached (which involves a trip via TCP/IP to static, and then to memcached, and then back again), I instead just switched to straight up xcache_* functions. Obviously any uncached templates are grabbed from the database and placed into cache again.
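The switch described above boils down to a local-first lookup with a database fallback. Here's a minimal sketch in Python rather than PHP, with a plain dict standing in for the xcache_* variable cache; all names are illustrative, not the actual MTS code:

```python
# Sketch of the template-fetch change: check a local in-process cache
# first (standing in for XCache's variable cache), and only fall back
# to the database on a miss -- no network round trip on a hit.

local_cache = {}  # stands in for xcache_get / xcache_set

def fetch_template_from_db(name):
    # Placeholder for the real database lookup.
    return f"<html for {name}>"

def get_template(name):
    key = f"tpl:{name}"
    if key in local_cache:                # local hit: no TCP/IP trip at all
        return local_cache[key]
    body = fetch_template_from_db(name)   # miss: go back to the database
    local_cache[key] = body               # repopulate the local cache
    return body

# The first call hits the database; the second is served from local memory.
get_template("navbar")
assert get_template("navbar") == "<html for navbar>"
```

The point of the design is that a template hit now costs a memory lookup on the web box itself, instead of a round trip to the static box and back.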

What I noticed was that template fetches accounted for fully 66% of all memcache key hits. You can see this in this graph:

You can see that the number of get requests plummeted immediately, right around the time I copied in the new code. Considering that the actual amount of RAM used by the templates is fairly small (around 700-800 KB), these templates account for a large portion of the traffic going around the MTS network.

What does this mean for the site? Well, since the most-used stuff is now cached locally in memory, it's inherently faster than all the network traffic that would otherwise be there. And since the static box is the one that used to handle all those memcached requests, this means less traffic on that box, which in turn means faster serving of static images, JavaScript, CSS and so on.
Comments 2
Reset password should now clear cache correctly

Date Posted: 8th Aug 2010 at 12:35 PM

I have updated the resetpassword and lostpw scripts to correctly clear the cache for user accounts that have passwords regenerated.

Additionally, changing your password or email address now clears the cache, allowing you to log in immediately with your new credentials.
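The clear-on-change behaviour amounts to dropping the cached account record whenever credentials change, so the next lookup refetches fresh data. A minimal sketch (Python rather than PHP; names are illustrative, not the actual MTS code):

```python
# Hypothetical sketch of invalidating a cached account record on a
# credential change, so the next login sees the new data immediately.

user_cache = {}  # userid -> cached account row (stands in for memcached)

def get_account(userid):
    if userid not in user_cache:
        # Placeholder for the real database fetch.
        user_cache[userid] = {"userid": userid, "email": "old@example.com"}
    return user_cache[userid]

def change_email(userid, new_email):
    # ...update the database with new_email here...
    user_cache.pop(userid, None)  # drop the stale cached copy

get_account(1)
change_email(1, "new@example.com")
assert 1 not in user_cache  # the stale row is gone; next lookup refetches
```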

Lastly, the password reset generation method has been made stronger.
Comments 0
MariaDB is now live

Date Posted: 2nd Mar 2010 at 1:53 PM

I've upgraded the database server from MySQL 5.1.41 to MariaDB 5.1.42. The extra speed and functionality in MariaDB should help us in the future, especially when 5.2 comes out.
Comments 2
Optimising queries... the pitfalls and the good stuff

Date Posted: 25th Feb 2010 at 11:54 AM

Hi All,

As part of my "job" here on MTS, I am basically the guy who looks after the servers and makes sure performance is adequate, everything is running OK, and so on.

One of the ways I do this is by looking at the MySQL slow query log. Here I can identify which queries are, well, basically slow. I then examine why they are slow and optimise them.

The other day I discovered quite a slow query:

SELECT postid, dateline FROM post WHERE threadid = $threadid ORDER BY dateline LIMIT 1;

Now, basically what this query is doing is figuring out the *first* post in a thread (it's used in Edit Post). The problem is that, for a thread of, say, a couple of thousand posts, it takes ages. The sample case was taking 1 min 54 seconds - to return only 1 row!

There are a couple of ways to optimise this. Obviously, first we have to think about what the query is doing. The "dateline" in the SELECT is not used, so we can drop that. Then we are left with just postid, and there are other ways to get the first post.

Firstly, the database table for the thread already contains a field called "firstpostid" which, handily enough, points directly to the first post. In cases where this exists, we can just read it directly from the thread record and not have to do any extra database calls whatsoever - especially not ones that take almost 2 minutes. This is a win situation.

In the cases where this firstpostid doesn't exist, (which is not often, and usually on brand new threads), we can simply do this:

SELECT MIN(postid) FROM post WHERE threadid=$threadid

This basically gives us the lowest postid for that particular thread, which corresponds to the first post. It is orders of magnitude quicker - 0.01 seconds as opposed to 114. Pretty quick, huh?
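To sketch why the rewrite returns the same answer, here is a small SQLite reproduction (Python, with made-up data; it relies on postids increasing in posting order, which is what makes MIN(postid) equivalent to the first post):

```python
import sqlite3

# Minimal reproduction of the rewrite, using SQLite in place of MySQL.
# Table and column names follow the post above, but the data is made up.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE post (postid INTEGER PRIMARY KEY,"
            " threadid INT, dateline INT)")
con.execute("CREATE INDEX idx_thread ON post (threadid)")
con.executemany("INSERT INTO post VALUES (?, ?, ?)",
                [(i, 42, 1000 + i) for i in range(1, 101)])

# Original query: sort every post in the thread just to take the first.
slow = con.execute("SELECT postid FROM post WHERE threadid = ? "
                   "ORDER BY dateline LIMIT 1", (42,)).fetchone()[0]

# Rewritten query: the lowest postid in the thread IS the first post,
# and MIN() on an indexed column avoids sorting the whole thread.
fast = con.execute("SELECT MIN(postid) FROM post WHERE threadid = ?",
                   (42,)).fetchone()[0]

assert slow == fast == 1  # same row, without the expensive sort
```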

Another optimisation I did was to change the way the Inbox list works. Instead of ordering by the pmtext dateline, I simply order by the actual PM id. It's the same list, but because it's ordering on the main table's primary key rather than a secondary table's index, it's a lot quicker. Again, a win situation.

There are always going to be queries I can optimise and things I can improve, but examining the query log, understanding the queries, and understanding the code behind them is key to figuring out how you can speed up your site and get the best performance while presenting the same information back to the user.
Comments 3
Changelog 21st Feb 2010

Date Posted: 21st Feb 2010 at 2:11 PM

* I have added the jQuery Lazy Load plugin to help with long image threads
* When opening/closing or stickying threads, they should now automatically get the information from the database, not the cache
* The member profile pages now have the EP icons required above the thumbnail
* The tutorials forum list has been altered to add the Sims 3 Tutorials forum
Comments 1
Searching is now partially re-enabled

Date Posted: 11th Feb 2010 at 11:28 PM

I have re-enabled the "Search for all posts by..." and "Search for all Threads by..." functionality of the Search system.
Comments 2
Pageviews and Memcached

Date Posted: 17th Dec 2009 at 11:32 AM

Hi All,

I've been doing a lot of work behind the scenes on the general coding of the site, and figuring out ways of improving the speed of day to day usage.

First off, how do we identify areas to speed up? Well, I have a number of ways of doing that. Firstly, there is a script I can run that tells me, in realtime, which pages people are generally accessing, based on the session information. I get a report like this:

forumdisplay.php => 13
register.php => 14
wiki.php => 19
archive => 21
printthread.php => 25
showpost.php => 35
member.php => 57
index.php => 80
download.php (Type : Other download) => 87
download.php (Type 3: Mesh Recolour) => 88
download.php (Type 1: Maxis Recolour) => 116
showthread.php => 126
download.php (Type 2: New Mesh) => 154
download.php (Type 0: Other download) => 204
browse.php => 568
download.php total: => 656
Total: 1659

Here you can see the 2 main pages on the site clearly: download.php and browse.php. As you probably know, download.php is the page that shows you the individual downloads here on the site, and browse.php is the Download Browser that helps you find downloads.

So obviously I know that improving the speed and responsiveness of these 2 pages will benefit nearly two-thirds of the people visiting and using the site.

What I have been working on lately is converting the debugging code I used to use into a combined Firebug/FirePHP system that's activated only for specific people. This allows me, when I browse the site, to see exactly which queries are being run, the memory usage, and so on.

Obviously, to get to that point a lot of code has to be modified: the MySQL layer has to be changed, and debugging code has to be placed inside the various scripts I am debugging. But the payoff is worth it: granular debugging with SQL EXPLAINs, timings, and memory usage for any page on the site.

Armed with this info, I basically do the following: find a page on the site and reload it multiple times, comparing the information I see in the debugging output with what's on the page. This lets me spot information that doesn't change often - which is exactly the information I can optimise.

So, for example, on the pages that show information about the download, we know that things like the EP icons, the meshes used, any recolours of the mesh, and other information like that doesn't actually change very often (unless the creator does it). So this is information we can easily cache.

What I then looked at is the information that gets pulled from the database for every user on the site, on every page refresh - namely, that user's information. Speeding up session handling and limiting the amount of information coming from the database means fewer queries and less load on the database server. So I split the language information off into a separate system and merged it with the user information afterwards, rather than grabbing it all in one lump. I also cache the user information for 1 day, or until it changes.

This is useful because the next optimisation limits how much user and post information comes from the database. If we look at the example of a thread or download, we have 2 main things: the individual posts, and the information about the user who made each post. In a lot of cases, when a user has made multiple posts in a thread (say, the creator comments on some feedback), we have multiple copies of the same information. In a normal vBulletin installation, this information is retrieved for every post, even if it's the same as the last post by that user.

So, to speed this up, I basically split the query in two: grab the post details, then grab the user details. By grabbing the user details separately, we only fetch the users that are distinct on that page - 100 posts by the same user result in that user's information being retrieved once, rather than the same information repeated 100 times. Obviously this is lighter on the database - especially when you use memcached to cache that user's information across threads (think of a creator who has posted multiple things).
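The two-query split above can be sketched like this (Python rather than PHP, and all function names are made up for illustration):

```python
# Sketch of splitting the thread query in two: fetch the posts, then
# fetch each distinct poster exactly once.

def fetch_posts(threadid):
    # Placeholder data: 100 posts, alternating between two users.
    return [{"postid": i, "userid": i % 2} for i in range(100)]

def fetch_users(userids):
    # One query for the distinct user ids, instead of repeating the
    # same user's row for every post they made.
    return {uid: {"userid": uid, "name": f"user{uid}"} for uid in userids}

posts = fetch_posts(1)
users = fetch_users({p["userid"] for p in posts})  # the set() deduplicates

# 100 posts, but only 2 user records retrieved; each post is rendered by
# looking its author up in the small `users` dict.
assert len(posts) == 100 and len(users) == 2
```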

All this results in a much faster and much less database-intensive page load - for the average guest visiting a popular thread, we are only talking about 4 or 5 queries at most.

For the browse pages, I added much more aggressive caching of the results you expect back from the database, but there has not been nearly as much work done here as on the showthread pages - it's still an area for improvement.

Overall, using memcached, and optimising how information is used and what gets pulled from the database, is key to making the site faster. But how much faster? Well, as an example, between December 2nd and December 3rd pageviews jumped from 555,998 to 1,140,027. Since then the minimum has been 1.4 million, with a maximum of 1.8 million. That kind of jump is pretty impressive, but it can be better.

At the end of the day though, a faster site is a better site, and I know there is more work to be done in this area. It is, however, an interesting one, and one that I don't expect many (if any) people reading this to understand. If you do, well, then at least I have an audience, albeit small.
Comments 4
Test Journal Entry

Date Posted: 2nd Oct 2009 at 11:17 PM

This is a test entry.
Comments 2
Changes 20th September 2009 Part 2

Date Posted: 20th Sep 2009 at 1:56 PM

- Member profile pages now have the "Show All" and the individual sections changed so they return all downloads across both games, not just using the default
- The download browser has been modified to allow it to show all downloads for both games when applicable
- The sidebar on the download browser now has the option to search "All" games (ie TS2 and TS3)
Comments 0