Content

ChangeLog ( page 8 )

Changes I make to the website.

October 2006

written by owen, Sun, 22nd Oct 2006 at 10:05 pm

profile photos: For those of you who like to take pictures of yourselves and flowers, you can now upload more photos to your profile, up to a total of 20. I'll get comments working on them at some point.

comment replies: I just spent the last 3 hours putting in comment replies. Now you can just click the "Reply" link by a comment to say what you think about it. You can't reply to replies though. That would be crazy.

post a comment or feature request 10

taylor commented: dood your really going at this thing. love the motivation.. prettysoon hi5 is dead owensoft takes ovar. ... read 9 more

Apr - Sept 2006

written by owen, Sun, 24th Sep 2006 at 11:08 pm

I am gonna try my best to impart some of the knowledge I have learnt while using and experimenting with SQL over the years. SQL is a way to store and extract data from an RDBMS. SQL is rigid because it has a specific purpose. SQL exists to store and output rows of information as quickly as possible - that's it. It does not care what you are doing with the information or how you are using it. All it cares about are keys, indexes, rows and columns. Most relational database management systems (RDBMS) actually care more about rows than they do about columns, but that is a whole other story.

I teach by example whenever I can. So I am gonna drop some Oracle-flavoured SQL on a particular problem, but it should be pretty easy to follow regardless of the DBMS that you use. I am gonna change the query little by little in the hopes of making it faster as I go along. Any SQL query you write should be written with the assumption that one day you will have a billion records and it will take 2 days to run. A query that is fast now will not be fast forever. So eliminating slowdowns early is the best way to ensure that your queries are as efficient as they can be.

Let's begin

For the purpose of this article I am going to aggregate a table of payments. The example table "PAYMTBL" has 1 million records in it with 20 columns. The table stores payment records over a period of several years. I needed a bar chart for a presentation, so I wrote a query to SUM() the PAYMTBL_AMOUNT, convert the PAYMTBL_TRAN_DATE to a MON-YYYY string, and COUNT(1) to count the number of transactions in the month of Jan-2015;


/* query v1 A */
SELECT 
SUM(PAYMTBL_AMOUNT),
TO_CHAR(PAYMTBL_TRAN_DATE,'MON-YY'), COUNT(1)
FROM PAYMTBL 
WHERE 1=1
AND PAYMTBL_TRAN_DATE BETWEEN '01-JAN-2015' AND '31-JAN-2015' 
GROUP BY TO_CHAR(PAYMTBL_TRAN_DATE,'MON-YY')
ORDER BY 2

Now you could stop here if you wish, but a skilled SQL writer wants to write the fastest, most accurate SQL possible. Ideally the fastest query is no query at all. But since we need the information, we have to write the query to get it. Technically you could run the query overnight and cache the results to display at a later point, but caching is for babies and is a sure sign of weakness.

So we push forward. The thing to note about SQL is that anything you do besides SELECT 1 FROM TABLE will slow your SQL down. Any function you call, any grouping or ordering slows the query down. Physics: bus speed, RAM speed, disk IO, stuff like that. You want the SQL to do as little work as possible. The GROUP BY, TO_CHAR, SUM and COUNT(1) are all giving the SQL engine work to do. You could in theory just do a;


/* query v1 B */
SELECT * FROM PAYMTBL 
WHERE 1=1
AND PAYMTBL_TRAN_DATE BETWEEN '01-JAN-2015' AND '31-JAN-2015' 

But the above code would return 100 thousand rows (with 20 columns each) that I would have to process in my application code. Do not forget that the data also takes up RAM on the computer; 100k rows could easily be 30 MB of RAM depending on the size of each record. Yes, I would get the rows quickly, but I would never be able to process and group them faster than the RDBMS/SQL engine. WHY couldn't I do it faster? The answer is simple. The SQL database is a product of years of research and testing.
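The "let the engine do the grouping" point can be sketched in miniature. This is a hypothetical example using SQLite through Python's sqlite3 module, not the author's Oracle setup; strftime stands in for TO_CHAR, and the tiny PAYMTBL here is an assumption for illustration:

```python
import sqlite3
from collections import defaultdict

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE PAYMTBL (PAYMTBL_AMOUNT REAL, PAYMTBL_TRAN_DATE TEXT)")
rows = [(100.0, "2015-01-05"), (50.0, "2015-01-17"), (25.0, "2015-01-31")]
conn.executemany("INSERT INTO PAYMTBL VALUES (?, ?)", rows)

# Option 1: let the SQL engine group and sum; one small aggregated row comes back
sql_result = conn.execute(
    "SELECT SUM(PAYMTBL_AMOUNT), strftime('%m-%Y', PAYMTBL_TRAN_DATE), COUNT(1) "
    "FROM PAYMTBL WHERE PAYMTBL_TRAN_DATE BETWEEN '2015-01-01' AND '2015-01-31' "
    "GROUP BY strftime('%m-%Y', PAYMTBL_TRAN_DATE)"
).fetchall()

# Option 2: pull every row back and group in application code;
# every row crosses the wire and sits in the program's RAM
groups = defaultdict(lambda: [0.0, 0])
for amount, date in conn.execute(
        "SELECT PAYMTBL_AMOUNT, PAYMTBL_TRAN_DATE FROM PAYMTBL "
        "WHERE PAYMTBL_TRAN_DATE BETWEEN '2015-01-01' AND '2015-01-31'"):
    key = date[5:7] + "-" + date[0:4]  # crude MM-YYYY label, mimicking TO_CHAR
    groups[key][0] += amount
    groups[key][1] += 1

print(sql_result)    # one aggregated row per month
print(dict(groups))  # same totals, computed by hand
```

Both options produce the same totals; the difference is where the work happens and how much data has to travel to the application.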

Moving on: seeing that I absolutely need the GROUP BY, TO_CHAR, SUM and COUNT(1) features of the SQL database, what else could I optimize? I decided to take the PAYMTBL table out of the main query. WHY? Because any functions you have in the main query will cause the SQL engine to do work on all the rows in the tables that are on the same level. The fewer rows it has to check, the faster the query (indexes and other things help too, but fewer rows is what you really need).

The below query v2 runs faster;


/* query v2 */
SELECT 
        SUM(PAYMTBL_AMOUNT),
        TO_CHAR(PAYMTBL_TRAN_DATE,'MON-YY'), COUNT(1)
FROM (
        select * from PAYMTBL WHERE 1=1
        AND PAYMTBL_TRAN_DATE BETWEEN '01-JAN-2015' AND '31-JAN-2015' 
)
GROUP BY TO_CHAR(PAYMTBL_TRAN_DATE,'MON-YY')

Why does this query run faster? It is simple really: this query is doing less work than the previous one. It all comes back to how many rows the SQL engine has to "touch" with the GROUP BY, TO_CHAR, SUM and COUNT(1). Let's say that only 100k out of the 1 million rows match the condition PAYMTBL_TRAN_DATE BETWEEN '01-JAN-2015' AND '31-JAN-2015'. By picking up the 100k rows first we save the SQL engine from calling the GROUP BY and TO_CHAR() on 1 million rows. This is why the second query is faster. It is faster because it is more specific. The more specific you make your SQL, the faster it will be.

We could stop there, but no, we must go on, because today we have 1 million rows and next year we will have 2 million. Unless you do not care about efficiency?


/* query v3 */
SELECT 
        SUM(PAYMTBL_AMOUNT),
        TO_CHAR(PAYMTBL_TRAN_DATE,'MON-YY'), COUNT(1)
FROM (
        select PAYMTBL_AMOUNT, PAYMTBL_TRAN_DATE from PAYMTBL 
        WHERE 1=1
        AND PAYMTBL_TRAN_DATE BETWEEN '01-JAN-2015' AND '31-JAN-2015' 
)
GROUP BY TO_CHAR(PAYMTBL_TRAN_DATE,'MON-YY')

In query v3 we replace the SELECT * with only the columns we need. The SQL engine is also affected by the number of columns returned. Since our PAYMTBL table has 20 columns, we do not need to bring back all 20 of them: 20 columns times 100k rows is a waste of time and memory. Remember physics, time and space; software is affected by both. So specifying that we need only PAYMTBL_AMOUNT and PAYMTBL_TRAN_DATE actually makes the query run faster. It is not magic if you think about it.
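The column-count effect is easy to see in miniature. A hypothetical sketch using SQLite via Python's sqlite3 (the 20-column PAYMTBL layout and the filler columns are assumptions for illustration): the wide query hands the application 20 fields per row, the narrow one only 2, even though the row count is identical:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# A wide table like PAYMTBL: 20 columns, of which the report needs only two
filler = ", ".join("c%d TEXT" % i for i in range(18))
conn.execute("CREATE TABLE PAYMTBL (PAYMTBL_AMOUNT REAL, PAYMTBL_TRAN_DATE TEXT, %s)" % filler)
row = (10.0, "2015-01-05") + tuple("x" * 50 for _ in range(18))  # 18 chunky filler values
conn.executemany("INSERT INTO PAYMTBL VALUES (%s)" % ",".join("?" * 20), [row] * 1000)

wide = conn.execute("SELECT * FROM PAYMTBL").fetchall()
narrow = conn.execute("SELECT PAYMTBL_AMOUNT, PAYMTBL_TRAN_DATE FROM PAYMTBL").fetchall()

# Same number of rows, but the wide result drags 10x the fields around in RAM
print(len(wide), len(wide[0]))      # 1000 rows, 20 fields each
print(len(narrow), len(narrow[0]))  # 1000 rows, 2 fields each
```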


/* query v4 */
SELECT 
        SUM(PAYMTBL_AMOUNT),
        TO_CHAR(PAYMTBL_TRAN_DATE,'MON-YYYY'), COUNT(1)
FROM (
        select PAYMTBL_AMOUNT, PAYMTBL_TRAN_DATE from PAYMTBL 
        WHERE 1=1 AND TRUNC(PAYMTBL_TRAN_DATE) BETWEEN '01-JAN-2015' AND '31-JAN-2015' 

) GROUP BY TO_CHAR(PAYMTBL_TRAN_DATE,'MON-YYYY')

In query v4 I decided to try adding TRUNC(PAYMTBL_TRAN_DATE) to the inner query. The query slows down. WHY? Look up "full table scan". It is often best not to modify the columns in the WHERE clause, because changing/converting them makes it harder for the SQL engine to be sure of what the possible values are supposed to be. This goes back to proper DB design for reporting.

Because of the specificity of SQL, the engine is able to do certain tricks that are harder to accomplish when you mess around with the data that it is using. It has something to do with determinism and predictability. If you are looking for a date which is 01-Jan-2014, the SQL/DB engine can automagically assume that the set of records before 2014 is unnecessary, BUT if you convert a date column to a string, the SQL/DB engine will have to use string logic, which doesn't have the same level of rigidity. Hence you get a slower query and possibly full table scans.
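You can watch this happen with an EXPLAIN-style tool. A hypothetical sketch in SQLite via Python's sqlite3 (the index name and substr stand-in for TRUNC are assumptions; Oracle's plans look different but show the same pattern): with a bare column in the WHERE clause the planner can use the index, but wrap the column in a function and it falls back to scanning every row:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE PAYMTBL (PAYMTBL_AMOUNT REAL, PAYMTBL_TRAN_DATE TEXT)")
conn.execute("CREATE INDEX idx_tran_date ON PAYMTBL (PAYMTBL_TRAN_DATE)")

# Bare column in the WHERE clause: the planner can range-search the index
plan_bare = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM PAYMTBL "
    "WHERE PAYMTBL_TRAN_DATE BETWEEN '2015-01-01' AND '2015-01-31'"
).fetchall()

# Column wrapped in a function: the planner can no longer trust the index order,
# so it scans the whole table, converting every value just like TRUNC() would
plan_wrapped = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM PAYMTBL "
    "WHERE substr(PAYMTBL_TRAN_DATE, 1, 10) BETWEEN '2015-01-01' AND '2015-01-31'"
).fetchall()

print(plan_bare[0][3])     # plan text mentions the index
print(plan_wrapped[0][3])  # plan text is a plain table scan
```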

So I scrapped that idea and moved on;


/* query v5 */
SELECT 
        SUM(PAYMTBL_AMOUNT),
        MD, 
        COUNT(1)
FROM (
        select PAYMTBL_AMOUNT, PAYMTBL_TRAN_DATE, 
        TO_CHAR(PAYMTBL_TRAN_DATE,'MON-YYYY') as "MD" from PAYMTBL WHERE 1=1
        AND PAYMTBL_TRAN_DATE BETWEEN '01-JAN-2015' AND '31-JAN-2015' 
)
GROUP BY MD

In query v5 I realised that I can move the TO_CHAR function into the subquery. This is an example of a pre-emptive optimization that you can do before you run your queries. I could even have added a column with a MON-YYYY value so that I would not have to compute it when I run the query. After testing I realised that I ended up with a faster query. That's another thing with SQL: YOU NEED TO SPEND TIME TESTING on your target RDBMS. You may get different speeds from MySQL than you get from Oracle or MSSQL or Postgres, because all these databases optimize in different ways and try to cheat whenever they can. But one thing is sure: you cannot cheat physics.

Anyway, why is query v5 faster? The reason is that the GROUP BY has to do work when it is calling TO_CHAR(); by moving it to the inner query we prepare the data beforehand, so all the GROUP BY has to do is group, sum and count, and nothing else.
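Taking the "add a column with the MON-YYYY value" idea to its conclusion: you can derive the label once at write time and store it, so the report groups on a plain column with no per-row formatting at all. A hypothetical sketch in SQLite via Python's sqlite3 (the PAYMTBL_MD column name and add_payment helper are my own inventions for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# PAYMTBL_MD holds the pre-computed month label, filled in once at insert time
conn.execute(
    "CREATE TABLE PAYMTBL (PAYMTBL_AMOUNT REAL, PAYMTBL_TRAN_DATE TEXT, PAYMTBL_MD TEXT)")

def add_payment(conn, amount, tran_date):
    md = tran_date[5:7] + "-" + tran_date[:4]  # derive "MM-YYYY" once, at write time
    conn.execute("INSERT INTO PAYMTBL VALUES (?, ?, ?)", (amount, tran_date, md))

for amount, d in [(100.0, "2015-01-05"), (50.0, "2015-01-17"), (25.0, "2015-02-02")]:
    add_payment(conn, amount, d)

# The report now groups on a plain stored column: no date formatting at query time
report = conn.execute(
    "SELECT SUM(PAYMTBL_AMOUNT), PAYMTBL_MD, COUNT(1) FROM PAYMTBL GROUP BY PAYMTBL_MD"
).fetchall()
print(report)
```

The trade-off is a little extra storage and write-time work in exchange for cheaper reads, which is usually the right direction for reporting tables.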

There is also something else to note: TO_CHAR() in a basic SELECT query is faster than a TO_CHAR() that is in a SELECT-GROUP BY. Also, TO_CHAR() in a WHERE clause is even faster than both. So the part of the query in which you are calling your function is also important. The WHERE clause is often evaluated before the database starts looking for data, while the SELECT and the GROUP BY are usually operating on data while it is being gathered. Understanding what is happening in the SQL engine is critical to writing big complicated queries. Some RDBMSs have an EXPLAIN feature that can tell you what the query is doing.


/* query v6 */
SELECT 
        SUM(PAYMTBL_AMOUNT),
        MD, COUNT(1)
FROM (
        select PAYMTBL_AMOUNT, PAYMTBL_TRAN_DATE, 
        TO_CHAR(PAYMTBL_TRAN_DATE,'MON-YYYY') as "MD" from PAYMTBL 
        WHERE 1=1 AND PAYMTBL_TRAN_DATE BETWEEN TO_DATE('01-JAN-2015') AND TO_DATE('31-JAN-2015')
)
GROUP BY MD

This resulted in a query 3x slower. I am not sure why, but this might be an example of me doing something that the database I am using doesn't particularly like. It is even worse if you try to TO_DATE(PAYMTBL_TRAN_DATE), like 47 times worse: a full table scan converting all the values in the column. You will learn to avoid certain things the more you experiment. It is often not a bug in the RDBMS but a feature. It's hard to explain; the RDBMS does some things well by sacrificing other things.


/* query v7 */
SELECT 
        SUM(PAYMTBL_AMOUNT),
        MD, COUNT(1)
FROM (
        select PAYMTBL_AMOUNT, PAYMTBL_TRAN_DATE, 
        TO_CHAR(PAYMTBL_TRAN_DATE,'MON-YYYY') as "MD" from PAYMTBL WHERE 1=1
        AND PAYMTBL_TRAN_DATE > '01-JAN-2015' AND PAYMTBL_TRAN_DATE < '31-JAN-2015' 
)
GROUP BY MD

This resulted in a query 2x slower. The less-than/greater-than signs are slower because they create a problem of mutual exclusivity. The database has to do more work compared to doing a BETWEEN. A BETWEEN gives the SQL engine a specific hint about what you want while at the same time limiting the range it needs to check, hence resulting in a faster query. Using these specific clauses makes it easier for the SQL engine to know what you want.
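Whatever the speed difference on your engine, note that the two forms are not even asking for the same rows: BETWEEN is inclusive at both ends, while strict > and < drop the endpoint days entirely. A hypothetical sketch in SQLite via Python's sqlite3 (ISO date strings standing in for Oracle dates):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE PAYMTBL (PAYMTBL_AMOUNT REAL, PAYMTBL_TRAN_DATE TEXT)")
conn.executemany("INSERT INTO PAYMTBL VALUES (?, ?)", [
    (10.0, "2015-01-01"),  # first day of the month
    (20.0, "2015-01-15"),
    (30.0, "2015-01-31"),  # last day of the month
])

# BETWEEN keeps both endpoint days
between = conn.execute(
    "SELECT COUNT(1) FROM PAYMTBL "
    "WHERE PAYMTBL_TRAN_DATE BETWEEN '2015-01-01' AND '2015-01-31'"
).fetchone()[0]

# Strict > and < silently drop the first and last day
strict = conn.execute(
    "SELECT COUNT(1) FROM PAYMTBL "
    "WHERE PAYMTBL_TRAN_DATE > '2015-01-01' AND PAYMTBL_TRAN_DATE < '2015-01-31'"
).fetchone()[0]

print(between, strict)
```

So query v7 is not just slower here; it also quietly changes the answer, which matters for a monthly report.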

Conclusion

A SQL query is basically a small program that you write to tell the database what you want. The more specific you make it, the more efficient it will be. SQL is not Object Oriented; in fact it can be confusing for OOP programmers. The database wants to keep its structure constant so that it can be fast. OOP, on the other hand, is more concerned with modeling, flexibility and abstraction. Speed is not a concern when you are shifting classes around in a pool. SQL is not returning objects - it is returning data arrays or sets - NOT a graph. You can imagine a database as a skyscraper with floors, while objects are more like a tree with leaves - two different things, with different trade-offs.

SQL is not particularly hard to learn. It has a very limited syntax, but it is very useful because what it does is so domain specific. You will learn about SQL by using it a lot and gaining experience in effective ways of overcoming various challenges. Some databases offer different features and have advantages over others. Column databases, for example, are faster than row databases at processing columns of data because they are optimized for it, and it's near impossible to make a row database faster than a column database (I have tried, physics will not have it).

P.S. Things to note (caveats)

  • Your database is not a cache for your object model. Do not let application logic slip into your database design.

  • The more specific you make your SQL, the faster it will be.
  • Some databases, depending on how old they are, won't allow certain optimizations like sub-queries.
  • Avoid doing things that the RDBMS does not like. Often you will only find this out in testing.
  • Pre-emptive data optimization is a good thing. If you need to report a piece of data in a certain format it is often better to create a dedicated column for that format instead of generating it at run time. For example, TO_CHAR(PAYMTBL_TRAN_DATE,'MON-YYYY') could be a static column.
  • The people who code the SQL database are better programmers than you and are focused on doing a specific thing. Plus they know more about the data than you do; they specialize in storing and retrieving it. You would have to load and parse all the rows yourself. If you are willing to do that you might as well store it yourself and make your own database (which is sometimes better, as in the case of files, pictures and PDFs). Do not put big files in the database; it will be a pain for everybody to manage. The benefits are most times short term.
  • When using SQL you should always know how big your table is, how big it is gonna get and how many columns you have. This is key because SQL operates in a confined space; the more data you have, the slower your query will run - it's PHYSICS. Physics rules us all and we must respect it. So do not ignore the fact that you might run out of storage or that your SQL will get slow over time. If you are adding 10 records a day, and each record is 10 KB, you should know how big your database is gonna be in 2 years and have a plan to deal with it.
  • The database exists to store and extract data. Hence whenever you design a database table you should have in the back of your mind how efficiently the data will be extracted and the nature of the reports that will be generated from it. You cannot design a database in isolation without thinking about what reports will be needed from the data design. Over-normalization is a waste of time if all your reports are slow and take 2 days to run.
  • The example query is limited to a month so that I could quickly test the effect that small SQL changes have on the data that I am testing. Your own testing of large vs small data sets might yield different results, but in most cases a bad query on a few rows will be exponentially worse on 1 million rows - NEVER BETTER.
  • The WHERE condition PAYMTBL_TRAN_DATE BETWEEN '01-JAN-2015' AND '31-JAN-2015' actually means '01-JAN-2015 12:00 AM' to '31-JAN-2015 12:00 AM'. This means that I won't get any data for the day of 31-JAN-2015. A more accurate range would be PAYMTBL_TRAN_DATE BETWEEN '01-JAN-2015' AND TO_DATE('31-JAN-2015' || ' 23:59:59','DD-MON-YYYY HH24:MI:SS'). The fastest range would be BETWEEN '01-JAN-2015' AND '01-FEB-2015', but this method has user interface complications which you will have to figure out on your own.
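That last caveat, the midnight boundary, is the one that silently loses data. A hypothetical sketch in SQLite via Python's sqlite3, where ISO datetime strings compare in a way that mimics the Oracle midnight behaviour; the half-open range shown is a variant of the "BETWEEN '01-JAN-2015' AND '01-FEB-2015'" trick:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE PAYMTBL (PAYMTBL_AMOUNT REAL, PAYMTBL_TRAN_DATE TEXT)")
conn.executemany("INSERT INTO PAYMTBL VALUES (?, ?)", [
    (10.0, "2015-01-15 09:30:00"),
    (20.0, "2015-01-31 14:45:00"),  # afternoon of the last day of the month
])

# Naive range: the upper bound '2015-01-31' means midnight,
# so the afternoon payment on the 31st is silently excluded
naive = conn.execute(
    "SELECT COUNT(1) FROM PAYMTBL "
    "WHERE PAYMTBL_TRAN_DATE BETWEEN '2015-01-01' AND '2015-01-31'"
).fetchone()[0]

# Half-open range up to the first day of the next month catches the whole month
half_open = conn.execute(
    "SELECT COUNT(1) FROM PAYMTBL "
    "WHERE PAYMTBL_TRAN_DATE >= '2015-01-01' AND PAYMTBL_TRAN_DATE < '2015-02-01'"
).fetchone()[0]

print(naive, half_open)  # the naive range misses the 31st-afternoon row
```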

post a comment or feature request 2

Gods Child commented: the pink is okay. I don't find it annoying. ... read 1 more

New Comment Features

written by owen, Sun, 02nd Apr 2006 at 3:07 pm

I made some minor changes to the comment system;

Quick Edits: After you make a comment you can edit it by clicking the "Edit" link. Works even when you are not logged in.

Redux: Your Name / Website / Email address is saved when you first make a comment or answer a question. So if you are in the habit of answering multiple questions you won't have to type that in again. Works especially for those who are not logged in.

Color Coding: If you are logged in and you make a comment it will show up in a different color. This however is totally superficial - not deep or penetrating emotionally or intellectually.

Pictures: You can now have the option to add pictures to your comments. Click the options text.

History: If you have made a couple comments you will notice a link in the "message added successfully" box. Click the history link and you will see all your comments and the comments people post after yours. Makes rebuttals easier to track. Keep the cussing out to a minimum.

post a comment or feature request 13

stunner commented: Test! ... read 12 more

Sitemap Added

written by owen, Wed, 04th May 2005 at 12:35 pm

Instead of studying for my finals I got distracted and started writing a sitemap - "everything and where to find it". But of course it is not really a sitemap per se, because nothing is ever really what it seems. Two years ago if somebody had suggested putting all internal website links into the database, I would have said that person is insane. You live, you learn and then you go crazy OOP. No classes though - in case you're wondering - a table is an object, a page inherits from a website.

But alas I went ahead and did it; all the SymLinks (47) are now stored and validated against a table that I've created called "sitemap". I also use this table to generate the aforementioned sitemap page. I use wildcards and strict types such as integer to validate each request (sql cached). So if the page you are trying to access is not in the table, then you get a 404. Probably will use Regular Expressions in the future if need be.

I added some extra RSS feeds too. I changed how the archive page looked a couple moons back as well. If you're interested there is also some source code. I'm thinking of doing that with every page, but I'm uncertain as to what google will do with it. Some pages, for example Logbook, are ROBOTS=NOINDEX; softporn is for members only. I did a quick update; if you see anything weird email me at owensoft at yahoo dot com.

post a comment or feature request 2

alex commented: explain simlink? ... read 1 more

Glorybox

written by owen, Sun, 11th Jul 2004 at 9:23 pm

owen there is nothing of interest on the site.
you mean to tell me its over three years now and you still have not found any sensible material to post. come on man. by ray (today)


As I read your comment I said to myself "but wait, what is sensible material?". So I prayed, then went to google because google knows everything. I did a search for "sensible material" and realised to my amazement that there is no possible way of getting that stuff to fit on a website.
But being as I am, I looked up the definition for sensible just to be sure I wasn't being an . I just took the definition which seemed to be most to the point;

acting with or showing thought and good sense;


Hopefully that's what you meant, but I don't care, because nobody I know in jamaica really uses or understands the internet - this is not a newspaper. Well, I have really. The website is a double sided cheese cake filled with cherries, 3 different kinds of cake mix and those multi-colored candy type things that you sprinkle into the dough in order to make the cake look funny when you cut it. It's complicated, simple, useless and informative all at the same time. It tries to be everything, to everybody. Sometimes I'll make a funny post (I think) but as time moves on they get archived. I am not much of a writer. I change, the website changes with me.
It's never static; it never hopes of one day being rich and famous. It hates banner ads and popup windows. It's best enjoyed with a cup of tea by the fireplace or microwave. There is nothing really productive on this website, nor does it pretend to have a level of maturity or to be the answer to life, the universe, and everything. But at least when you visit the page it loads fast :).

Updates

Mad Bull submitted some pictures of Lime Cay; he has a weblog and so does Yamfoot. I went to the Jamaica Consumer Electronics Expo 2004 last Saturday and it sucked, sucked big time. Where are the Cuban dancers from last year?

post a comment or feature request 8

yamfoot commented: hush, do what you want..is yours. And it says "weekly writings" so we expect one post per week! just keep that up and you'll be fine. ... read 7 more

Also available as RSS