Today I had a SQL query joining a bunch of tables that looked to be hanging. I ran it and went to lunch, and when I got back it still wasn’t done. So I killed all my connection threads, restarted the server, and tried every way I could find to kill any errant table locks that were hanging around, but nothing would make it complete. Finally, after going through the syntax again, I noticed that I had left off the condition joining one of the tables, so it was computing a cartesian product of 500,000 rows × 100,000 rows, and it just hadn’t finished in the hour I left it to work.
I was more likely to have this problem because of the joining syntax I was taught:
from table1, table2
where table1.id = table2.id
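To make the failure mode concrete, here’s a small sketch using Python’s sqlite3 module (the table names and row counts are made up for illustration). With this syntax, forgetting one where condition doesn’t produce an error; it silently turns the join into a cartesian product:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE table1 (id INTEGER)")
conn.execute("CREATE TABLE table2 (id INTEGER)")
conn.executemany("INSERT INTO table1 VALUES (?)", [(i,) for i in range(100)])
conn.executemany("INSERT INTO table2 VALUES (?)", [(i,) for i in range(100)])

# With the join condition: one row per matching id pair.
good = conn.execute(
    "SELECT COUNT(*) FROM table1, table2 WHERE table1.id = table2.id"
).fetchone()[0]

# Same query with the condition forgotten: every row pairs with every row.
bad = conn.execute("SELECT COUNT(*) FROM table1, table2").fetchone()[0]

print(good, bad)  # 100 10000
```

At 100 rows per table the bad query still finishes instantly; at 500,000 × 100,000 rows, you get the hour-plus hang described above.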
As you add more tables, the where clause gets longer, and combined with the rest of the query’s constraints it becomes easy to lose track of which ones are needed to do the joins properly. My boss never has this problem because he uses a different syntax:
from table1
inner join table2 on table1.id = table2.id
This way, you can easily see what constraints are joining the tables together and would know if there was something missing. Also, this syntax is more like what you would use for left and right joins, so your queries will be more consistent. So from now on I’m going to use this syntax to try and avoid wasting time figuring out a problem that’s so easily preventable.
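Here’s the same kind of query rewritten with explicit join syntax, again sketched with sqlite3 and made-up table names. The on clause sits right next to the table it brings in, and switching to a left join only changes one keyword:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE table1 (id INTEGER)")
conn.execute("CREATE TABLE table2 (id INTEGER)")
conn.executemany("INSERT INTO table1 VALUES (?)", [(i,) for i in range(100)])
conn.executemany("INSERT INTO table2 VALUES (?)", [(i,) for i in range(50)])

# Inner join: only the 50 ids present in both tables.
inner = conn.execute(
    "SELECT COUNT(*) FROM table1 INNER JOIN table2 ON table1.id = table2.id"
).fetchone()[0]

# Left join: all 100 rows of table1, with NULLs where table2 has no match.
left = conn.execute(
    "SELECT COUNT(*) FROM table1 LEFT JOIN table2 ON table1.id = table2.id"
).fetchone()[0]

print(inner, left)  # 50 100
```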
I just spent some time figuring out how to handle exceptions using the pysqlite library. I couldn’t find a good description of how it works online, so I’m making one.
In python, all exceptions are classes. This means they can use any of the capabilities that regular python classes have, but generally all they do is define a few instance values in the __init__ method. These values are available in the exception object (the instance of the exception class) when it is caught in a “try … except” block.
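A minimal illustration of that pattern (QueryError is a hypothetical class made up for this example, not from any library): values stored in __init__ are available on the exception object you catch.

```python
# Hypothetical user-defined exception carrying extra instance values.
class QueryError(Exception):
    def __init__(self, table, message):
        Exception.__init__(self, message)
        self.table = table
        self.message = message

try:
    raise QueryError("users", "table users already exists")
except QueryError as err:
    # The instance values set in __init__ are available on the caught object.
    caught = (err.table, err.message)

print(caught)  # ('users', 'table users already exists')
```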
If you’re receiving a user defined exception from a foreign module and don’t know how to handle it, I’d recommend first wrapping the offending code in a “try…except” like this:
try:
    # ... the code that raises the mystery exception ...
except:
    logfile.write("ERROR: %s %s %s" % sys.exc_info())
The function sys.exc_info() returns a tuple containing 1) the type of exception being handled, 2) the associated parameter (usually some helpful descriptor), and 3) a traceback to where the exception occurred.
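You can see all three elements with any exception; here’s a quick sketch using a deliberate division by zero:

```python
import sys

try:
    1 / 0
except ZeroDivisionError:
    # Inside the except block, sys.exc_info() describes the active exception.
    exc_type, exc_value, exc_tb = sys.exc_info()

print(exc_type)   # <class 'ZeroDivisionError'>
print(exc_value)  # division by zero
print(exc_tb)     # <traceback object at 0x...>
```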
This hopefully tells you two things: a description of why this happened, and more importantly for my situation, the exception type. When I ran that I got something like this:
ERROR: <class 'pysqlite2.dbapi2.OperationalError'> table something already exists <traceback object at 0xb7da3cd4>
So the exception being raised was pysqlite2.dbapi2.OperationalError. Since the Python DB-API reports SQL errors through exceptions rather than return values, I can’t just fix the code here; I have to “try … except” it. Here’s what the final code snippet looked like:
try:
    # ... the code that raised the error ...
except pysqlite2.dbapi2.OperationalError, msg:
    logfile.write("ERROR: %s" % msg)
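pysqlite2 lives on in the standard library as sqlite3, so if you want to reproduce this today, a sketch like the following (with a made-up table name) shows the same OperationalError being raised and caught:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE something (id INTEGER)")

try:
    # Creating the same table twice raises OperationalError.
    conn.execute("CREATE TABLE something (id INTEGER)")
except sqlite3.OperationalError as msg:
    caught = str(msg)

print(caught)  # table something already exists
```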
To catch the other exceptions specified in the dbapi v2.0, you’d have to include several more exception handlers. The other exception classes are in the same location as the OperationalError class.
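For example (again sketched with sqlite3, with made-up table names), a duplicate primary key raises IntegrityError while a missing table raises OperationalError, and each gets its own handler:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY)")
conn.execute("INSERT INTO t VALUES (1)")

errors = []
for sql in ("INSERT INTO t VALUES (1)",      # duplicate primary key
            "SELECT * FROM no_such_table"):  # missing table
    try:
        conn.execute(sql)
    except sqlite3.IntegrityError as msg:
        errors.append("IntegrityError: %s" % msg)
    except sqlite3.OperationalError as msg:
        errors.append("OperationalError: %s" % msg)

print(errors)
```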