
Classes for working with MySQL, Oracle, MSSQL and PgSQL

Author: Misha v.3 [January 28, 2013]
Version: 2.8
Tags: SQL

It is recommended to read the article about compatibility of SQL queries.

It so happened that I was quite dissatisfied with the functionality of the standard ^table::sql{}, ^hash::sql{} & Co. For example, I wanted to occasionally get information on how long all the queries on a page took while the document was being formed, log “slow” queries, and cache the results of complex queries.

Since I deemed it incorrect to make changes directly in Parser’s code, I decided to write SQL classes that would provide the functionality I needed.

To use the appropriate class, I put the following code into the @auto[] method (of course, I must not forget to include the class by means of @USE or ^use[]):

$pSQL[^mysql::init[$SQL.connect-string;
	$.cache_dir[/../data/sql_cache]
]]
Then, in $pSQL, I have an instance of the mysql class, whose methods may be called from anywhere in the Parser code:

@main[]
<html>
^pSQL.server{
^rem{ *** retrieve a part of data into Parser table $tPage *** }
<head>
	<title>$tPage.title</title>
</head>
	...
	^rem{ *** all methods which can perform database queries, since we’re already connected *** }
	^printPageMenu[]

	^rem{ *** add an entry to table info *** }
	^pSQL.void{INSERT INTO info (name) VALUES ('$form:name')}

	^rem{ *** retrieve last_insert_id() and write it into the field sort_order of the last inserted entry *** }
	$tmp[^pSQL.set_last_insert_id[info;sort_order]]
	...
}
</html>

Note: I always use a single ^connect[] per page, with ^pSQL.server{} opening at the beginning of the page and closing at the end. The only exception is when I need SQL transactions (by the way, with the new classes you no longer have to place SQL queries inside server{} at all).
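For example, a one-off query can now be issued with no surrounding server{} block (a minimal sketch: the news table and its columns are made up for illustration; the $.limit option is shown later in this article):

^rem{ *** no ^pSQL.server{} around this call *** }
$tNews[^pSQL.table{SELECT id, title FROM news ORDER BY id DESC}[$.limit(10)]]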

To receive debug information, the $pSQL object should be initialized in a different way:

$pSQL[^mysql::init[$SQL.connect-string;
	$.is_debug(1)
	$.cache_dir[/../data/sql_cache]
]]

and the statistics can be received by means of a method placed in @postprocess[]:

@postprocess[sBody]
$result[$sBody]
^if(def $pSQL){
	^if(def $form:mode && ^form:tables.mode.locate[field;debug]){
		^rem{ *** if addressing ?mode=debug we receive and save information on all sql queries on page *** }
		^pSQL.printStatistics[$.file[/../data/sql.txt]]
	}{
		^rem{ *** by default, we store to log only the information related to slow queries and pages with multiple queries *** }
		^pSQL.printStatistics[
			$.file[/../data/sql.log]
			$.debug_time_limit(500)
			$.debug_queries_limit(25)
#			$.debug_result_limit(3000)
#			$.is_expand_exceeded_queries_to_log(1)
		]
	}
}

If the document is requested with ?mode=debug, information on all the SQL queries issued through the $pSQL object while forming the page will be stored to the file sql.txt. In normal mode, a record is written to sql.log whenever an SQL query takes more than 500 ms, or the number of queries used to generate the page exceeds the limit specified in debug_queries_limit.

This feature allows me to always initialize with $.is_debug(1) and issue all SQL queries through the $pSQL object. I also check sql.log from time to time to see which queries were poorly written and must be fixed.

It’s worth mentioning that, when debug information is output for mysql, every query is accompanied by its EXPLAIN, which is also recorded in the log file; this makes the log easier to read because the problem becomes clearer. Note, however, that only problematic queries and the statistics are accompanied by EXPLAINs; with $.is_debug(1) alone, all queries are merely timed and their memory usage measured, which keeps the overhead low.

In conclusion, let me say a word or two about caching query results.

Working with an SQL server is very convenient: the stored information is structured and can be retrieved in a specific way. Sometimes, however, it is difficult, if not impossible, to optimize a complex query without overcomplicating it. If such a query operates on data that is rarely modified, the simplest way out is to cache its results.

SQL servers do actually provide query-caching machinery, but ordinary users (for example, on shared hosting) don’t always have access to its settings. In this case, the SQL classes we are discussing come in handy and are easy to use:

$tResult[^pSQL.table{
	SELECT
		...
	FROM
		table_1,
		table_2,
		table_3
	WHERE
		...
	GROUP BY
		...
	HAVING
		...
	ORDER BY
		...
}[
	$.limit(10)
][
	$.file[hardcore_query.txt]
	$.cache_interval(1/24)
]]

In this case, the complex SQL query will be executed and its results stored in the file hardcore_query.txt. The next time the results are needed (e.g. when another visitor requests the page), the query will not be executed again; the results will be read from the saved file.

The caching parameter specifies that the results saved to the file expire after one hour (in Parser, date arithmetic is in days, so 1/24 is one hour). The file is then deleted, the next visit triggers a new query whose results are stored to the file, and a new caching cycle begins.

One more parameter is available: $.cache_expiration_time[time] clears the cache file on the first access after the specified time. It may be used together with cache_interval, in which case the cache file is deleted once per the specified interval as well as when the expiration time comes.
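For example, to refresh the cache once a day and also on the first request after a fixed moment (a sketch only: the 04:00 value and its format are my assumption; check the class source for the exact format cache_expiration_time expects):

$tResult[^pSQL.table{
	SELECT
		...
	FROM
		table_1
}[
	$.limit(10)
][
	$.file[hardcore_query.txt]
	$.cache_interval(1)
	$.cache_expiration_time[04:00]
]]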

To force deleting the cache file, the following method should be called:

^pSQL.clear[hardcore_query.txt]

This is useful in an administrative interface, where the cache file must be removed as soon as the data in the related tables is modified.
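For instance, an administrative update handler might combine the two calls (a sketch: the table and form fields are made up, while ^pSQL.void{} and ^pSQL.clear[] are the methods shown above):

^rem{ *** modify the data that feeds the cached query *** }
^pSQL.void{UPDATE table_1 SET name='$form:name' WHERE id=$form:id}
^rem{ *** the cached results are now stale, drop them *** }
^pSQL.clear[hardcore_query.txt]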

IMPORTANT!
The classes use some operators defined in lib.p (unfortunately, its documentation has not been translated yet, so just download the needed class and use it).

Download:

Sql.zip (13.01.2014, 16 KB): Classes for working with MySQL, Oracle, MSSQL and PgSQL