Rev cgi & mySQL (again)

Bob Sneidar bobs at twft.com
Fri Jan 2 19:03:42 EST 2009


Hi JB.

I have always toyed with the idea of writing a paging query system
for paging through a large SQL database. The idea would be that you
keep only one live cursor, but cache 2 or 3 "pages" of records ahead
and behind it. As the user scrolls past the current page, you
silently swap the pointer to the next cached page, flush the farthest
page behind, and query the next page ahead. This gives the user the
"appearance" of good performance when paging through the data, while
minimizing the workload.
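
Something like this sketch is what I have in mind. It is only a
sketch: it assumes a MySQL connection already opened with
revOpenDatabase, and all the handler, variable, and table names
(goToPage, sPageCache, myTable, and so on) are made up for
illustration.

  constant kPageSize = 100
  constant kPagesEachWay = 2  -- pages to keep cached ahead and behind

  local sPageCache  -- array: page number -> tab/return delimited rows
  local sDBID       -- connection id returned by revOpenDatabase

  on goToPage pPage
     -- make sure the current page and its neighbors are cached
     repeat with tPage = pPage - kPagesEachWay to pPage + kPagesEachWay
        if tPage > 0 and sPageCache[tPage] is empty then
           put fetchPage(tPage) into sPageCache[tPage]
        end if
     end repeat
     -- flush any page that has fallen out of the window
     repeat for each line tKey in the keys of sPageCache
        if abs(tKey - pPage) > kPagesEachWay then
           delete variable sPageCache[tKey]
        end if
     end repeat
     -- sPageCache[pPage] now holds the rows to display
  end goToPage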

What a user is likely to do in your particular application would
determine how many pages you want to cache and how many records per
page you want to manage. For a 40,000 record database, that is only
400 pages of 100 records each. That would be quite manageable; you
could even begin caching all 40,000 records right out of the gate.
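
Each page then maps straight onto a LIMIT/OFFSET query: with 100
records per page, page 5 starts at offset 400, i.e. (5 - 1) * 100.
Roughly (again, the table and column names are invented):

  function fetchPage pPage
     local tSQL
     put "SELECT * FROM myTable ORDER BY id" && \
           "LIMIT" && kPageSize && "OFFSET" && (pPage - 1) * kPageSize \
           into tSQL
     return revDataFromQuery(tab, return, sDBID, tSQL)
  end fetchPage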

The big downside to any SQL-based app is that all you really get out
of the box is, well, a box: a place to put stuff. You still need to
program your own data-handling logic, build your data structures,
build the user interface around them, write your own SQL statements
for getting data into and out of "the box", and so on. But some would
call that an upside, so who am I to say? :-)
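
Even the trivial "put stuff in, get stuff out" part means writing
glue along these lines (table and column names invented again; sDBID
is the connection id as above):

  on addContact pName, pPhone
     -- :1 and :2 are revdb placeholders, which keep quoting
     -- problems out of the SQL string
     revExecuteSQL sDBID, \
           "INSERT INTO contacts (name, phone) VALUES (:1, :2)", \
           "pName", "pPhone"
  end addContact

  function allContacts
     return revDataFromQuery(tab, return, sDBID, \
           "SELECT name, phone FROM contacts")
  end allContacts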

I have always thought that Revolution should have its own proprietary
data structure, with APIs for getting data into and out of the
database linked to Revolution objects directly. Perhaps that is what
EQL is all about; I don't know, I haven't looked into it. But for
multiuser apps, SQL is the only way to go, methinks.

Bob Sneidar
IT Manager
Logos Management
Calvary Chapel CM

On Jan 2, 2009, at 4:49 AM, jbv wrote:

> Hi there
>
> I have a MySQL table with about 40,000 entries. I also have a Rev CGI
> 2.5 script that sends (very) complex SELECT requests, in which the
> "WHERE" part can feature as many as 50 nested booleans with "and",
> "or", "binary", etc.
>
> For quite some time I have noticed that requests to MySQL slow down
> my scripts terribly, both because of the complexity of the requests
> and also because of (IMHO) a buffer size problem when the amount of
> data returned is huge...
>
> I've tried several tricks to improve the speed of my scripts (like
> "one big request" vs. "several small requests in a loop"), but with
> no luck...
>
> Finally, I tried the following: I dumped the content of myTable to a
> text file, opened it in the Rev script, and did the selection of
> records inside a "repeat for each line" loop. To my surprise, the
> speed of the script improved by almost 40% (which is a lot for a
> script that used to take 5 to 6 sec and now takes 3.5 to 4 sec)...
>
> Well, I don't know what conclusion to draw from this... besides the
> obvious superiority of Transcript... I guess some wise and
> experienced guys will tell me that for sophisticated DB processing I
> should have switched to a better product (like Valentina) a long
> time ago...
>
> But nevertheless, I'm curious to know whether anyone has already
> faced the need to (almost) completely drop SQL in favor of
> Transcript for DB data searches...
>
> Best,
> JB
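
P.S. For anyone curious, the dump-and-filter approach JB describes
boils down to something like this. The field positions and the tests
are placeholders standing in for his 50-odd nested booleans:

  function matchingRecords pDumpFile
     local tDump, tMatches
     set the itemDelimiter to tab
     -- read the whole tab-delimited table dump in one go
     put URL ("file:" & pDumpFile) into tDump
     repeat for each line tRecord in tDump
        -- stand-in for the real WHERE logic
        if item 3 of tRecord is "active" and item 7 of tRecord is "west" then
           put tRecord & return after tMatches
        end if
     end repeat
     if tMatches is not empty then delete the last char of tMatches
     return tMatches
  end matchingRecords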
