benches // Stress-testing SQLite

Ruslan Zasukhin ruslan_zasukhin at valentina-db.com
Sun Oct 31 04:42:54 EDT 2010


On 10/31/10 9:55 AM, "Pierre Sahores" <psahores at free.fr> wrote:

Hi Pierre,

> If a test could be set up to benchmark the same test database set, running
> as:
> 
> - PHP + Oracle 11g
> - PHP + PostgreSQL 8.2
> - PHP + Valentina
> 
> - LiveCode server + Oracle 11g
> - LiveCode server + PostgreSQL 8.2
> - LiveCode server + Valentina

Oracle and PostgreSQL are servers.

> using less expensive but comparable hardware configs, like:
> 
> iMac 27" i7 quad-core 2.8 GHz and an equivalent desktop PC
> 
> to test the respective performance of the app servers + databases
> 
> on Linux, OSX SL, and Solaris 10,
> 
> I'm not sure at all that PostgreSQL would be slower than Oracle 11g on both
> the OpenSuse 11 and OSX SL platforms, and it would be interesting to know
> how Valentina performs on its own against both PostgreSQL and Oracle (would
> it be faster, as it is presented to be on the http://www.valentina-db.com/
> site?).

Personally, I have never run benchmarks against Oracle or PostgreSQL.
But we have information from our users over the last, let's say, 4-5 years.
This information allows us to make an indirect comparison.

1) Valentina has been FASTER than MySQL many times over, as it is than any
other row-based DB.

Hmm, I do not want to repeat information that can easily be found publicly,
e.g. why a columnar DB can be faster.

An easy example: Richard wants a 23GB table with 5 million records and 20
fields. If this table lives in a ROW-based DB such as Oracle, PostgreSQL,
MS SQL, MySQL, or SQLite, then it needs at least 23GB, and most probably
about 1.5x that, because page storage is used. So on disk it will most
probably take 30-35GB. That is the table alone, without indexes.

If a row-based DB needs to scan column F1 of the table, then it needs to load
all of that ~30GB from disk.

A columnar DB needs to read only the field itself. So if F1 is a ULONG (4
bytes), we have to read only 5M records * 4 bytes = 20MB from disk. You see?
  30GB / 20MB = a x1500 win.

Not bad? 

If a normal HDD gives you 30MB/sec of read throughput:
        20MB to read from disk is      <1 sec
        30GB to read from disk is    1000 sec  -> about 17 min
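
To make this arithmetic concrete, here is a back-of-envelope sketch in
Python; the record count, on-disk size, field width, and disk speed are the
hypothetical figures from this example, not measurements:

# Back-of-envelope check of the numbers above (hypothetical figures).
RECORDS        = 5_000_000     # 5 million records
ROW_TABLE_MB   = 30_000        # ~23GB of data, ~30GB on disk with page overhead
ULONG_BYTES    = 4             # F1 is a ULONG: 4 bytes per value
HDD_MB_PER_SEC = 30            # a normal HDD, sequential read

column_scan_mb = RECORDS * ULONG_BYTES / 1_000_000   # columnar DB reads only F1
row_scan_mb    = ROW_TABLE_MB                        # row-based DB reads it all

print(f"columnar read:  {column_scan_mb:.0f} MB, "
      f"{column_scan_mb / HDD_MB_PER_SEC:.1f} sec")        # 20 MB, <1 sec
print(f"row-based read: {row_scan_mb} MB, "
      f"{row_scan_mb / HDD_MB_PER_SEC / 60:.0f} min")      # 30000 MB, ~17 min
print(f"win: x{row_scan_mb / column_scan_mb:.0f}")         # x1500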


** OR for a Boolean field, stored as one bit per record, Valentina needs to
read only 625KB against that 30GB, and you can get (wow!) a x48,000 speedup
on this field.

Of course these are extreme values, but they can be valid in some cases.
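
A quick sketch of the Boolean numbers, assuming the column is stored
bit-packed at one bit per record:

# Same idea for a Boolean field: one bit per record, bit-packed.
RECORDS      = 5_000_000
bool_col_kb  = RECORDS / 8 / 1_000       # 625 KB for the whole column
row_table_kb = 30_000_000                # the same ~30GB table

print(f"boolean column: {bool_col_kb:.0f} KB")        # 625 KB
print(f"win: x{row_table_kb / bool_col_kb:,.0f}")     # x48,000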


** And again, this is only ONE of many factors behind the speedups you get in
Valentina. Others come from its data model and unique tools such as ObjectPtr
and BinaryLinks, which easily give an additional x4-x8 speedup on joins (see
the sketch below). And so on.
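
To give a feel for why direct record links speed up joins, here is a
conceptual sketch in plain Python. This is NOT Valentina's actual API; all
names and data are made up for illustration:

# A key-based join must resolve a logical key through an index for every
# child row; an ObjectPtr-style link stores the parent's record ID
# directly, so the "join" is an O(1) jump with no index at all.

persons = ["Ann", "Bob", "Cid"]               # parent table; record ID = index

# Key-based join: resolve the key via an index (a dict here) per row.
person_index  = {"ann": 0, "bob": 1, "cid": 2}
phones_by_key = [("555-0101", "bob"), ("555-0102", "ann")]
for phone, key in phones_by_key:
    print(phone, "->", persons[person_index[key]])   # index probe per row

# ObjectPtr-style link: the child row stores the parent record ID itself.
phones_by_ptr = [("555-0101", 1), ("555-0102", 0)]
for phone, rec_id in phones_by_ptr:
    print(phone, "->", persons[rec_id])              # direct record access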


-----------------
2) PostgreSQL has always been called a "tortoise" compared to MySQL.
A lot of developers have said this publicly...

In the last year, more people have moved to PostgreSQL, mainly because of
MySQL's license and its ownership by Oracle.



-----------------
3) Oracle vs Valentina
        Oracle is famous for its scalability.
        We are not going to win there, so far :)

But speed ... 

One Korean team told us they did the following:

* EXPORT the data from Oracle
* IMPORT it into Valentina DB using Vstudio
* Run the various searches using Valentina

And all of that together was still faster than running those searches in
Oracle.

Oracle is not stupid; it is one of the coolest pieces of engineering around.
But it has to solve other tasks... It fights to support thousands of users
around a server. As a result, its disk files carry overhead that we do not
have.


Btw, about 2-3 years ago some Oracle developers left and founded a new
company around a new columnar DB, Vertica. I can assume some things in
Vertica beat Valentina DB. For example, we do not yet have index compression.
But Vertica costs so much more than Valentina that we play in very different
segments of the market.


-- 
Best regards,

Ruslan Zasukhin
VP Engineering and New Technology
Paradigma Software, Inc

Valentina - Joining Worlds of Information
http://www.paradigmasoft.com

[I feel the need: the need for speed]




