Issues with storage of data in stack

Lagi Pittas iphonelagi at gmail.com
Mon Mar 12 13:31:06 EDT 2018


Hi Mark,

Thanks for the detailed explanation, but I have a few(ish) questions ...

Hope you don't mind me asking these questions. I did have to write my
own access routines in those bad old days before I started on
Clipper/FoxPro/Delphi/Btrieve, and I do enjoy learning from others on
the list and the forums - those AHA! moments when you finally get how
the heapsort works the night before the exam.

Many moons ago I wrote a multi-way B-tree, based on the explanation in
Wirth's book "Algorithms + Data Structures = Programs", in UCSD
Pascal for the Apple II. I had a 5MB hard drive for the bigger
companies when I was lucky; for the smaller companies I made do with
two 143K floppy disks and hashing for a "large" data set - oh, the
memories. I used the B-trees if the codes were alphanumeric. I also
had my own method where I kept the index in the first X blocks of the
file and loaded the parts into memory as they were needed - a
brain-dead version of yours, I suppose. I think we had about 40K of
free RAM to play with, so we couldn't always keep everything in RAM. I
even made the system multi-user and ran 20 Apple ][s on a proprietary
Nestar/Zynar network using ribbon cables - it worked, but am I glad we
have Ethernet!

Anyway - I digress. I can understand the general idea of what you are
explaining, but it's the low-level code for writing to the
clusters/file on disk I'm not quite sure of.
Which way do you build your initial file? Is it "sparse" or prebuilt,
or does each cluster have a "pointer" to the previous or next cluster?
Do you have records "spanning" clusters, or do you leave any spare
space in a cluster empty? Do you mark a "record" as deleted but not
remove it until it's overwritten, or do what FoxPro/dBase does and
"PACK" them with a utility routine?
I also presume you use the "at" option in the write command to write
the clusters randomly, since you don't write the whole in-memory table.

Which brings me on to my final questions - I presume your system is
multi-user because you have a server program that receives calls and
executes them sequentially? And lastly, what are the file size
limitations of doing it this way - do you also virtualize the data in
memory?

Sorry for all the questions, but this is the interesting stuff.

Regards Lagi

On 11 March 2018 at 20:02, Mark Talluto <mark at canelasoftware.com> wrote:
> Hi Lagi,
>
> Our LiveCode array database does not use SQL or any other database engine. As
> a local-only database, it relies on nothing but LiveCode; it is
> purely a LiveCode-derived system. Data is manipulated using methodologies
> familiar from other databases. The data is encrypted using ‘encrypt’ and
> stored using 'arrayEncode()’.
>
> The full array of the database is stored in memory. This method provides
> very quick access to your data thanks to the amazing performance provided by
> LiveCode.
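As a rough illustration of the serialize-then-store idea (LiveCode's 'arrayEncode()' followed by 'encrypt'), here is a minimal Python sketch. This is not LiveCode and not CassiaDB's actual format: pickle stands in for arrayEncode, and zlib compression stands in for the encryption step, purely as placeholders.

```python
import pickle
import zlib

def save_table(table, path):
    # Analogue of arrayEncode(): turn the nested dict into a binary blob.
    blob = pickle.dumps(table)
    # Placeholder for 'encrypt': real code would encrypt here, not just compress.
    blob = zlib.compress(blob)
    with open(path, "wb") as f:
        f.write(blob)

def load_table(path):
    # Reverse the steps: read, "decrypt" (decompress), then decode.
    with open(path, "rb") as f:
        blob = f.read()
    return pickle.loads(zlib.decompress(blob))
```

The point is only the shape of the pipeline: the whole table lives in memory as a nested structure, and persistence is encode, protect, write.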
>
> This might get a little long. I am happy to take this off list for more
> details. I will try to be as succinct as possible.
>
> -A little more explanation on storing data-
> Each record is stored in an array that looks like this:
> tableID/clusterID/recordID/recordData…
> When one or more records are updated, we cache the recordIDs that were touched.
> All data is updated first in memory then cached to disk. We then refer to
> the cached records and conclude which clusterIDs we affected. Thus, you can
> very quickly save only the clusters that have been modified. Each cluster
> will have one or more records associated with it. The clusters are the first
> ’n’ characters of the recordIDs. We use UUIDs as recordIDs. The cluster
> sizes are definable, giving us the power to decide where to apply the
> optimization. Clusters of only one or two characters generate fewer
> clusters to be stored. This makes loading tables from disk into RAM very
> fast. A cluster of 3 chars or more allows fewer records per cluster, which
> makes saving from RAM to disk faster.
>
> The pseudocode for this might look like this:
> -receive request for update in your API
> -store the changes to your master array in RAM
> -remember the recordIDs touched in a variable
> -calculate the clusters touched by taking the first ’n’ characters of the
> records touched and make a new list of the clusters you need to write to
> disk
> -write appropriate clusters to disk
> -return the results of the action (any errors, recordIDs…)
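The steps above can be sketched in Python (not LiveCode). The class name, the cluster width of 2, and the `write_cluster` callback are illustrative assumptions; the technique shown is the one described: derive the clusterID from the first n characters of a UUID recordID, track dirty clusters, and write only those.

```python
import uuid

CLUSTER_CHARS = 2  # definable cluster size, as in the post


class ArrayDB:
    def __init__(self):
        # tableID -> clusterID -> recordID -> recordData
        self.tables = {}
        # (tableID, clusterID) pairs touched since the last save
        self.dirty = set()

    def update(self, table_id, record):
        # Store the change in the master in-memory array...
        record_id = str(uuid.uuid4())
        cluster_id = record_id[:CLUSTER_CHARS]  # cluster = first n chars of recordID
        table = self.tables.setdefault(table_id, {})
        table.setdefault(cluster_id, {})[record_id] = record
        # ...and remember which cluster was touched.
        self.dirty.add((table_id, cluster_id))
        return record_id

    def flush(self, write_cluster):
        # Write only the clusters modified since the last flush.
        for table_id, cluster_id in self.dirty:
            write_cluster(table_id, cluster_id, self.tables[table_id][cluster_id])
        self.dirty.clear()
```

A save then costs one write per touched cluster rather than one write of the whole table, which is the optimization Mark describes.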
>
> You will find this method to be very performant and easy to manage. This is
> not particularly complicated to write. Once you get it all working you might
> add other niceties like:
> -error checking the input before storing anything
> -store metadata on each updated record: recordVersion, recordTime,
> updateTime
> -add security using ‘encrypt’
> -build simple APIs to do your CRUD first
> -add other APIs as needed to make accessing your data easier
>
> Here is an example API for storing data that you may find useful when making
> your own system.
>
> -Input (array)-
> put “Steve” into tInputA[“firstName”]
> put “Jobs” into tInputA[“lastName”]
> put “rolodex” into tInputA[“cdbTableName”]
> put “local” into tInputA[“cdbTarget”] —We would use ‘cloud’ when we want to
> store offsite.
>
> Your system might verify that the keys ‘firstName’ and ‘lastName’ are valid
> keys. That is very SQL-like in terms of error checking. Or, do not check the
> keys and feel more NoSQL in nature.
>
> From here we take the array and pass it to a function.
> put cdb_create(tInputA) into tRecordID —returns the unique UUID representing
> the recordID
>
> The ‘cdb_create()’ function basically runs the pseudo code proposed above.
> And voilà, you have your first record stored.
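A hypothetical Python analogue of such a create call might look like the following. The `SCHEMA` table, the global `DB` dict, and the 2-character cluster width are all assumptions for illustration, not CassiaDB's actual implementation; the input keys mirror the LiveCode example above.

```python
import uuid

# Optional, SQL-style key checking per table (assumed schema for the example).
SCHEMA = {"rolodex": {"firstName", "lastName"}}

# tableID -> clusterID -> recordID -> recordData
DB = {}


def cdb_create(input_a, cluster_chars=2, check_keys=True):
    """Create a record from an input array; return the new recordID (a UUID)."""
    fields = dict(input_a)
    table = fields.pop("cdbTableName")
    fields.pop("cdbTarget", None)  # 'local' vs 'cloud' routing is ignored here
    if check_keys and table in SCHEMA:
        unknown = set(fields) - SCHEMA[table]
        if unknown:
            raise KeyError("unknown field(s): %s" % sorted(unknown))
    record_id = str(uuid.uuid4())
    cluster_id = record_id[:cluster_chars]  # cluster = first n chars of recordID
    DB.setdefault(table, {}).setdefault(cluster_id, {})[record_id] = fields
    return record_id
```

Dropping the `SCHEMA` check (or passing `check_keys=False`) gives the looser, NoSQL-flavoured behaviour mentioned above.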
>
> I hope this gives you some ideas. We have successfully used this in
> enterprise-level scenarios. We exchange data nightly with our customers'
> databases. We have not run into any issues with IT in terms of the type of
> database we are using in our backend. I fully support your experimentation
> with arrays as a data store.  :)
>
> If you are interested in seeing other APIs we have found useful, here is a
> running list. Look under the API dropdown for more ideas.
> http://canelasoftware.github.io/cassiaDB_docs/
>
> -Mark
>
>
> On Mar 9, 2018, at 2:07 AM, Lagi Pittas <iphonelagi at gmail.com> wrote:
>
> HI Mark,
>
> I am intrigued by your way of saving only what's changed, and also when
> you say save to disk after arrayEncode. Do you mean a simple save
> as a binfile, or in an SQLite BLOB?
>
> I would really like to see some example-ish code on saving to disk - if
> it's other than a single array in a single binfile - I think even I
> can do that. But your other ideas sound brilliant.
>
> Regards Lagi




More information about the Use-livecode mailing list