[revServer] process timeout issue

Andre Garzia andre at andregarzia.com
Fri Aug 6 13:37:15 EDT 2010


Jim,

I have some profilers running in my PHP development on our test
machines. The cachegrind files amount to 250 MB after three days of the
system being up. Be aware that this is only the test machine; if something
similar were running on the production system, our logs and cachegrind files
would make the server run out of space in less than a week.
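
To Jim's point below about keeping log files from growing too large, a guard
along these lines would keep any single file in check (a rough revTalk
sketch; the 10 MB threshold and the rename scheme are invented for
illustration, not what we actually run):

on rotateLogIfNeeded pLogPath
   -- pLogPath is assumed to be a full path to the log file
   local tFolder, tName, tSize
   -- find the file's size via "the detailed files" (item 2 is bytes)
   set the itemDelimiter to slash
   put item 1 to -2 of pLogPath into tFolder
   put item -1 of pLogPath into tName
   set the defaultFolder to tFolder
   set the itemDelimiter to comma
   repeat for each line tLine in the detailed files
      if urlDecode(item 1 of tLine) is tName then
         put item 2 of tLine into tSize
         exit repeat
      end if
   end repeat
   -- roll the file over once it passes ~10 MB
   if tSize is a number and tSize > 10 * 1024 * 1024 then
      rename file pLogPath to (pLogPath & "." & the seconds)
   end if
end rotateLogIfNeeded

Run that every so often before appending and no single file gets a chance to
eat the disk.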

That kind of growth is also why we have some modes here (gated roughly as in
the sketch after the list):

"normal" - only logs important stuff
"debug" - very verbose
"andre-daily-special" - we can basically time travel anyware and see all
that was happening inside the CPU, RAM and DISK. (xDebug, XHProf, Inclued
and lots and lots of logs)
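
Roughly, the gating works like this (just a sketch in revTalk; the handler
and variable names are made up for illustration, not our real code):

on logIt pLevel, pMsg
   global gLogMode -- "normal", "debug" or "andre-daily-special"
   -- in "normal" mode only the important stuff gets written
   if gLogMode is "normal" and pLevel is not "important" then exit logIt
   -- append a timestamped line to the log (the path is just an example)
   put the internet date && pLevel && pMsg & return \
         after URL "file:logs/server.log"
end logIt

The other two modes simply let everything through and leave the rest to the
profilers.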

On Fri, Aug 6, 2010 at 2:32 PM, Jim Ault <jimaultwins at yahoo.com> wrote:

> The one provision ( or gotcha ) that I would add to this discussion is
>        log files
>
> You should ensure that log files do not become too large, especially if
> they are using an XML format, which is very fluffy.
>
> I just had a client decide to program his ping schedule into my system at
> once every 3 seconds instead of the designed 50 per hour. One of my log
> files expanded to 42 MB and caused a connection timeout ( greater than 10
> seconds ).  I have now taken steps to protect my dedicated server, but the
> shared hosting part of the network cannot be changed ( the admin company is
> not willing to do this ).
>
> Most log file logic appends new transactions to the end of an existing
> file, so a 2 MB file takes more time to append to than a 100 KB one.  Some
> servers start new log files every calendar day, but others do so much less often.
>
> You may not be able to control this frequency, but you might be able to
> switch modes.  In some systems, every event is logged only when a 'verbose'
> or 'debugging' mode is switched on.
>
> If you are on a shared host, you may not have permission to edit or
> delete these files.  On your own server, you should check whether the log
> files could expand quickly, especially if you are doing a week of intense
> testing and programming.
>
> On Aug 6, 2010, at 9:55 AM, Mark Talluto wrote:
>
>> On Aug 4, 2010, at 7:53 AM, Richard Gaskin wrote:
>>
>>> I've been experimenting with spidering, data mining, and analytics, and,
>>> as with any processor-intensive task, it would never occur to me to put
>>> them on a shared host.
>>>
>>> Like many hosts, the one I'm using offers dedicated servers for less than
>>> $70/mo, but being a cheapskate I've gone one step further during this
>>> experimental phase:  I bought a nettop off eBay for just $150, set it up
>>> with Ubuntu and Rev, and that does all the heavy lifting 24/7, posting only
>>> the output from those processes to my servers periodically as needed.
>>>
>>> I never run into the CPU cycle limits most hosts have on their servers,
>>> and I don't even slow down my own web server from its tasks of serving pages
>>> to my visitors and handling their purchases.
>>>
>>> When the workflow expands to require tighter integration between the
>>> processing and the output, I can move the system from my office to a
>>> dedicated server with multiple redundant fat-pipe connections for just a few
>>> bucks a month.
>>>
>>> There are a million ways to create robust scalable infrastructures to
>>> handle any load.  Many are cheap and easy to do, and for most of those tasks
>>> you can do them all in one fun language.
>>>
>>
>> We have been using this technique for years.  We even posted the
>> application we use to do this task in RevNet.  I believe I need to update
>> that file now that I think of it.  But in short, we use our ISP to gather
>> orders.  Our client software sends a request for a key.  Our local computer
>> in the lab just pings the directory on the ISP every 4 seconds and downloads
>> all the orders in that given directory.  The heavy lifting and database work
>> is done on a computer in the lab.  The key is then sent back up to the ISP
>> where the client computer is checking in for the result of that work every 4
>> seconds.  The whole thing works out nicely and we keep our CPU usage low.
>>
>> Mark Talluto
>>
>
> Jim Ault
> Las Vegas
>



-- 
http://www.andregarzia.com All We Do Is Code.


