having to help Rev (was: Re: Memory Leak on export png????)
Dave
dave at looktowindward.com
Fri Mar 23 11:49:42 EDT 2007
On 23 Mar 2007, at 15:00, Richard Gaskin wrote:
> Dave persisted:
>> On 22 Mar 2007, at 18:29, Richard Gaskin wrote:
>>> In the ten years I've been working with this engine this is the
>>> first verified leak I've seen. Let's be generous and say that
>>> maybe one or two others might have been discovered in that
>>> time. Even then, over a decade that's really quite good -- and
>>> accomplished without automated stress testing.
>> There were three problems, the leak was just one of them.
>
> The third "problem" (garbage collection appearing to only be done
> at idle) was based on a misunderstanding of results and turned out
> to have no supporting evidence in this case, as I noted earlier:
> <http://lists.runrev.com/pipermail/use-revolution/2007-March/095651.html>
>
> That leaves only two, both of which require multiple iterations to
> be evident.
>
>>> The export command appears to work well when run once or even a
>>> dozen times. Unit testing should always be done, and in this
>>> case would yield a good result. Only a sustained test with a
>>> great many iterations will expose this specific problem, and
>>> only in the Rev IDE. The leak doesn't exist in a standalone or
>>> in other IDEs, and since some issues may be specific to
>>> standalones it would be necessary to run any soak tests in at
>>> least those two environments.
>> Not really. If you were to write files 1 to 300 you would hit it
>> at 288, and I had it happen earlier than that to start with.
>
> Agreed: more than 288 iterations would be needed to see that problem.
Not necessarily. For instance, others have reported it happening at
fewer iterations.
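
For reference, here's a minimal sketch of the kind of loop I mean, in
Revolution script. It assumes the standard export snapshot syntax; the
file names and folder are just placeholders:

   on mouseUp
      -- soak test: export the current card to disk 300 times in a
      -- row, watching the engine's memory footprint as it runs
      repeat with i = 1 to 300
         -- placeholder path; point it at any writable folder
         put "leaktest_" & i & ".png" into tPath
         export snapshot from this card to file tPath as PNG
      end repeat
      answer "Done: 300 exports"
   end mouseUp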
>
>> In fact the memory leak would be visible straight away; all you
>> have to do is run it once and look at the memory allocations.
>
> That memory fluctuates while a program is running is normal. The
> cumulative effect in which some of that memory isn't released is
> only evident with multiple iterations.
Not if the memory allocated is large, as is the case with images and
movies.
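
For example, on OS X you can check the engine's resident memory before
and after a single export, right from the message box. This assumes the
process name contains "Revolution", so adjust the grep pattern to suit:

   -- show resident memory (RSS, in KB) for the Revolution process
   put shell("ps axo rss,comm | grep -i revolution")

If that figure jumps after one large export and never drops back, the
allocation was never released.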
>
>>> Consider all the effort to design, build, and run these tests,
>>> release after release, year after year, and after a decade of
>>> that we might turn up a leak or two and maybe a relative handful
>>> of other errors of the sort which can be exposed through that
>>> specific form of testing.
>> It took me 10 minutes to build the test for the export snapshot
>> command and 2 minutes to run it. On the first part I was working
>> on (last week) it took me 30 minutes to build the tests and about
>> the same to run them. I then ran them at least once a day
>> (after I'd added/changed things) to make sure I hadn't broken
>> something.
>
> Hindsight. I'm sure you're aware that good soak tests commonly run
> far longer than 2 minutes, and in most cases for good reason.
>
> That this one isolated case was discoverable in less is as much of
> an anomaly as the rarity of the bug itself.
I agree, but the
>
>>>> Another problem here is that people may have different ideas on
>>>> what "Beta" means and I haven't seen it defined in terms of
>>>> RunRev. One company I worked for defined it as meaning
>>>> "Feature Complete, No Known Crashing Bugs".
>>>
>>> That's the ideal, but I've known no company that ships every
>>> Beta in a state that meets that definition.
>> Well, I've beta tested Photoshop, and AFAIK there were no known
>> crashing bugs, and AFAIR it was feature complete.
>
> You wrote "Known" crashing bugs. That would require a level of
> testing rarely if ever possible in commercial application
> development. In fact other apps from Adobe (and other large
> companies as well) have been delivered to Beta with bugs which were
> discovered to cause crashes, and crashing issues sometimes even
> survive undetected into final builds.
Ok, this is just a language problem. By "known", what I meant was
that if a crashing bug was reported, it would be fixed before the
next beta was released.
>
>>> I've participated in tests for Adobe, Microsoft, Apple, Oracle,
>>> and others who have shipped Betas while still developing new
>>> features.
>> "Feature Complete" was just the way that company did it, I've
>> also seen that beta versions that still being developed. I was
>> trying to find out what "beta" meant in the wonderful world of
>> RunRev.
>
> It seems their definition is in keeping with industry norms.
Where is that defined?
>
>>> That you're able to run stress tests on all features and
>>> identify 100% of requirements to complete satisfaction before
>>> your first Beta is quite an accomplishment,
>> Could you tell me where I said that?
>
> The sum of your posts suggest an expectation for other developers
> of that level of effort. It would seem reasonable that such
> expectations are at least met in your own shop.
I was referring to requirements and beta testing.
>> I run Stress Tests while developing my software
>
> Great. Another 10,000 function points and your work will begin to
> approach the complexity of Rev.
And if RunRev stress tested while developing, they would have a much
more solid product.
>
>> However, stress testing is not an accomplishment at all; it's
>> really easy
>
> ...for small programs, or with selective testing possible only with
> hindsight.
All programs are made up of lots of small pieces. You stress test
each piece.
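
To illustrate, here's a throwaway harness of the sort I mean, in
Revolution script; the handler names are hypothetical:

   on stressTest pHandlerName, pIterations
      -- run one small piece of the program many times in a row;
      -- cumulative problems (leaks, slowdowns, errors) show up
      -- here long before they would in normal interactive use
      repeat with i = 1 to pIterations
         send pHandlerName to me
      end repeat
   end stressTest

   -- e.g. stressTest "exportOneSnapshot", 1000

Each piece gets its own small harness like that, so the cost per test
stays in minutes, not days.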
>> that's why I really can't see why you are going on about it
>
> You might review your posts to see where this ongoing discussion of
> stress testing originated. You're welcome to stop going on about
> it at any time.
By "going on about it" I was surprised that you didn't think that
stress testing was worth the effort and that you were surprised that
I did it as "normal" practice. I really do just take it for granted
and had thought that almost everyone else would too. For instance,
the biggest software company I have worked for (Apple) the practice
of Stress Testing was taken for granted.
All the Best
Dave