WannaCry [OT]

Mark Waddingham mark at livecode.com
Thu May 25 05:38:19 EDT 2017

On 2017-05-19 18:02, Bob Sneidar via use-livecode wrote:
> I don't think it's a matter of programming standards. The methods used
> to exploit systems are almost always something you could never have
> guessed. Flaws in code can be extremely difficult to see, as was the
> case in the SSL Heartbleed bug. None of the devs saw the bug when it
> was approved for merging. Seeing what it was and what it ought to have
> been would be like seeing a needle in a haystack. I have thought for
> some time that it is the nature of digital information and our human
> minds incapacity to comprehend it in its real form that makes it
> nearly impossible to produce "unhackable" code.

I'm not sure this is correct - and it is important that we don't 'lull 
ourselves into a false sense of security' by assuming that 'oh we could 
never have guessed that'.

The reality is that whilst exploiting a vulnerability in general is 
REALLY HARD (seriously, when I say REALLY HARD, I mean REALLY REALLY 
REALLY HARD - which is why you only tend to see exploits in things 
which offer a very large reward for making the exploit; hackers have to 
consider ROI too!), all a hacker needs is a vulnerability in the first 
place.

Eliminate the chance of vulnerabilities and you eliminate the 
possibility of exploits. Complete elimination is, of course, the ideal; 
but generally, if you reduce the chance of introducing a vulnerability 
to the absolute minimum, then you hugely reduce the chance of an 
exploit appearing (because finding vulnerabilities to exploit becomes 
much, much harder).

Simplifying matters a bit, you can pretty much divide vulnerabilities 
into two classes:

   1) vulnerabilities introduced because of how something is written

   2) vulnerabilities introduced because of how something is done

The latter class (2), I will concede, is much harder to spot. So-called 
'information leakage' is a good example of (2) - this is where the 
method you use to do something causes 'secrets' to leak into an 
accessible channel. Such leakage can even be caused by what the 
processor itself does (registers left unreset across a call to a 
critical function, side-channel data observable due to the way 
HyperThreads share a processor core, etc.). This is of critical concern 
in security stacks (such as SSL and strong encryption implementations) 
and is why the universal advice is: never implement such things 
yourself - use a library which has the involvement of cryptography and 
security experts, or employ such a person to do it for you.
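
As a minimal sketch of the kind of leakage meant here (my example, not 
from the original post, and the function name is hypothetical): a naive 
memcmp() returns as soon as two bytes differ, so the time it takes can 
leak how much of a secret an attacker has guessed correctly. A 
constant-time comparison touches every byte regardless:

   #include <stdio.h>
   #include <stddef.h>

   /* Compare n bytes without an early exit, so the running time does
      not depend on where (or whether) the buffers differ. */
   static int constant_time_equal(const unsigned char *a,
                                  const unsigned char *b,
                                  size_t n)
   {
      unsigned char diff = 0;
      for (size_t i = 0; i < n; i++)
         diff |= a[i] ^ b[i];   /* accumulate differences, never return early */
      return diff == 0;
   }

   int main(void)
   {
      const unsigned char secret[4] = {'a', 'b', 'c', 'd'};
      const unsigned char guess[4]  = {'a', 'b', 'c', 'e'};
      printf("%d\n", constant_time_equal(secret, guess, 4));
      return 0;
   }

This is exactly the sort of subtlety that vetted crypto libraries get 
right and home-grown code tends to get wrong.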

Vulnerabilities in the former class (1) essentially all boil down to 
mistakes in coding which mean that a suitably motivated hacker can use 
the mistake to execute arbitrary code of their own - one of the biggest 
sub-classes of these is 'buffer overruns':

   #include <stdio.h>

   int main(int argc, char *argv[])
   {
      if (argc != 2)
        return 0;

      char t_buffer[32];
      /* sprintf() has no bound - an argv[1] longer than the buffer
         overruns the stack */
      sprintf(t_buffer, "Argument 1: %s", argv[1]);

      fprintf(stderr, "%s\n", t_buffer);

      return 0;
   }

Here I have a chance of being able to construct a string, passed as a 
command line argument to my program, which could execute arbitrary code 
encoded in that string - because I am potentially able to overwrite the 
stack at critical points to execute something that was not intended.
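
The bounded fix for this particular mistake is straightforward (my 
sketch, not from the original post; the format_arg helper is 
hypothetical): snprintf() writes at most 'size' bytes including the NUL 
terminator, so an over-long argument is truncated rather than allowed 
to overrun the stack:

   #include <stdio.h>

   /* snprintf() never writes past 'size' bytes, so 'out' cannot be
      overrun no matter how long 'arg' is. */
   static void format_arg(char *out, size_t size, const char *arg)
   {
      snprintf(out, size, "Argument 1: %s", arg);
   }

   int main(int argc, char *argv[])
   {
      if (argc != 2)
        return 0;

      char t_buffer[32];
      format_arg(t_buffer, sizeof(t_buffer), argv[1]);
      fprintf(stderr, "%s\n", t_buffer);

      return 0;
   }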

Another class of (1) is failure to sanitize inputs:

   #include <stdio.h>

   int main(int argc, char *argv[])
   {
      if (argc != 2)
        return 0;

      char t_buffer[1024];
      snprintf(t_buffer, sizeof(t_buffer),
               "DELETE FROM MyImportantDatabase WHERE Name = '%s'",
               argv[1]);

      RunSQL(t_buffer); // Mythical call for illustration only

      return 0;
   }

Here I've constructed an SQL query by inserting an unescaped string 
directly into an SQL statement that I then execute. With this an 
attacker can do anything to the database they like - just by embedding 
a quote or a ';' in the input.
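
The proper fix is parameterised queries (prepared statements), where 
the database API keeps data separate from the SQL text. Purely as an 
illustration of what 'sanitizing' means (my sketch, not from the 
original post; escape_sql_literal is a hypothetical helper, and 
hand-rolled escaping should not be used in real code), standard SQL 
escapes a single quote inside a string literal by doubling it:

   #include <stdio.h>
   #include <stdlib.h>
   #include <string.h>

   /* Return a newly allocated copy of 'in' with every single quote
      doubled, so the value cannot terminate the SQL string literal it
      is embedded in. Caller frees the result. */
   static char *escape_sql_literal(const char *in)
   {
      size_t len = strlen(in);
      char *out = malloc(2 * len + 1);   /* worst case: every char a quote */
      if (out == NULL)
         return NULL;

      char *p = out;
      for (size_t i = 0; i < len; i++)
      {
         if (in[i] == '\'')
            *p++ = '\'';                 /* '' is a literal quote in SQL */
         *p++ = in[i];
      }
      *p = '\0';
      return out;
   }

   int main(void)
   {
      char *escaped = escape_sql_literal("O'Brien'; DROP TABLE Users; --");
      printf("%s\n", escaped);
      free(escaped);
      return 0;
   }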

I'd put my neck out and say that all vulnerabilities in case (1) can be 
prevented by strict programming standards and review - or, better, by 
using a language which doesn't let you make those kinds of mistakes in 
the first place.
So, in short, I'd perhaps suggest that all exploits we see are caused by 
one of two things:

   (1) Using tools which are too low-level for the job at hand, or 
using tools which the author is not experienced enough to use fully and 
absolutely correctly.

   (2) Writing code to do a task when you do not have enough 
domain-specific knowledge to do it correctly.

Just my two pence :)

Warmest Regards,


Mark Waddingham ~ mark at livecode.com ~ http://www.livecode.com/
LiveCode: Everyone can create apps
