Security Encryption Government Privacy

NSA Allegedly Exploited Heartbleed 149

A user writes: "One question arose almost immediately upon the exposure of Heartbleed, the now-infamous OpenSSL exploit that can leak confidential information and even private keys to the Internet: Did the NSA know about it, and if so, did they exploit it? The answer, according to Bloomberg, is 'Yes.' 'The agency found the Heartbeat glitch shortly after its introduction, according to one of the people familiar with the matter, and it became a basic part of the agency's toolkit for stealing account passwords and other common tasks.'" The NSA has denied this report. Nobody will believe them, but it's still a good idea to take it with a grain of salt until actual evidence is provided. CloudFlare did some testing and found it extremely difficult to extract private SSL keys. In fact, they weren't able to do it, though they stop short of claiming it's impossible. Dan Kaminsky has a post explaining the circumstances that led to Heartbleed, and today's xkcd has the "for dummies" depiction of how it works. Reader Goonie argues that the whole situation was a failure of risk analysis by the OpenSSL developers.
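In code rather than comic form, the flaw is a classic unchecked length on attacker-supplied data. A minimal sketch in C (illustrative only; simplified names and structure, not the actual OpenSSL source):

    #include <stdlib.h>
    #include <string.h>

    /* Hypothetical, simplified heartbeat handler showing the bug class.
     * claimed_len comes straight off the wire (a 16-bit field, so up to
     * 64 KB); record_len is how many payload bytes actually arrived. */
    unsigned char *build_response(const unsigned char *payload,
                                  size_t claimed_len, size_t record_len)
    {
        unsigned char *resp = malloc(claimed_len);
        if (resp == NULL)
            return NULL;
        /* BUG: claimed_len is never checked against record_len, so this
         * memcpy reads past the real payload into whatever happens to sit
         * next to it on the heap: passwords, cookies, maybe key material. */
        memcpy(resp, payload, claimed_len);
        return resp;
    }

The fix is essentially a one-line bounds check: refuse to respond when claimed_len exceeds what was actually received (a sketch of that check appears further down the thread).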
This discussion has been archived. No new comments can be posted.


  • Comment removed (Score:5, Insightful)

    by account_deleted ( 4530225 ) on Friday April 11, 2014 @04:22PM (#46729487)
    Comment removed based on user account deletion
    • > You're virtually begging them to find and then sit on dangerous exploits.

      That was their mandate in the first place. Nobody begged - It was an order.
    • How do you propose to separate them? Offense and defense are not really two separate things; if you can do one, you can do the other.
      • by AHuxley ( 892839 )
        Re How do you propose to separate them? Offense and defense are not really two separate things; if you can do one, you can do the other.
        Think back to past presidents' views on parts of the US intelligence community.
        JFK had his views on the CIA after the Bay of Pigs.
        Rockefeller Commission, Church Committee, Pike Committee, Murphy Commission, the Select Committee on Intelligence and the Directorate of Operations events in 1977. The domestic activities, human experimentation issues and need for a ban on
    • by thoth ( 7907 )

      Why even have the same agency responsible for foreign electronic intelligence and put them in charge of "cyberdefence" (how I hate that term...)?

      It's a massive conflict of interest. You're virtually begging them to find and then sit on dangerous exploits.

      Their "cyberdefence" mission is to defend DoD systems, not the entire world's computers.

      If you don't like it, gripe that NIST and DHS aren't doing their jobs (they are the agencies actually responsible for commercial internet security and non-DoD government sites) or transfer/alter their authority. Everybody who thinks the NSA is there to protect their banking and email has the wrong idea of what they do.

  • by Anonymous Coward on Friday April 11, 2014 @04:24PM (#46729499)
    YOU SON OF A BITCH
  • by capedgirardeau ( 531367 ) on Friday April 11, 2014 @04:26PM (#46729531)

    I can understand this happening. It would make sense that the NSA would have someone, or multiple people, review every patch and check-in for a package as important as OpenSSL, just looking for exploitable mistakes.

    I would not be surprised if they review a great deal of FOSS software they deem important to national security.

    • Re: (Score:1, Insightful)

      by Anonymous Coward

      This is the dark side of the "with enough eyeballs, all bugs are shallow" theory. The eyeballs don't have to tell anyone else.

      The full source code is conveniently carried to the NSA without them needing to bully any company. Then it is analyzed by genius hackers who are paid top dollar for the job. They probably already have a good stock of other OSS exploits too, which are unknown to the rest of the world.

      • by JDG1980 ( 2438906 ) on Friday April 11, 2014 @05:17PM (#46729873)

        Then it is analyzed by genius hackers who are paid top dollar for the job.

        "Top dollar"? This is a government agency. They pay based on the GS scale. Even if the NSA's security hackers were classified at GS-15 (the highest rate), that's about $120K a year to begin – if they really are "geniuses" then they could do better in Silicon Valley, and probably feel better about their jobs as well.

        In general, the GS scale pays somewhat more than typical private-sector rates for low-end jobs, but considerably less for high-end jobs.

        Government contractors rake in the dough, but that money goes to politically-connected businessmen, not rank-and-file employees.

      • by koan ( 80826 )

        But this presents an interesting challenge, how do you evade someone that has control of the network? Is it possible?

    • by Smallpond ( 221300 ) on Friday April 11, 2014 @06:04PM (#46730177) Homepage Journal

      This patch was submitted at 7pm on Dec 31st, 2011, so the only people looking at it were the ones expecting it. I guess they were not disappointed.

      http://git.openssl.org/gitweb/... [openssl.org]

      • by jhol13 ( 1087781 )

        I challenge anybody to review it and find (or notice) the bug.
        My point, once again, is: C should not be used for security-sensitive programs; we should start using managed languages.
        I know it won't happen, because people are lazy and won't learn. Yet again we will think that this fix solves everything, that now OpenSSL is fixed. It most likely is not; I would be really surprised if there are no other holes already known to some Russian, Chinese, Israeli, American, ... agency, or to the mafia.

        • by Urkki ( 668283 )

          I challenge anybody to review it and find (or notice) the bug.

          Wasn't this a plain and simple case of using un-sanitized data from a packet received from the adversary (for code review purposes, all network data comes from an adversary)? Anybody doing serious code review should know to check for this and study the code until sure all such values are handled safely, or reject the change if the code is too obfuscated to be sure. Anybody missing this should not be given code review responsibility without more experienced supervision.
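          Concretely, the reviewer's checklist item here is a single comparison. A sketch of the kind of validation that was missing (in the spirit of the eventual fix; identifiers simplified, not the verbatim patch):

              #include <stddef.h>

              /* A heartbeat is only well-formed if the 1-byte type, the
               * 2-byte length field, the claimed payload, and the 16 bytes
               * of minimum padding all fit inside the record actually
               * received. RFC 6520 says malformed heartbeats are silently
               * discarded. */
              int heartbeat_is_well_formed(size_t claimed_len, size_t record_len)
              {
                  return 1 + 2 + claimed_len + 16 <= record_len;
              }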

        • I challenge anybody to review it and find (or notice) the bug.

          It's actually kind of easy to see. I just use the same trick I use when trying to read almost anyone's code: I assume that some jackass obfuscated all of his variable names and so I rename them as I figure out what they actually represent so that the new names actually describe the variable. Once that's complete, I'm left with "memcpy(pointer_to_the_response_packet_we_are_constructing, pointer_to_some_bytes_in_the_packet_we_received, some_number_we_read_out_of_the_packet_we_received)" and it immediately

          • I just use it because I hate everything else even more.

            So true

            ...but more seriously, the code in that check-in is why I hate to let anyone work on any programming projects with me.

            You can teach them how to do better.

        • Comment removed based on user account deletion
    • by neoform ( 551705 )

      You would think if the government was doing this, they would at least tell their fellow government agencies about the flaws, so that they would not be vulnerable to foreign hackers...

  • This sounds likely (Score:5, Insightful)

    by gurps_npc ( 621217 ) on Friday April 11, 2014 @04:29PM (#46729549) Homepage
    The basic fact is, if they did not exploit it, then someone working for them is thinking "DAMN, I wish I thought of using that!"
  • Why can we not start a class action lawsuit against the Government, the NSA, and those who allow snooping around in personal data without probable cause?

    • by raydobbs ( 99133 )

      One cannot simply sue a branch of the government without asking permission from the government to allow it to be sued - guess how often THAT happens? Plus the NSA has a built-in out: it's in the interests of national security. It's bullshit - we all know it - but it's a legal out; it's the reason they can deny your FOIA request for information about Area 51, the Roswell incident, or the intelligence records on Jimmy Hoffa or J. Edgar Hoover.

      • by rjh ( 40933 ) <rjh@sixdemonbag.org> on Friday April 11, 2014 @04:45PM (#46729651)

        One cannot simply sue a branch of the government without asking permission from the government to allow it to be sued - guess how often THAT happens?

        Glad you asked: it happens all the time, ever since the Tort Claims Act of 1948 substantially waived the sovereign immunity doctrine. You can read more about it at Wikipedia [wikipedia.org].

        People sue the government all the time. It's literally an everyday occurrence.

        • That doesn't provide an open-ended, unlimited right to sue. I very much doubt you'd have an allowable claim for injury on this.

        • by raydobbs ( 99133 )

          ...and I learn something new every day. Thank you for sharing that without calling me a moron. I knew it had been a few years since my last political science class, now I have something new to read up on.

    • You need to have enough evidence of *something* possibly happening to show that you have standing to bring the case.

    • That's a good idea, because the least accountable branch of government is surely on your side! /s

      The judicial branch and the supreme court serve much the same purpose as the Tsar in old Russia. No matter how bad it gets, it's not the Tsar's fault. It's the noblemen's fault. The Tsar just has bad advisers. If only we could get past them and talk to him and make him understand, it'd all be OK.

    • To start with, because of sovereign immunity.

  • If you know about it and have access to virtually unlimited resources, you can afford to attack your target as many times as you want in order to get what you want.

    And, frankly, I don't believe the guy that claims responsibility for the bug.

    As well, if something this simple could cause such an issue then clearly it is an issue for lots of other important security programs.

    • As well, if something this simple could cause such an issue then clearly it is an issue for lots of other important security programs.

      Yes, it's one of the most common memory handling bugs and is known as a buffer overflow [wikipedia.org] (strictly speaking, this one is a buffer over-read). Generally such bugs are difficult to exploit, which can be seen in the fact that nobody has actually demonstrated extracting a key using this particular bug, just that it is "possible" to do so. Winning the lottery is also "possible".

      There's all sorts of complete bullshit about this bug in the press; to paraphrase what I read today in the WSJ: "It turns out that just 4 European developers and some guy in th

      • Private key grabbed. Game over.
        One successful attempt took >2.5M requests over a day. Second successful attempt was something like 100k requests.

        http://blog.cloudflare.com/the... [cloudflare.com]

        It's all in the luck of the draw. When you don't have any logging of this, you've got no idea how long people have been poking at this and literally no idea what anyone has made off with.

  • And what are the odds there aren't at least a half dozen other bugs as serious still to be found in the OpenSSL source code ...
  • We need to find out if the author of this bug is or was on the NSA payroll. It would not be surprising to find out he was paid to put it there.

    • by 93 Escort Wagon ( 326346 ) on Friday April 11, 2014 @04:44PM (#46729647)

      The author of this bug and the reviewer of the commit have both been very forthcoming about the mistake. There's little reason to suspect malicious intent in this particular instance.

      That doesn't mean the NSA didn't know about it or exploit it, though.

      • You are probably correct. Still.

        Heinlein's razor, "never attribute to malice that which is adequately explained by stupidity, but don't rule out malice," still applies.

        Look. I get that the NSA has these incredible resources (thousands of personnel, alone), but they're still all working for the government: the king of big-company bullshit with a side of no incentive to work hard. I'll kiss a pimple on your ass if there aren't many hundreds of others disenfranchised like Snowden who lack either the luxury of being able to leave or the courage to do so.... these folks' commitment is plau

      • lol...Maybe he was sent a stack of cash with a USB flashdrive and a note "You know what needs to be done. Love, NSA"
    • by hawguy ( 1600213 )

      We need to find out if the author of this bug is or was on the NSA payroll. It would not be surprising to find out he was paid to put it there.

      The author responsible for the bug has already admitted that it was a mistake (and it's not like buffer overflows are unheard of, so it really is plausible). Sure, it's possible that the NSA secretly paid him (or even coerced him by holding some incriminating evidence over his head), but it would likely take someone with the resources of the NSA to uncover such a secret NSA payout. Something of that nature probably wouldn't even be available in Snowden's document archive.

      • Some clues might be buried in there somewhere, but until Snowden's "cache" is publicly released we'll never actually know... but I'm guessing The Guardian et al are currently combing through the archive looking for references.
    • by ceoyoyo ( 59147 )

      The author of the bug probably introduced it accidentally. It's easy to do. The author of the special wrapper code in OpenSSL that purposely prevents newer versions of malloc from doing the memory checking that would have revealed this bug is a little more suspicious.
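      For anyone who hasn't seen the allocator issue being referred to: a freelist that recycles buffers without clearing them both preserves stale secrets and hides use-after-free from malloc debuggers and tools like Valgrind. A toy sketch of the pattern (hypothetical; OpenSSL's actual freelist code is more involved):

          #include <stdlib.h>

          enum { BUF_SIZE = 4096 };          /* toy: one fixed buffer size */
          typedef struct node { struct node *next; } node;
          static node *freelist;

          void *pool_alloc(void)
          {
              if (freelist != NULL) {
                  node *head = freelist;
                  freelist = head->next;
                  return head;               /* old contents handed back as-is */
              }
              return malloc(BUF_SIZE);
          }

          void pool_free(void *p)
          {
              node *head = p;                /* reuse the buffer as a list node */
              head->next = freelist;         /* contents are never cleared */
              freelist = head;
          }

      Because freed buffers never reach free(), the system allocator's debugging features never see them, and a later pool_alloc() can return a block still holding old plaintext.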

  • by Bruce Perens ( 3872 ) <bruce@perens.com> on Friday April 11, 2014 @04:46PM (#46729661) Homepage Journal

    OK guys. We've promoted Open Source for decades. We have to own up to our own problems.

    This was a failure in the Open Source process. It is just as likely to happen to closed source software, and more likely to go unrevealed if it does, which is why we aren't already having our heads handed to us.

    But we need to look at whether Open Source projects should be providing the world's security without any significant funding to do so.

    • by Anonymous Coward on Friday April 11, 2014 @04:54PM (#46729723)

      The problem with open source when it comes to things like this is that there are so few people who are even qualified to implement protocols like this, and even fewer of them who are willing to work for nothing. The community needs to pony up some cash to have important projects audited like what they are trying to do with TrueCrypt right now.

      • by Bruce Perens ( 3872 ) <bruce@perens.com> on Friday April 11, 2014 @05:02PM (#46729769) Homepage Journal
        I'd say more than just the "community". We have a great many companies that incorporate this software and generate billions from the sales of applications or services incorporating it, without returning anything to its maintenance. I think it's a sensible thing to ask Intuit, for example: "What did you pay to help maintain OpenSSL?". And then go down the list of companies.
        • by l0n3s0m3phr34k ( 2613107 ) on Friday April 11, 2014 @09:01PM (#46731205)
          Exactly! Everyone can get to the source; the whole point of OSS is that the companies themselves can (and should, from a risk-analysis point of view) be reviewing all the code too before implementation. It's along the lines of "you get what you pay for", yet at least here everyone is given the chance to see exactly what's being run (as opposed to pre-compiled apps). IMHO, this really isn't an OpenSSL issue as much as a failing of due diligence by all the companies using it. The admin's excuse of "well, we don't actually know what the code says" fails here, and anyone over the past two years could have reviewed it themselves and fixed this! Maybe this will spur corps to make review of critical infrastructure code part of corporate policy when the source is available; perhaps the insurance companies who write "Errors and Omissions" policies will start forcing corps to do that. Kinda surprised this isn't already standard, as code review is one of OSS's main strengths, and if your company doesn't do it then they're missing out on one of the biggest assets of using OSS.
      • by AHuxley ( 892839 ) on Friday April 11, 2014 @07:14PM (#46730655) Journal
        Re: even qualified to implement protocols like this. That's a very interesting point. How many got their tools of the trade via a top university setting, with a security clearance option and dependent funding?
        Once you start down the math path the classes get smaller, and fewer stay for the needed years vs the lure of private-sector telco or unrelated software work.
        Most nations really do produce very few with the skills, and keep them very happy.
        Trips, low-level staff to help, good funding, guidance, friendships all just seem to fall into place.
        Bringing work home and helping open source could be seen as an issue later, vs students or team members who did open source games or made apps.
    • by hawguy ( 1600213 )

      OK guys. We've promoted Open Source for decades. We have to own up to our own problems.

      This was a failure in the Open Source process. It is just as likely to happen to closed source software, and more likely to go unrevealed if it does, which is why we aren't already having our heads handed to us.

      But we need to look at whether Open Source projects should be providing the world's security without any significant funding to do so.

      If it's just as likely to happen to closed source software, then why is it a failure of the Open Source process? It was discovered and fixed so quickly *because* it's open source - there may be similar holes in closed source software that are being exploited today, yet no white hats have discovered them yet.

      • Sure. We're better. But we need to be even better than that.
      • by Anonymous Coward

        It was discovered and fixed so quickly *because* it's open source

        For crikessakes, the Heartbleed vulnerability existed for over 2 years before being discovered and fixed!

        • by hawguy ( 1600213 ) on Friday April 11, 2014 @06:31PM (#46730341)

          It was discovered and fixed so quickly *because* it's open source

          For crikessakes, the Heartbleed vulnerability existed for over 2 years before being discovered and fixed!

          Sorry, my bad; that sentence was confusing -- I meant the fix was fast, not finding the bug.

          An exact timeline for Heartbleed is hard to find, but it looks like there was some responsible disclosure of the bug to some large parties about a week before public disclosure and release of the fixed SSL library.

          In contrast, Apple learned of its SSL vulnerability [nist.gov] over a month [theguardian.com] before they released an iOS patch, and even after public disclosure of the bug, it was about a week before they released the OS X patch. And just like the OpenSSL bug, Apple's vulnerability was believed to have been in the wild for about 2 years before detection. (Of course, since the library code was open-sourced by Apple, several unofficial patches were released before Apple's official patch.)
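          For comparison, Apple's bug was a duplicated line rather than a missing length check. The widely quoted shape of it, trimmed from Apple's published sslKeyExchange.c:

              if ((err = SSLHashSHA1.update(&hashCtx, &serverRandom)) != 0)
                  goto fail;
              if ((err = SSLHashSHA1.update(&hashCtx, &signedParams)) != 0)
                  goto fail;
                  goto fail;  /* duplicated line: always taken, with err == 0,
                                 so the final signature check is skipped */
              if ((err = SSLHashSHA1.final(&hashCtx, &hashOut)) != 0)
                  goto fail;

          Two different failure modes, same lesson: small, mechanically detectable mistakes in C crypto plumbing can quietly disable the security property the code exists to provide.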

          • I think we need to take a serious look at the "many eyes" theory because of this. Apparently, there were no eyes on the part of parties that did not wish to exploit the bug for close to two years. And wasn't there just a professional audit by Red Hat that caught another bug, but not this one?
            • I think we need to take a serious look at the "many eyes" theory because of this. Apparently, there were no eyes on the part of parties that did not wish to exploit the bug for close to two years. And wasn't there just a professional audit by Red Hat that caught another bug, but not this one?

              I'm going to say this calls into question the value of professional audits.

              My experience is that visual inspection of code does little to remove all the bugs. It's just really hard to muster the concentration needed to verify that the code is good with your eyes.

      • by ceoyoyo ( 59147 )

        Three years is quick?

    • This does not have anything to do with open source and everything to do with the software development process (or lack thereof) used here: something like this could've happened in a closed source library just as easily, the only difference would be that rather than source analysis you'd have used other tools to find the vulnerability: if a new addition to a protocol comes in and you have bad intentions, of course the first thing you do is to see what happens if you feed it invalid data; if you did that here you'd have f

      • And btw, funding is good, but funding does not buy you a good software development process: for that you need to actually focus on finding a good process first, and use the funding to achieve what you are planning, without forgetting that if it's a critical piece of infrastructure nowadays it will be attacked by adversaries with much larger pockets than yours no matter how large yours are, so the process has to take into account that any development is done in a completely hostile environment, where a-priori

      • There was another person who looked at it before the commit; apparently they also missed it. So a better code review policy is also needed here.
    • Bugs happen. We should assume there are more bugs of similar nature and have a plan on how to proceed when the next one is discovered. It's particularly important for OpenSSL as it may affect not only SSL encryption keys but whole chains of SSL certs.
    • by bill_mcgonigle ( 4333 ) * on Friday April 11, 2014 @06:25PM (#46730301) Homepage Journal

      This was a failure in the Open Source process.

      Indeed. People have been saying for years that the OpenSSL code leaves much to be desired but nobody dares fix it because it might break something (needed: comprehensive unit tests; a sketch follows below).

      There's been a bug filed for years saying that the code won't build with the system malloc, which in turn prevents code analysis tools from finding use-after-free conditions. The need here is less clear - leadership of the project has not made such a thing a priority. It's not clear that funding was the sole gating factor - commit by commit the code stopped working with the system malloc and nobody knew or cared.

      Sure, a pile of money would help pick up the pieces, but lack of testing, continuous integration, blame culture, etc. might well have prevented it in the first place.

      We still have sites like SourceForge that are solving 1997 problems, like offering download space and mailing lists, when what we need today is continuous integration systems, the ability to deploy a VM with a complex project already configured and running for somebody to hack on, etc.
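      On the "comprehensive unit tests" point: even a crude malformed-input regression test exercises exactly the path Heartbleed abused. A sketch of such a test (hypothetical harness; heartbeat_is_well_formed is the illustrative bounds check sketched earlier in this discussion, linked in for the test):

          #include <assert.h>
          #include <stddef.h>

          int heartbeat_is_well_formed(size_t claimed_len, size_t record_len);

          int main(void)
          {
              /* 4 real payload bytes on the wire (record = 1 type byte +
               * 2 length bytes + 4 payload + 16 padding = 23 bytes), but a
               * claimed length of 65535: the handler must refuse to respond. */
              assert(!heartbeat_is_well_formed(65535, 23));
              /* And the honest case must still pass. */
              assert(heartbeat_is_well_formed(4, 23));
              return 0;
          }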

      • by jhol13 ( 1087781 )

        "less clear"?

        Less clear my ass! I'd say there is no leadership in the project, unless "FUD" (fear of it breaking something) is called "leadership". But then, as you say, "nobody cares".
        If the code is as you describe, the whole shebang should be rewritten from scratch in a higher-level managed language. Any managed language would have prevented the information leak, although probably not the unchecked value.

        • In the process of rewriting, it's inevitable that a ton of brand-new bugs will be introduced in the new codebase, and you'll have lost all the time and effort hardening the library and fixing all of the thousands of previously exploitable issues.

          I think talk of scrapping or rewriting the library is a bit of an overreaction caused by the scale and scope of the issue, and is certainly not plausible in the short term anyhow. I'd say the proper thing to do is to halt development of new features for a time and

    • I hate to disagree with you, but this has nothing to do with Open Source, it has to do with software engineering.

      This same bug could have been introduced in closed-source software just as easily. The problem is making sure that software is securely reviewed before it's disseminated, much like the OpenBSD people have been touting all these years, instead of just throwing things together however they work.

      The only part F/OSS played in this is that we *found* the bug and can identify exactly when and how it oc

    • In some ways it shows the success of open source software; the bug was only found by Google engineers because it is open source.

      But we need to look at whether Open Source projects should be providing the world's security without any significant funding to do so.

      I'm in favor of more funding for open source, but in this case I would still trust the security of the internet to open source long before I would trust it to closed source. I've seen what too much closed source looks like, and it scares me.

  • Fork it. (Score:5, Funny)

    by grub ( 11606 ) <slashdot@grub.net> on Friday April 11, 2014 @04:53PM (#46729711) Homepage Journal

    Theo de Raadt should fork OpenSSL. He could call it OpenOpenSSL.

  • by Animats ( 122034 ) on Friday April 11, 2014 @04:53PM (#46729715) Homepage

    When this was supposedly "fixed" in OpenSSL, did the fix just fix this one known bug? A real fix includes fixing the storage allocator to overwrite all released blocks, so no other old-data-in-buffer exploit would work.

    • Bob Porter: We always like to avoid confrontation, whenever possible. Problem is solved from your end.

    • A real fix includes not rolling their own malloc, then fixing the bugs that were hidden by their badly written freelist which prevented people from reverting to a normal malloc.
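      A sketch of what "overwrite all released blocks" can look like in practice: a wrapper that stores a size header so the free side knows how much to clear (hypothetical code, not OpenSSL's; it ignores over-alignment concerns for simplicity):

          #include <stdlib.h>

          typedef struct { size_t size; } hdr;

          void *scrub_malloc(size_t n)
          {
              hdr *h = malloc(sizeof(hdr) + n);
              if (h == NULL)
                  return NULL;
              h->size = n;
              return h + 1;            /* hand out the bytes after the header */
          }

          void scrub_free(void *p)
          {
              if (p == NULL)
                  return;
              hdr *h = (hdr *)p - 1;
              /* The volatile qualifier keeps the compiler from eliminating
               * the wipe as a dead store (same idea as OPENSSL_cleanse). */
              volatile unsigned char *q = p;
              for (size_t i = 0; i < h->size; i++)
                  q[i] = 0;
              free(h);
          }

      With this in place, a Heartbleed-style over-read can still leak live buffers, but not stale data from blocks that were already released.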

  • According to who? (Score:4, Insightful)

    by radarskiy ( 2874255 ) on Friday April 11, 2014 @05:15PM (#46729851)

    Bloomberg is the reporting organization, so they can't be the source. They name no sources, just "two people familiar with the matter", which could mean they asked me twice.

  • by JKAbrams ( 3613353 ) on Friday April 11, 2014 @05:43PM (#46730053)
    Actually I wrote this yesterday but was unable to publish it:
    ...
    I have not yet grasped the full scope of the implications of this bug, but if you take the stance that anything that could have been done also has been done (imho the only safe assumption), is this a good characterization? Or are there any limiting factors that make this impossible? For example, given the amount of memory that could be leaked while the application is running (as servers aren't restarted often), is certain information that is stored statically in memory potentially not reachable?

    During the last two years:
    1. Any/all certificates used by servers running openssl 1.0.1 might have been compromised and should be revoked (the big cert-reset of 2014?)
    2. Because of 1, any/all data sent over a connection to such servers might now be known by a bad MITM (i.e. for large scale: the various security services/hostile ISPs; local scale/targeted attacks: depends on who else happened to know, and whether this person/organization happened to be your adversary; looks unlikely, but who knows...)
    3. Any/all data stored in SSL-based client applications might have been compromised.

    From a user's perspective - change all passwords/keys that have been used on applications based on openSSL-1.0.1? How to know which services? To be safe, change them all? Consider private data potentially sent over SSL to be open and readable by the security services?

    Thinking about the large-scale:
    For how long has the NSA been picking up information leaked by Heartbleed (that they have been at least since late evening on the 7th or early morning on the 8th seems a given)?
    -Not in the Snowden documents that have been revealed so far (absence of proof != proof of absence, but language might give a hint)
    -No report of unusual heartbeat streams being spotted in the wild (was anyone looking?)

    Let's assume for the sake of argument the NSA does not have people actually writing the OpenSSL code in the first place.
    When did they know about its existence?

    time_to_find_bug = (complexity_of_bug * size_of_sourcecode * complexity_of_sourcecode) / (budget * intention_to_find_bugs)

    Where
    budget = manpower * skillset
    and
    time_to_find_bug < inf.
    when
    skillset >= complexity_of_bug

    Heartbeat bug:
    complexity_of_bug = low

    OpenSSL:
    size_of_sourcecode = 376409 lines of code (1.0.1 beta1)
    complexity_of_sourcecode = high

    NSA:
    intention_to_find_bugs = 1
    budget = $20 * 10^9 ?
        => manpower = 30k ?
           skillset = high

    Guesstimate: one to a few months -> early 2012 to go through the changes made to 1.0.1 building on earlier work already done on the 0.8.9 branch...
    ...
    Or to say it another way, I think it is safe to assume that, given the simplicity of the bug, the NSA knew about Heartbleed early on. The anonymous comments to Bloomberg give nice confirmation of this.
    • Typo, meant the 0.9.8 branch of SSL of course.
    • I know that at my company we don't "restart servers" as much as re-deploy another VM instance, switch all traffic to that, then kill the old VM. Yet sometimes it is over a year between reboots of the actual ESX servers themselves, as they are sometimes hosting dozens of VMs on a single box.
  • by xvx ( 624327 ) on Friday April 11, 2014 @08:49PM (#46731135)
    Welp, that didn't take long. Looks like someone solved CloudFlare's Heartbleed Challenge [cloudflarechallenge.com] and got their private server key...
  • If the heartbeat message is stored in memory allocated near the top of the heap, then if the bug is being exploited, the server should be reading data beyond the top of the heap. If this bug has been extensively exploited, why have we not seen servers crashing every now and then? Or have we seen it?
  • Just a minor correction - my piece does indeed suggest that the OpenSSL developers have some strange priorities. However, it lays the larger blame on the companies that used OpenSSL, when all the information necessary to suggest that this kind of thing could happen was already available, and the potential consequences of a breach for larger companies are easily enough to justify throwing a little money at the problem (which could have been used any number of ways to help prevent this).
  • by pop ebp ( 2314184 ) on Saturday April 12, 2014 @05:45AM (#46732525)
    CloudFlare has retracted their statement that private key compromise is very hard. They started a challenge [cloudflare.com] and at least 2 people successfully got private keys from their Heartbleed-enabled server with as few as 100K requests. (I am sure that with some optimization, the number could be even lower.)
  • Have there been any cases where the leaked information has been useful in pointing out flaws that led to patching security holes?

  • "and today's xkcd has the "for dummies" depiction of how it works."

    Thank you, thank you, thank you! At last I get it. So simple. So fiendishly simple.
