MIT Research: Encryption Less Secure Than We Thought
A group of researchers from MIT and the University of Ireland has presented a paper (PDF) showing that one of the most important assumptions behind cryptographic security is wrong. As a result, certain encryption-breaking methods will work better than previously thought.
"The problem, Médard explains, is that information-theoretic analyses of secure systems have generally used the wrong notion of entropy. They relied on so-called Shannon entropy, named after the founder of information theory, Claude Shannon, who taught at MIT from 1956 to 1978. Shannon entropy is based on the average probability that a given string of bits will occur in a particular type of digital file. In a general-purpose communications system, that’s the right type of entropy to use, because the characteristics of the data traffic will quickly converge to the statistical averages. ... But in cryptography, the real concern isn't with the average case but with the worst case. A codebreaker needs only one reliable correlation between the encrypted and unencrypted versions of a file in order to begin to deduce further correlations. ... In the years since Shannon’s paper, information theorists have developed other notions of entropy, some of which give greater weight to improbable outcomes. Those, it turns out, offer a more accurate picture of the problem of codebreaking. When Médard, Duffy and their students used these alternate measures of entropy, they found that slight deviations from perfect uniformity in source files, which seemed trivial in the light of Shannon entropy, suddenly loomed much larger. The upshot is that a computer turned loose to simply guess correlations between the encrypted and unencrypted versions of a file would make headway much faster than previously expected. 'It’s still exponentially hard, but it’s exponentially easier than we thought,' Duffy says."
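To make the Shannon-vs-min-entropy contrast concrete, here is a quick Python sketch. The distribution numbers are made up purely for illustration; min-entropy is one of the "alternate measures" the summary alludes to, and it weights the single most probable outcome rather than the average:

```python
import math

def shannon(probs):
    """Average surprise: -sum p * log2(p)."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def min_entropy(probs):
    """Worst case: determined only by the most likely outcome."""
    return -math.log2(max(probs))

n = 256
uniform = [1 / n] * n
# Slight deviation from uniformity: one symbol is 8x as likely as the rest.
skewed = [8 / n] + [(1 - 8 / n) / (n - 1)] * (n - 1)

# Shannon entropy barely moves (8.0 -> ~7.9 bits), but min-entropy drops
# from 8 to exactly 5 bits: the best single guess got 8x better.
print(shannon(uniform), min_entropy(uniform))
print(shannon(skewed), min_entropy(skewed))
```

The point mirrors the quote: a deviation that looks trivial on average (Shannon) looms much larger from a guesser's perspective (min-entropy).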
What does this have to do with Computors? (Score:5, Funny)
I thought this was News for Nerds, but instead we are reading about Math, which is some kind of religion, and I am an Atheist.
Re: (Score:2)
Math is the One True System. Or for logicians, the one True system.
good news for NSA (Score:5, Interesting)
Re:good news for NSA (Score:5, Insightful)
I severely doubt this is news to the NSA.
Re: (Score:3, Interesting)
Re:good news for NSA (Score:5, Informative)
I read the article. The impression I got was that it will still take the same time today that it would have taken yesterday to break encryption, but it turns out that the metric used to demonstrate an algorithm's effectiveness at hiding information was inadequate for electronic communication. In a nutshell, the latest math explains that most encryption systems are vulnerable to side-channel attacks, even if you might not have realized it. But side-channel attacks have been employed for a long time, so those who do security already knew this anecdotally.
Re:good news for NSA (Score:5, Insightful)
I'll undo my moderation in this thread just to tell you that you are wrong. One cannot determine the key from the ciphertext. If one can, that is known as a "break" in the cipher.
A "break" in a cipher does not mean that it is practical to find the key, merely that it is more feasible than brute force alone. For example, a "break" could reduce the effective strength of a cipher from 256 bits to 212 bits under a known-plaintext attack. This is a BAD break in the cipher by current standards, but the cipher is still completely uncrackable on human (or even geologic) timescales.
The "weeks or months" number, by the way, has nothing to do with cracking cryptographic keys. I would surmise that is a number more geared towards cracking passwords, which is an entirely different topic. Also, for some realistic numbers on cracking encryption keys, check out Thermodynamic limits on cryptanalysis [everything2.com]
Re:good news for NSA (Score:4, Insightful)
Actually, you're both wrong.
For certain types of encryption, you are right - a known-plaintext attack that easily reveals the key is a fatal problem for the encryption method. This is true of AES, for example. The converse is also true - currently, knowing the plaintext and encrypted values for an AES-encrypted block of data does not let an attacker determine the encryption key in a reasonable amount of time. It still requires testing every possible key to see if it produces the same encrypted block given the known plaintext.
Other types of encryption are absolutely vulnerable to known-plaintext attacks. I'm less familiar with this area, but certain common stream ciphers (like RC4) are literally just an XOR operation, and so if you know the plaintext and ciphertext, you can obtain the keystream by XORing them together.
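A minimal Python sketch of that point. The byte values are stand-ins, not real RC4 output; the property holds for any XOR-based stream cipher:

```python
# Illustrative only: for any XOR-based stream cipher, a known
# plaintext/ciphertext pair hands the attacker the keystream.
def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

keystream = bytes([0x3A, 0x91, 0x5C, 0x07, 0xE2])   # stand-in keystream bytes
plaintext = b"HELLO"
ciphertext = xor_bytes(plaintext, keystream)

# The attacker XORs the known pair together...
recovered = xor_bytes(plaintext, ciphertext)
# ...and gets the keystream back, ready to decrypt anything else
# encrypted under the same keystream.
```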
Re: (Score:2)
Some stream ciphers are as you say, but the keystream is not the same as the underlying key. One can't guess the next character in the keystream without deriving the key. Most modern stream ciphers use internal feedback much in the same way that block ciphers use external feedback modes, like CBC, to prevent these attacks.
In any system without feedback like this it is always considered insecure to re-use a key at all.
Re: (Score:2)
We have no reason to believe that, despite the resources of the NSA, they are significantly ahead of the public face of encryption technologies. In fact, it has been noted numerous times that cryptographers working for the NSA aren't paid nearly as well as those in private sector positions;
It's reasonable, then, to assume that the NSA doesn't have any magic secrets beyond the gag orders alleged by affected parties [arstechnica.com].
Re: (Score:1)
This is hardly news at Fort Meade. If we're hearing about it now, the NSA probably has had the same knowledge for years.
Re:good news for NSA (Score:5, Interesting)
Maybe, maybe not. Consensus has shifted, and many researchers no longer believe that the NSA has the best and the brightest, or that they possess much fundamental cryptographic insight not already available to civilian researchers.
When the NSA tried to sneak a back door into an optional random number generator specified in a recent NIST specification, they were almost immediately caught by academics. http://en.wikipedia.org/wiki/Dual_EC_DRBG
On the other hand, operationally they're clearly second to none. Security engineering and penetration involve much more than basic mathematical insight.
Re: (Score:1)
On the other hand, operationally they're clearly second to none. Security engineering and penetration involve much more than basic mathematical insight.
Edward Snowden proved the first point wrong and the second point right.
Re:good news for NSA (Score:5, Funny)
When the NSA tried to sneak a back door into an optional random number generator specified in a recent NIST specification, they were almost immediately caught by academics. http://en.wikipedia.org/wiki/Dual_EC_DRBG [wikipedia.org]
They probably should have taken lessons from Xerox if they wanted to embed random numbers in documents.
Re: (Score:3)
I'm not sure what the intent was with Dual_EC_DRBG! It's a bit silly to believe it was "sneaking in a backdoor" because (1) people figured it out using techniques the NSA knew were public, and more importantly (2) the dang thing is so slow there's no way anyone ever would have used it in the first place.
The first you can argue was NSA arrogance, but the second? The second is just weird. I could believe the NSA trying to sneak in a backdoor, but one that obviously no one would use? I don't even?
Re: (Score:3, Insightful)
If the NSA was only concerned with open source cryptographic products and protocols, you would have a point. But aside from government procurement, NIST standards are in practice used to specify deliverables for corporate security products. Getting Dual_EC_DRBG into a NIST standard is the equivalent of putting a backdoor into an ISO standard for door locks.
Once in the standard, the NSA can then lean on vendors to use the broken algorithm, and the vast majority of users of that product would be none the wiser.
Re: (Score:2)
OK, but why on earth would the NSA need a backdoor into a US government-procured system? They have the key to the front door!
And again there's the "far too slow to actually use" thing. It's 100 to 1000 times slower than the other choices, IIRC.
Times have changed (Score:3, Interesting)
I don't have insider knowledge, this is just speculation based on societal trends. Where cryptography used to be the almost exclusive realm of governments to protect their secrets, it is now quite mainstream. Encryption protects e-commerce transactions among other things that are useful for the average person and vital to our businesses. It is now a field that university researchers pay attention to (where only cryptographers under the employ of spy agencies did previously) and companies spend their own money on it as well.
Re:good news for NSA (Score:4, Informative)
It’s still exponentially hard
Re:good news for NSA (Score:4, Insightful)
And, if you let them, the NSA will be owning exponentially expensive taxpayer-funded stuff that is then used to spy on taxpayers.
Re: (Score:3)
But at the same time
It’s still exponentially hard
Maybe stating the obvious...."exponentially" isn't a synonym for "very". How hard it is depends on what the base and the exponent are.
Re: (Score:2, Insightful)
Bad news for the NSA. Known insecurity can be fixed either through patch or brute force (bigger key). The NSA, I'm sure, prefers secret insecurity.
Re:good news for NSA (Score:5, Funny)
Re: (Score:3)
Re:good news for NSA (Score:5, Funny)
Um... Zeno died of an arrow wound trying to prove that.
"I used to believe in an infinitely divisible universe like you,
then I took an arrow in the knee."
- Zeno
Re: (Score:3)
This works only if the content is only encrypted _once_.
If you encrypt it twice, there will be no correlation, no recognizable content.
Re: good news for NSA (Score:2)
Re: (Score:2)
I'm no cryptology expert, far from it, but that was my first thought as well. If you can analyze the data by using guesses as to what the unencrypted data looks like, then encrypting twice would make that orders of magnitude more difficult, as you'd have to analyze the output of every conceivable key.
Re: (Score:2)
Re: (Score:2)
I strongly suspect that you do not understand the meaning of "exponential" in the mathematical context appropriate for this subject.
I gather that there may be a dilution of the meaning in slang to equate "exponentially" with "a lot". That is slang's problem.
Just Great (Score:5, Funny)
Just great, Now instead of 100 Quintillion years, it's only going to take 100 Trillion years to decrypt my porn
Re:Just Great (Score:4, Funny)
I have changed my key from '1234' to '123456' to mitigate this...
Re: (Score:1)
That's amazing! I've got the same combination on my luggage!
Re: (Score:1)
Re: (Score:1)
I know, which is why I use the code 12345678: to be different and have a combination that is harder to guess because no one else would have imagined it. Ever.
Re: (Score:1)
you're not funny. just stop trying.
Re: (Score:3)
Mine is hunter2. And I know it's safe because it looks like ******* to you.
Huh? (Score:4, Insightful)
What correlation between the plaintext and cyphertext are they talking about?
Also, I think there is a theorem about modern crypto systems that says if you can guess one bit, the rest doesn't get any easier.
Re:Huh? (Score:5, Interesting)
Any correlation between plain and cipher. For instance, if you can deduce that a particular string will occur at a particular point in the plaintext, then you can isolate the cipher equivalent and use that as a lever to break the rest of the ciphertext. You don't have to deduce it with certainty for this to be important, even if you have to try and discard a number of possible correlations before you find one that holds up.
This is a pretty basic old-school cryptographic method, kind of fun to think that fancy-pants mathematicians have been missing it all these years.
Re: (Score:3, Informative)
There is no "cipher equivalent", unless you're doing something stupid like using ECB mode. [wikipedia.org]
No modern encryption scheme works by simple one-to-one substitution; you use a nonce [wikipedia.org] or an IV [wikipedia.org] with a chaining mode so that even if the same plaintext appears several times, either in the same document or over multiple messages, it will "never" (negligible chance) encode to the same value twice.
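A toy demonstration of the difference. The "cipher" below is just a keyed hash truncated to 16 bytes, purely to show how the modes behave; it is not a real (or even invertible) block cipher:

```python
import hashlib

KEY = b"demo-key"

def toy_block_enc(block: bytes) -> bytes:
    # Stand-in for a block cipher: deterministic keyed transform of one block.
    return hashlib.sha256(KEY + block).digest()[:16]

def ecb(blocks):
    # ECB: each block encrypted independently.
    return [toy_block_enc(b) for b in blocks]

def cbc_like(blocks, iv: bytes):
    # Chaining: XOR each block with the previous ciphertext (or the IV) first.
    out, prev = [], iv
    for b in blocks:
        c = toy_block_enc(bytes(x ^ y for x, y in zip(b, prev)))
        out.append(c)
        prev = c
    return out

blocks = [b"A" * 16, b"A" * 16]          # the same plaintext block, twice
e = ecb(blocks)
c = cbc_like(blocks, iv=b"\x00" * 15 + b"\x01")

# ECB leaks the repetition; chaining with an IV hides it.
print(e[0] == e[1], c[0] == c[1])
```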
Re: (Score:2)
Short of a breakthrough in quantum computing, modern crypto is secure. If you are using AES-256 or anything else FIPS-certified, you are still going to be OK.
Re: (Score:2)
Shannon entropy and unicity distance [wikipedia.org] have more to do with provably unbreakable systems than practically unbreakable ones. Why is a one-time pad unbreakable (assuming a good RNG)? When can a shorter key be unbreakable? What's the minimum key length needed to make an ideal cypher unbreakable for a given plaintext? Why is compression before encryption so important, and exactly how important is it?
Purely academic questions like this are mocked by engineers in every field, but it's that sort of pure research that puts practical security on a firm footing.
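For a sense of the numbers involved, here is a back-of-the-envelope unicity-distance calculation in Python. The figures are the usual textbook assumptions: English carries roughly 1.5 bits/char of real information out of log2(26) ≈ 4.7 bits/char of raw capacity:

```python
import math

def unicity_distance(key_bits, raw_bits_per_char=math.log2(26),
                     info_bits_per_char=1.5):
    # U = H(K) / D: key entropy divided by per-character redundancy
    # of the plaintext language.
    redundancy = raw_bits_per_char - info_bits_per_char
    return key_bits / redundancy

print(round(unicity_distance(128)))   # ~40 characters of ciphertext
```

So for a 128-bit key and English plaintext, roughly 40 ciphertext characters suffice, in principle, to determine the key uniquely; below that, even unlimited computation can't decide among candidate keys. Compressing first shrinks the redundancy D and pushes the unicity distance out, which is the compression question above.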
That's why you shouldn't use plain text (Score:5, Funny)
Re:That's why you shouldn't use plain text (Score:4, Funny)
Re: (Score:2)
Even better: the old DOC format was partially binary, and partially executable.
Re: (Score:3)
Re: (Score:2)
Out of my field, but IIRC modern crypto systems aren't just substitutions that leave the cyphertext for a character in the same place as the plaintext. Everything gets scrambled all around.
Re: (Score:2)
There is a world of difference between practical breaks and theoretical ones. Of course there have been plenty of practical breaks as well. But at this point, this has not led to one, and I'm not really sure it would lead to better breaks.
Re: (Score:3)
Re: (Score:1)
Well actually: if you guessed one bit correctly and you knew it, you would have halved the search space. But maybe I just understood you wrong, so feel free to correct me. ;)
Re: (Score:2)
True, but narrowing 2^1000 possibilities for the plaintext down to 2^999 doesn't feel like a lot of progress.
Re: (Score:2)
Re:Huh? (Score:5, Informative)
As usual, the paper [arxiv.org] makes more sense than the press release, but is less grandiose in its claims.
It's a fairly technical result that finds some appeals to the asymptotic equipartition property [wikipedia.org] lead to too-strong claims, compared to a more precise analysis.
Re: (Score:2)
Thank you for the legwork.
I shall honour your work by ... well, RTFP-ing!
And ... it looks like "we told you not to do that; this is another way of saying 'don't do that'", where "that" is "using a plaintext with predictable contents".
And that is why, back in the early 1990s, the first Zimmermann distribution of PGP included a suggestion to use an efficient compression algorithm on a message (packet, whatever) before starting encryption; because that hammers out the redundancy.
Re:Huh? (Score:5, Funny)
Also, I think there is a theorem about modern crypto systems that says if you can guess one bit, the rest doesn't get any easier.
Nah, once you guess one bit, the only bit left is zero.
Interesting times (Score:4, Insightful)
There was also an article on Slashdot just over a week ago about a separate advance against RSA.
http://it.slashdot.org/story/13/08/06/2056239/math-advance-suggest-rsa-encryption-could-fall-within-5-years [slashdot.org]
A picture is emerging where not only are the tools available to the layman for protecting information difficult to use, their is a good chance that they also do not offer as much protection as we have long held them to provide.
Re: (Score:2)
their/there, before the gn's jump all over me for a typo ;)
Re:Interesting times (Score:4, Funny)
There, there - They're there.
Re: (Score:3)
FUD (Score:4, Interesting)
Re: (Score:3, Insightful)
With all due respect, "citation needed". The authors of the paper aren't FUDsters spewing soundbites for the media, they are presenting it at the International Symposium on Information Theory before their peers. I can't tell from the link whether the paper has been accepted by a peer-reviewed journal or whether it's still in review, so some skepticism might be called for before uncritically accepting the conclusions, but this is still a far cry from FUD.
I'd like to see something more than just a dismissive hand wave.
Re: (Score:2, Interesting)
Re: (Score:3)
This isn't dismissive hand wave. What they discovered is a marginal concern, especially when dealing with on-the-way-out algorithms (e.g. 3DES).
"Dismissive hand wave" refers to your terse dismissal and accusations of FUD while providing nothing more than personal opinion as evidence. If there is a basis for your assertions, prove it with links to actual proof that this is nothing.
Authors are FUDsters not because what they discovered is false, but because they are making huge deal out of it, and some illiterate CIOs within government circles listened and redirected resources to mitigate this non-issue.
You must be in the field, then, and have inside knowledge. You come across as someone who is offended by the behavior of attention-seeking scientific peers and are calling them out. Fine. But the MIT research article and the paper it describes don't support your claims.
Re: (Score:2)
Re: (Score:2)
So you think describing in incomprehensible math what boils down to a type of vocabulary attack, and then somehow concluding that our RNG isn't good enough (never mind the elephant in the room that your implementation+policy is vulnerable to such attack) is not FUD?
Yes, I don't think it is FUD. I may not think it is earth-shatteringly profound and proof that the sky is falling and cryptography is now broken forever, but reading the actual paper, they don't either.
"Incomprehensible math"? What? It's a math paper, written by mathematicians, presented at a MATH symposium. It's comprehensible to the authors and the audience at the symposium, regardless of whether non-mathematicians can comprehend it. You act as though this were an attempt by hucksters to confuse by g
Re: (Score:2)
Uncertainty = we just don't know how to quantify risks, because Step 3: Entropy!
Doubt = everything we know about cryptography is wrong, because Flawed Example!
I stand by my point that this paper, as far as practical cryptography goes, is FUD. I am willing to consider that it might be viewed differently through the lens of theoretical science.
Re: (Score:2)
It's quite unlikely the authors "are making huge deal out of it". Never, ever confuse the journalist writing about science with the scientist.
Re: (Score:2)
Re: (Score:3)
This is an absurd claim.
There is no such thing as "plaintext matching"; you're probably thinking of a CPA (chosen-plaintext attack). Things like nonces, CBC, and random IVs make such matching impossible.
Three words... (Score:2)
Cooty Rats Semen
(If you don't get it, you need to see: http://www.imdb.com/title/tt0105435/ [imdb.com] )
It's all about buying more time! (Score:1)
University of Ireland is gibberish (Score:3, Informative)
It is (as given on the paper) the "National University of Ireland, Maynooth" and NOT simply "University of Ireland". "The constituent universities are for all essential purposes independent universities, except that the degrees and diplomas are those of the National University of Ireland with its seat in Dublin". I'm from Ireland and had no clue WTF "University of Ireland" was going to be, and had it not been for the MIT connection I would have assumed it was one of those places you send a few dollars to get a "fake" degree. When and if it's truncated, you might see "NUI", "NUIM" or "NUI Maynooth".
Re: (Score:2)
You think it sounds confusing? Meh!
It took me about 6 clicks to get to http://www.nuim.ie/ [www.nuim.ie]
Mathematical skill does not require presence at a "major university" (though there is a strong correlation, distorted by mathematical geniuses who really do not give a shit about conventionality). Perelman (sp?), who recently proposed a proof of the Poincaré conjecture, is a case in point.
Common mistake. (Score:5, Interesting)
I remember reading in an ecology textbook about researchers who wanted to model reforestation after Mount St. Helens erupted. They used the average seed dispersion as input to their model, and found that actual reforestation occurred much, much faster than the model predicted.
Turns out the farthest flung seeds take root just as well as the average seed, and they grow and disperse seeds. And the farthest flung of those seeds grow and disperse seeds, compounding the disparity between average and extreme seed dispersion.
Just something to keep in mind when you're working with averages.
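A toy simulation of that effect. The dispersal model here is invented for illustration, not taken from the study; the point is only that a front driven by the farthest seed outruns one driven by the average seed:

```python
import random

random.seed(1)

def front_after(generations, seeds_per_gen=100, mean_dispersal=1.0):
    # Each generation, occupied sites throw seeds with heavy-tailed
    # (exponential) dispersal distances. Track two fronts: one advancing
    # by the AVERAGE seed (the flawed model), one by the FARTHEST seed
    # that takes root (what actually happens).
    front_avg = front_max = 0.0
    for _ in range(generations):
        dists = [random.expovariate(1 / mean_dispersal)
                 for _ in range(seeds_per_gen)]
        front_avg += sum(dists) / len(dists)
        front_max += max(dists)
    return front_avg, front_max

avg, mx = front_after(10)
# The extreme-driven front ends up several times ahead of the
# average-driven one, and the gap compounds every generation.
```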
Re: (Score:2)
So they forgot to take into account that the median seed had to compete with a bunch of other seeds, while the farthest seed didn't? Sounds like shoddy prediction work to me.
Compression first (Score:2)
Isn't this (one reason) why any good encryption system compresses what it is encrypting first? To maximize the data's entropy?
Re: (Score:2)
http://en.wikipedia.org/wiki/Entropy_(information_theory)#Data_compression [wikipedia.org]
If a compression scheme is lossless—that is, you can always recover the entire original message by decompressing—then a compressed message has the same quantity of information as the original, but communicated in fewer characters. That is, it has more information per character, or a higher entropy. This means a compressed message is more unpredictable, because there is no redundancy. Roughly speaking, Shannon's source coding theorem says that a lossless compression scheme cannot compress messages, on average, to have more than one bit of information per bit of message. The entropy of a message multiplied by the length of that message is a measure of how much information the message contains.
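A rough empirical check of the quoted point, using a deliberately redundant message; zlib stands in for any lossless compressor:

```python
import math
import zlib

def byte_entropy(data: bytes) -> float:
    # Per-byte Shannon entropy of a byte string.
    counts = {}
    for b in data:
        counts[b] = counts.get(b, 0) + 1
    n = len(data)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

msg = b"ab" * 2000            # maximally redundant: exactly 1 bit/byte
packed = zlib.compress(msg)   # lossless: the original is fully recoverable

# Same total information, fewer bytes, so more bits of information
# per byte in the compressed form.
print(len(msg), byte_entropy(msg))
print(len(packed), byte_entropy(packed))
```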
Known or chosen plaintext (Score:4, Informative)
How is this in principle different from the known plaintext attacks (https://en.wikipedia.org/wiki/Known-plaintext_attack [wikipedia.org])?
These assume that the attacker knows both the encrypted version of the text and the original it was based on, and tries to glean information from their correlation.
Modern ciphers are made resistant even to chosen-plaintext attacks, where the analyst can submit plaintexts of their own choosing and obtain the corresponding ciphertexts.
Re:Known or chosen plaintext (Score:4, Informative)
Imagine you have an algorithm that generates an n-bit secret key. First, it flips a random bit b. If b = 0, then it just outputs a string of n zeroes as the key. If b = 1, then it outputs n random bits. The Shannon entropy of this process is roughly n/2 bits, which might seem adequate, but cryptographically it is terrible because half the time it just uses a fixed key of all zeroes. Instead of Shannon entropy, cryptographers use a different measure called min-entropy, which is determined by the probability of the most likely outcome. So in the above case, the min-entropy would only be about one bit, which properly reflects how bad that algorithm is.
It's late, and I might be missing something, but it doesn't seem like anything that wasn't known before. Particularly, they talk about distributions with high entropy but which are not uniform, and in cryptography you always assume you have uniform randomness. It has been known for quite a while that many things, encryption included, are simply not possible without uniform randomness.
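Here is the parent's keygen example worked out numerically, with n = 8 to keep it small (the small extra mass on the all-zero key comes from the b = 1 branch also being able to pick it):

```python
import math

def shannon_entropy(probs):
    return -sum(p * math.log2(p) for p in probs if p > 0)

def min_entropy(probs):
    return -math.log2(max(probs))

n = 8
# With prob 1/2 output the all-zero key; otherwise pick one of the
# 2^n keys uniformly at random.
probs = [0.5 + 0.5 / 2**n] + [0.5 / 2**n] * (2**n - 1)

print(shannon_entropy(probs))  # ~4.98: roughly n/2 + 1 bits
print(min_entropy(probs))      # ~0.99: about one bit, as stated above
```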
Re: (Score:2)
Damn those information therrorists (Score:2)
We'd send our drones after them if they wouldn't hack them and send them back.
Can't they just eliminate the non-uniformity? (Score:1)
they found that slight deviations from perfect uniformity in source files, which seemed trivial in the light of Shannon entropy, suddenly loomed much larger
Okay, but can't they simply apply an xor mask to the plaintext to make it perfectly uniform, and then encrypt the masked version?
For example, let's say it turns out that iterating on the SHA512 function [SHA512(key), SHA512(SHA512(key)), etc.] yields an arbitrarily long xor mask that has perfect uniformity, and is statistically indistinguishable from a random sequence. You then apply that mask to the plaintext before encrypting it to destroy its non-uniformity. Wouldn't that be the fix?
Or is the problem something else entirely?
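A sketch of the masking scheme described above, purely as an illustration; as the reply notes, this amounts to rolling your own stream cipher rather than fixing the underlying issue:

```python
import hashlib

def sha512_mask(key: bytes, length: int) -> bytes:
    # Iterate SHA-512 on the key to get an arbitrarily long mask:
    # SHA512(key), SHA512(SHA512(key)), ...
    out, block = b"", key
    while len(out) < length:
        block = hashlib.sha512(block).digest()
        out += block
    return out[:length]

def xor_mask(data: bytes, key: bytes) -> bytes:
    # XOR the plaintext with the mask before handing it to the cipher.
    return bytes(d ^ m for d, m in zip(data, sha512_mask(key, len(data))))

masked = xor_mask(b"attack at dawn", b"secret")
```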
Re: (Score:1)
The point of this paper is that iterated SHA512, or any other cryptographic operation you care to name, doesn't have perfect uniformity, and those deviations are exactly the problem.
Whole-disk encryption a bad idea? (Score:3)
So what are you saying.... (Score:2)
'It's still exponentially hard, but it's exponentially easier than we thought,' Duffy says.
So, what, rather than a computer taking until the heat death of the universe to crack my 4096-bit key, it will only take until our Sun goes supernova?
brb, generating 8192 bit keys.
$10,000 offer (Score:2)
sekg 1408 drnh @$?" xxth bhg9 douche bag
hjmp llmo 3860 ++%# jjgj mmnm muggle
Easy fix (Score:2)
It is necessary to encrypt twice, using 2 different encryption methods. Then it will be impossible to find one reliable correlation.
Re: (Score:3)
Use 2 different encryption METHODS. (Score:2)
I suggested using 2 different un-related encryption methods. Because the 2nd method is entirely different from the first, MITM does not function. Using 2 different un-related encryption methods protects against other attacks, also.
Meet-in-the-middle applies to using the same encryption method two times, using different keys.
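For anyone curious what meet-in-the-middle actually looks like, here is a toy Python demo against double encryption, using a deliberately weak one-byte "cipher" (add-mod-256). The shape of the attack is the point, not the cipher:

```python
def enc(p, k):
    return (p + k) % 256

def dec(c, k):
    return (c - k) % 256

k1, k2 = 42, 200                       # the secret key pair
plain = 7
cipher = enc(enc(plain, k1), k2)       # double encryption

# Forward table: encrypt the known plaintext under every possible first key.
forward = {enc(plain, a): a for a in range(256)}
# Backward pass: decrypt the ciphertext under every possible second key and
# look for a middle value already in the table — ~2*256 work, not 256^2.
matches = [(forward[m], b) for b in range(256)
           if (m := dec(cipher, b)) in forward]
```

Note that one known pair still leaves many candidate key pairs (here every pair with a + b ≡ k1 + k2 survives); a second known pair filters down to the true keys. The work is on the order of 2·2^k instead of 2^(2k), which is why double encryption buys far less than a doubled key length.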
TrueCrypt can use 2 different encryption METHODS. (Score:2)
Re: (Score:2)
Re: (Score:2)
It applies to any two encryption methods. I don't know why you would think it has to be the same cipher twice.
So you're saying that, in Soviet Russia, you use ROT-26 followed with ROT-52 ?
(OK, I'm done beating every single resident bacterial cell in a dead horse to death)
Re: (Score:2)
2 METHODS makes successful attacks unlikely. (Score:2)
My understanding, which may be mistaken, is that MITM attacks, or any kind of attacks, on data that has been encrypted with two or more encryption methods are unlikely to be successful. Since the patterns of encryption are different, finding coincidences is unlikely.
I was unable to find good information. Can you tell me where to find useful research?
idiotic (Score:2)
Except they figured this out just in time for quantum computers to ruin all encryption.
Re:Key Size implications (Score:5, Interesting)
So, can someone clarify for me exactly what the implications of this are? Is this a lowering of the relevant exponent in the exponentially hard problem, meaning you should multiply your key sizes by some factor that perhaps the paper somehow could provide, or is it a constant factor meaning you should extend your keys by a fixed amount?
Either way, this is important news. I expect the details depend on the nature of the data in question, so there aren't easy answers. It's things like this that are the reason we use key sizes significantly larger than could be practically cracked today.
This might be news in mathematical circles, but this has been a known issue in cryptanalysis circles for years. It's even the basis for the smart card attacks performed by a German group in the mid-'90s. Shannon entropy theory is fine for its limited domain, but as soon as you start dealing with encryption-during-transit of values known to the attacker (plus timings and order of sequence), a LOT more has to be done to ensure high entropy of the metainformation too, and Shannon entropy doesn't account for that.
So in properly defined encryption systems, this isn't much of an issue. The problem arises when people shout "we use AES-256" or "we use SSL/TLS 2.0" (which have fine Shannon entropy) and yet handle that encrypted data in a way that exposes it to pattern analysis attack, whether encrypted or not.
Note that this is a separate issue from that of choosing a secure encryption key/keylength in the first place. It has more to do with how you're wrapping the unencrypted data and how random separate unencrypted data sets using the same key are.
The way I've always thought of it is: if the entropy source is truly random, then any meaningful data injected into it will impart a pattern into the randomness. This can be used to identify the data based on patterns discovered in the supposedly random data. Conversely, if the entropy source isn't truly random, it is possible to discover its pattern, extract that from the equation, and what you are left with is the data.
You still have to deal with the secret key in either case, but this makes building that key exponentially easier, given a known cleartext source and a collection of cleartext encrypted samples. The more encrypted samples of the known cleartext you've got, the simpler the decryption becomes.
Disregard... (Score:2, Funny)
Any sentence that starts with, "What if it is we..."
Re: (Score:2, Interesting)
Which god? Zeus? Odin? Quetzalcoatl? Given the differences between some people's definitions of what 'god' is, I am unconvinced of the 'all aspects of the one divinity' argument, so before we start playing 'what if' let's establish what you mean when you say 'God' and why we should accord that definition primacy over another.
The thought exercise you pose is little different from any of the form that posits a state of being where your senses are fooled so that you cannot perceive true reality: brain-in-a-vat and the like.
Re: (Score:2)
I'm glad to see you read comments the same way you read the bible: skimming through to find the bit you wanted to see, while ignoring the rest which would invalidate your point.
Re: (Score:2)
We are at a point in history and in society where people are using their "beliefs" to further their ends of oppressing people who are not attempting to do harm to anyone. We are at a point where we are expected to "respect" other's beliefs even when those beliefs run directly counter to what can be observed by the naked eye, even when the exercise of those beliefs would cause harm to those in the immediate vicinity.
Like I said, skimming through to find the part which agrees with the argument you wanted to make anyway.