General Staff Reconnaissance Unit (formerly Unit 269 or Unit 262), more commonly known as Sayeret Matkal (Hebrew: סיירת מטכ״ל), is the special reconnaissance unit (sayeret) of Israel's General Staff (matkal). It is considered one of the premier special forces units of Israel.
First and foremost a field intelligence-gathering unit that conducts deep reconnaissance behind enemy lines to obtain strategic intelligence, Sayeret Matkal is also tasked with a wide variety of special operations, including black operations, combat search and rescue, counterterrorism, hostage rescue, HUMINT, irregular warfare, long-range penetration, manhunts, and reconnaissance beyond Israel's borders. The unit is modeled after the British Army's Special Air Service (SAS), whose motto, "Who Dares Wins", it has adopted, and is regarded as the Israeli equivalent of the SAS. It is directly subordinate to the Special Operations Division of the IDF's Military Intelligence Directorate.
Sayeret Matkal veterans have gone on to hold high positions in Israel's military and political echelons; several have become IDF generals and members of the Knesset. Ehud Barak's career is an example: drafted in 1959, he succeeded Unit 101 commando Lt. Meir Har-Zion as Israel's most decorated soldier. While with Sayeret Matkal, Barak led Operation Isotope in 1972 and Operation Spring of Youth in 1973. He went on to serve as IDF Chief of Staff from 1991 to 1995, and in 1999 he became the 10th Prime Minister of Israel.
Lewin attended the Technion – Israel Institute of Technology in Haifa while simultaneously working at IBM's research laboratory in the city. At IBM, he was responsible for developing the Genesys system, a processor verification tool used widely within IBM and at other companies such as Advanced Micro Devices and SGS-Thomson.
Upon receiving a Bachelor of Arts and a Bachelor of Science, summa cum laude, in 1995, he traveled to Cambridge, Massachusetts, to begin graduate studies toward a Ph.D. at the Massachusetts Institute of Technology (MIT) in 1996. There, he and his advisor, Professor F. Thomson Leighton, developed consistent hashing, an innovative algorithm for optimizing Internet traffic. This algorithm became the basis for Akamai Technologies, which the two founded in 1998. Lewin was the company's chief technology officer and a board member, and he achieved great wealth during the height of the Internet boom.
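Consistent hashing can be sketched in a few lines. The following is a minimal, illustrative Python version of the idea, not Akamai's production algorithm: each server is hashed to many points on a circular hash space, and a key is assigned to the first server point found clockwise from the key's own hash, so adding or removing a server remaps only a small fraction of keys. The server names are hypothetical placeholders.

```python
import hashlib
from bisect import bisect


class ConsistentHashRing:
    """Minimal consistent-hashing sketch: map keys to servers so that
    adding or removing a server remaps only about 1/N of the keys."""

    def __init__(self, servers, replicas=100):
        # Give each server many points ("virtual nodes") on a circular
        # hash space so load spreads evenly across servers.
        self.ring = sorted(
            (self._hash(f"{server}#{i}"), server)
            for server in servers
            for i in range(replicas)
        )
        self._points = [point for point, _ in self.ring]

    @staticmethod
    def _hash(key):
        return int(hashlib.md5(key.encode()).hexdigest(), 16)

    def lookup(self, key):
        # Walk clockwise from the key's hash to the next server point,
        # wrapping around the end of the ring.
        index = bisect(self._points, self._hash(key)) % len(self.ring)
        return self.ring[index][1]


if __name__ == "__main__":
    # Hypothetical server names, loosely echoing the aNNN.akamaitech.net
    # hostnames that appear in rewritten FreeFlow URLs.
    ring = ConsistentHashRing([f"a{n}.akamaitech.net" for n in (941, 942, 943)])
    print(ring.lookup("/www.apple.com/home/media/menace_640qt4.mov"))
```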
Paul Sagan says that Danny could leave the company to finish his PhD and publish his thesis, but then they'd have to kill him. Everyone else at Akamai is encouraged to complete their academic work, a slew of them at MIT, but Danny - him they'd have to off. He knows too much.
Danny Lewin is an algorithms guy, and at Akamai Technologies, algorithms rule. After years of research, he and his adviser, Professor Tom Leighton, have designed a few that solve one of the direst problems holding back the growth of the Internet. This spring, Tom and Danny's seven-month-old company launched a service built on these secret formulas.
In January, Akamai began running beta versions of FreeFlow, serving content for ESPN.com, Paramount Pictures, Apple, and other high-volume clients. (Akamai withholds the names of the others, but you can tell if a site is using the service by viewing the page source and looking for akamaitech.net in the URLs. A cursory test reveals "Akamaized" content at Yahoo! and GeoCities.)
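That page-source check is easy to script. Here is a rough Python sketch, using the 1999 site list from the paragraph above; what these sites serve today will of course differ.

```python
from urllib.error import URLError
from urllib.request import urlopen

MARKER = "akamaitech.net"  # rewritten FreeFlow URLs point at this domain

for site in ("http://www.yahoo.com", "http://www.geocities.com"):
    try:
        html = urlopen(site, timeout=10).read().decode("utf-8", errors="replace")
    except (URLError, OSError) as err:
        print(f"{site}: unreachable ({err})")
        continue
    verdict = "Akamaized" if MARKER in html else f"no {MARKER} found"
    print(f"{site}: {verdict}")
```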
ESPN.com and Paramount have been good beta testers - ESPN.com because it requires frequent updates and is sensitive to region as well as time, and Paramount because it delivers a lot of pipe-hogging video. On March 11, while ESPN was covering the first day of NCAA hoops' March Madness, Paramount's Entertainment Tonight Online posted the second Phantom Menace trailer. FreeFlow handled up to 3,000 hits per second for the two sites - 250 million in total, many of them 25-Mbyte downloads of the trailer. But the system never exceeded even 1 percent of its capacity. In fact, as the download frenzy overwhelmed other sites, Akamai picked up the slack. Before long, Akamai became the exclusive distributor of all Phantom Menace QuickTimes, serving both of the official sites, starwars.com and apple.com.
So how does it work? Companies sign up for Akamai's FreeFlow, agreeing to pay according to the amount of their traffic. Then they run a simple utility to modify tags, and the Akamai network takes over. Throughout the site, the system rewrites the URLs of files, changing the links into variables to break the connection between domain and location. On www.apple.com, for example, the link www.apple.com/home/media/menace_640qt4.mov, specifying the 640 x 288 Phantom Menace QuickTime trailer, might be rewritten as a941.akamaitech.net/7/941/51/256097340036aa/www.apple.com/home/media/menace_640qt4.mov. Under standard protocols, a941.akamaitech.net would refer to a particular machine. But with Akamai's system, the address can resolve to any one of hundreds of servers, depending on current conditions and where you are on the Net. And it can resolve a different way for someone else - or even for you, a few seconds later. (The /7/941/51/256097340036aa in the URL is a fingerprint string used for authentication.) This new method is more complicated, but like modern navigation, it opens new vistas of capacity and commerce.
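As a rough illustration of the tag-rewriting step, here is a sketch of what such a utility might do. This is not the actual FreeFlow tool; the serial number and fingerprint string are placeholders copied from the example above rather than values a real tool would compute.

```python
import re


def akamaize(html, origin_host, serial="941",
             fingerprint="7/941/51/256097340036aa"):
    """Rewrite src/href links that point at origin_host so they point into
    the Akamai network, breaking the tie between domain and location."""
    pattern = re.compile(
        r'(?P<attr>src|href)="http://'
        + re.escape(origin_host)
        + r'(?P<path>/[^"]*)"'
    )

    def rewrite(match):
        return (
            f'{match.group("attr")}="http://a{serial}.akamaitech.net/'
            f'{fingerprint}/{origin_host}{match.group("path")}"'
        )

    return pattern.sub(rewrite, html)


if __name__ == "__main__":
    page = '<a href="http://www.apple.com/home/media/menace_640qt4.mov">trailer</a>'
    print(akamaize(page, "www.apple.com"))
```

The interesting part is not the rewriting itself but what happens afterward: under ordinary DNS the rewritten hostname would name one machine, whereas Akamai's name servers answer that lookup differently depending on network conditions and on where the request comes from.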
In some ways, sending information around the traditional Internet resembles human transport, pre-Phoenicia. The Net was originally designed like a series of roads connecting distinct sources of content. Different servers, physical hardware, specialized in their own individual data domains. As first conceived, an address like nasa.gov would always correspond to dedicated servers located at a NASA facility. When you visited www.ksc.nasa.gov to see a shuttle launch, you connected to NASA's servers at Kennedy Space Center, just as you traveled to Tivoli for travertine marble instead of picking it up at your local port. When you ran a site, your servers and only your servers delivered its content.
This routing system worked fine for years, but as users move to fatter pipes, like DSL and broadband cable, and as event-driven supersites emerge, the protocols tying information to location cause a bottleneck. Back when The Starr Report was posted, Congress' servers couldn't keep up with hungry surfers. When Victoria's Secret ran its Super Bowl ad last February, similar lusts went unsated. The Heaven's Gate site in 1997 quickly followed its cult members into oblivion. And when The Phantom Menace trailers hit the Web this spring, a couple of sites distributing them went down.
This is the "hot spot" problem: When too many people visit a site, the excessive load heats it up like an overloaded circuit and causes a meltdown. Just as something on the Net gets interesting, access to it fails.
For more time-critical applications, the stakes are higher. When the stock market lurches and online traders go berserk, brokerage sites can hardly afford to buckle. In retail, slow responses will send impatient customers clicking over to the competition. Users may have Pentium IIIs and ISDN lines, but when a site can't keep up with demand, they feel like they're on a slow dialup. And users on relatively remote parts of the network - even tech hubs like Singapore - often suffer slow responses, not just during peak traffic.
ISPs address this problem by adding connections, expanding capacity, and running server farms to host client sites on many machines, but this still leaves content clustered in one place on the network. Publishers can mirror sites at multiple hosting companies, helping to spread out traffic, but this means duplicating everything everywhere, even the files no one wants. A third remedy, caching, temporarily stores copies of popular files on servers closer to the user, but out of the original site's control. Naturally, site publishers don't like this - it delivers stale content, preserves errors, and skews usage stats. In other words, massive landlock.
So in 1998, with their new algorithms in hand, Tom Leighton and Danny Lewin found themselves facing a sort of manifest destiny. The Web's largest sites were straining to meet demand - and frequently failing. Most needed better traffic handling, a way to cool down hot spots and speed content delivery overall. And Tom and Danny had conceived a solution, a grand-scale alternative to the Net's routing system.
According to recorded FAA information, when the hijackers attacked one of the flight attendants, Lewin rose to protect her and to prevent the terrorists from entering the cockpit. After being stabbed, he bled to death on the floor; two other flight attendants and the captain were also murdered. The hijackers took over the cockpit and diverted the plane on its murderous path to New York.
“I’m sure he acted out of pure instinct,” said Jonathan.
“To this day, those of us who knew him well can’t figure out how only five terrorists managed to overpower him,” said Leighton less than a year after the attack.
His friends said they didn’t think being stabbed would stop him.
Flight attendants who contacted airline officials from the plane reported that Lewin’s throat had been slashed, probably by the terrorist sitting behind him.