What is a General Purpose Computer, and why is that important? The idea of a Universal Machine, to most people, seems grandiose. Like something from science fiction. Perhaps "universal" overstates the case. Computers cannot provide light, energy, food or medicine, nor can they transport matter. Yet they can help with all those things. They do so by a process known as computing, which transforms symbols, usually taken to be numbers. We can express problems as questions in these symbols, and get answers.
The universal computer was a theoretical creation of mathematicians, going back to Muhammad ibn Musa al-Khwarizmi, the 9th-century scholar whose name gave us the word "algorithm", and most famously Charles Babbage and then Alan Turing. Of those, only Turing lived to see a working computer created. In the 1940s John von Neumann described a practical stored-program architecture, and by the 1950s digital computers had become a reality.
It was Ada Lovelace who offered the first prophetic insights into the social power of programming as a language for creativity. She imagined art, music, and thinking machines built from pure code, by anyone who could understand mathematics. With a general purpose computer anybody can create an application program. That is their extraordinarily progressive, enabling power. Apps, being made of invisible words, out of pure language, are like poems or songs. Anybody can make and share a poem, a recipe, or an idea, and the same is true of a computer program. This is the beauty and passion at the heart of the hacker culture.
General purpose computers enable unparalleled freedom and opportunity for those wanting to build common resources. The story of computing after about 2000 is the tale of how commercial interests tried to undermine the prevalence of the general purpose computer.
Smartphones occupy a state between general purpose computers and appliances, a condition that could be seen as degenerate. Software engineers have condemned, for half a century, the poor modularity, chaotic coupling, lack of cohesion, side effects and data leaks that are the ugly symptoms of such a technological chimera. As we now know, smartphones, being neither general purpose computers over which the user has authority, nor functionally stable appliances, bring a cavalcade of security holes, opaque behaviours, backdoors and other faults typical of machinery built according to ad-hoc design and a celebration of perversely tangled complexity.
Smartphones were originally designed around powerful general purpose computers. The cost of general use microprocessors had fallen so far that it was more economical to take an off-the-shelf computer and add a telephone to it than to design a sophisticated handset as a dedicated appliance (built around an ASIC, an application-specific integrated circuit) from scratch. But the profit margins on hardware are small. Smartphones needed to become appliance-like platforms for selling apps and content. To do so it was necessary to cripple their general purpose capabilities, lest users retain too much control.
This crippling process occurs for several reasons, ostensibly sold as "security". Security of the user from bad hackers is one perspective. Security of the vendor and carrier from the user is the other. We have shifted from valuing the former to having the latter imposed on us. In this sense mobile cybersecurity is a zero-sum game: what the vendor gains, the user loses. In order to secure the vendor's right to extract rent, the user's freedoms must be taken away.
Recently, developer communities have been busy policing language in order to expunge the word 'slave' from software source code. Meanwhile, slavery is precisely what a lot of software is itself enabling.
Under the euphemism of 'software as service', each device and application has become a satellite of its manufacturers' network, intruding into the owner's personal and digital space. Even in open source software, such as the sound editor Audacity, developers have become so entitled, lazy and unable to ship a working product that they alienated their userbase by foisting "telemetry" (a euphemism for undeclared or non-consensual data extraction and updates) on the program.
The second pillar of slavery is encrypted links that benefit the vendor. Encryption hides the meaning of communications. Normally we consider encryption to be a benefit, such as when we want to talk privately. But it can be turned to nefarious ends if the user does not hold the key. This same design is used in malware. Encryption is turned against the user when it is used to send secret messages to your phone that control or change its behaviour, and you are unable to know about them.
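To make that asymmetry concrete, here is a minimal sketch (assuming the third-party Python cryptography package is available, and with a purely hypothetical command string): the vendor, holding the key, can push an authenticated, encrypted instruction to a device, while the owner, holding no key, can neither read it nor forge one.

```python
# Minimal sketch of a vendor-keyed control channel. Assumes the 'cryptography'
# package; the command string and key handling are hypothetical illustrations.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

vendor_key = AESGCM.generate_key(bit_length=256)   # known to vendor and device firmware only
nonce = os.urandom(12)
command = b"disable_feature:sideloading"           # hypothetical control message

ciphertext = AESGCM(vendor_key).encrypt(nonce, command, None)

# The device's owner, lacking the key, cannot inspect or tamper with the message:
owner_guess = AESGCM.generate_key(bit_length=256)
try:
    AESGCM(owner_guess).decrypt(nonce, ciphertext, None)
except Exception as exc:                           # the library rejects the wrong key
    print("owner cannot read the command:", type(exc).__name__)

# The device firmware, shipped with vendor_key baked in, can:
print(AESGCM(vendor_key).decrypt(nonce, ciphertext, None))
```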
The manufacturer will tell you that these 'features' are for your protection. But once an encrypted link, to which you have no key, is established between your computer and a manufacturer's server, you relinquish all control over the application and most likely your whole device. There is no way for you, or a security expert, to verify its good behaviour. All trust is given over to the manufacturer. Without irony this is called Trusted Computing when it is embedded into the hardware so that you cannot even delete it from your "own" device. You have no real ownership of such devices, other than physical possession. Think of them as rented appliances.
The third mechanism for enslavement is open-ended contracts. Traditionally, a contract comprises established legal steps such as invitation, offer, acceptance and so forth. The written part of a contract that most of us think of, the part that is 'signed', is an agreement to some fixed, accepted terms of exchange. Modern technology contracts are nothing like this. For thirty years corporations have been pushing the boundaries of contract law, to the point that their agreements are unrecognisable as 'contracts'.
AMD, Intel and NVIDIA all make money by ultimately selling someone a chip. ARM’s revenue comes entirely from IP licensing. It’s up to ARM’s licensees/partners/customers to actually build and sell the chip. ARM’s revenue structure is understandably very different than what we’re used to.
There are two amounts that all ARM licensees have to pay: an upfront license fee, and a royalty. There are a bunch of other adders with things like support, but for the purposes of our discussions we’ll focus on these big two.
Everyone pays an upfront license fee and everyone pays a royalty. The amount of these two is what varies depending on the type of license.
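As a rough illustration of that two-part structure, the sketch below adds the upfront fee and the per-chip royalty together. Every figure is a hypothetical placeholder, not an actual ARM term.

```python
# Back-of-envelope sketch of the licensing model described above.
# All numbers are hypothetical placeholders, not real ARM terms.
def licensee_cost(upfront_fee, royalty_rate, average_selling_price, units_shipped):
    """Total paid to the IP vendor: one-off licence fee plus a royalty
    charged as a percentage of each chip's selling price."""
    royalties = royalty_rate * average_selling_price * units_shipped
    return upfront_fee + royalties

# Example: $5M upfront, 2% royalty on a $20 chip, 10 million units shipped.
total = licensee_cost(5_000_000, 0.02, 20.0, 10_000_000)
print(f"Total paid to the IP vendor: ${total:,.0f}")   # $9,000,000
```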
Microsoft has released a security update that patches a backdoor in the Windows RT operating system which allowed users to install non-Redmond-approved operating systems like Linux and Android on Windows RT tablets. This vulnerability in ARM-powered, locked-down Windows devices was left by Redmond programmers during the development process. Exploiting this flaw, one was able to boot operating systems of one's choice, including Android or GNU/Linux.
Over the last decade, Intel has been including a tiny little microcontroller inside their CPUs. This microcontroller is connected to everything, and can shuttle data between your hard drive and your network adapter. It’s always on, even when the rest of your computer is off, and with the right software, you can wake it up over a network connection. Parts of this spy chip were included in the silicon at the behest of the NSA. In short, if you were designing a piece of hardware to spy on everyone using an Intel-branded computer, you would come up with something like the Intel Management Engine.
Intel’s Management Engine is only a small part of a collection of tools, hardware, and software hidden deep inside some of the latest Intel CPUs. These chips and software first appeared in the early 2000s as Trusted Platform Modules. These small crypto chips formed the root of ‘trust’ on a computer. If the TPM could be trusted, the entire computer could be trusted. Then came Active Management Technology, a set of embedded processors for Ethernet controllers. The idea behind this system was to allow for provisioning of laptops in corporate environments. Over the years, a few more bits of hardware were added to CPUs. This became the Intel Management Engine, a small system that is connected to every peripheral in a computer. The Intel ME is connected to the network interface, and it’s connected to storage. The Intel ME is still on, even when your computer is off. Theoretically, if you type on a keyboard connected to a powered-down computer, the Intel ME can send those keypresses off to servers unknown.
How do you turn the entire thing off?
Unfortunately, you can’t. A computer without valid ME firmware shuts itself off after thirty minutes.
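For the curious, here is a rough sketch of checking whether a Linux machine exposes the ME's host interface to the operating system. It assumes a Linux system with lspci available; finding nothing here only means the interface is not visible to the OS, not that the ME is absent.

```python
# Rough check for the OS-visible side of the Intel ME on a Linux machine.
# Assumes lspci is installed; absence of output does not prove the ME is absent.
import glob
import subprocess

mei_nodes = glob.glob("/dev/mei*")          # device nodes created by the MEI kernel driver
print("MEI device nodes:", mei_nodes or "none visible")

lspci = subprocess.run(["lspci"], capture_output=True, text=True)
for line in lspci.stdout.splitlines():
    if "Management Engine" in line or "HECI" in line or "MEI" in line:
        print("PCI device:", line)
```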
The Unified Extensible Firmware Interface (UEFI) is a publicly available specification that defines a software interface between an operating system and platform firmware. UEFI replaces the legacy Basic Input/Output System (BIOS) firmware interface originally present in all IBM PC-compatible personal computers, with most UEFI firmware implementations providing support for legacy BIOS services. UEFI can support remote diagnostics and repair of computers, even with no operating system installed.
Since UEFI's first version (2.0, released in 2006), it has supported the use of digital signatures to ensure that UEFI drivers and UEFI programs are not tampered with. In version 2.2, released in 2008, digital signature support was extended so that operating system loaders—the pieces of code supplied by operating system developers to actually load and start an operating system—could also be signed.
The digital signature mechanism uses standard public key infrastructure technology. The UEFI firmware stores one or more trusted certificates. Signed software (whether it be a driver, a UEFI program, or an operating system loader) must have a signature that can be traced back to one of these trusted certificates. If there is no signature at all, if the signature is faulty, or if the signature does not correspond to any of the certificates, the system will refuse to boot.
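In outline, the decision the firmware makes looks something like the sketch below. The image object and its helper methods are placeholders for illustration only; real firmware verifies Authenticode signatures on PE/COFF binaries against the platform's signature databases.

```python
# Simplified sketch of the Secure Boot decision described above.
# The 'image' object and its helpers are illustrative placeholders,
# not a real UEFI interface.
def firmware_allows_boot(image, trusted_certificates):
    if image.signature is None:
        return False                      # no signature at all: refuse to boot
    if not image.signature.is_well_formed():
        return False                      # faulty signature: refuse to boot
    for cert in trusted_certificates:     # the firmware's store of trusted certs
        if image.signature.chains_to(cert):
            return True                   # traceable to a trusted certificate
    return False                          # signed, but by no one the firmware trusts
```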
What certificates are in that list? Per the Microsoft documentation for running Windows on the device, there should be at least two:
Microsoft Windows Production PCA 2011. The Windows bootloader (bootmgr.efi) is signed using this, so this is what allows Windows (and Windows PE) to run.
Microsoft Corporation UEFI CA 2011. This one is used by Microsoft to sign non-Microsoft UEFI boot loaders, such as those used to load Linux or other operating systems. Technically, it’s described as “optional” but it would be unusual to find a device that doesn’t include it. (Windows RT devices, if you remember those, did not include this cert or any others, so as a result, it only ran Windows RT. We shall see what certs are included on Windows 10x devices…)
Who controls what can run on the device when Secure Boot is enabled? (As long as you can turn off Secure Boot, you always have the final authority.) As you can see from the above, it’s Microsoft and the OEMs. And since you probably don’t want to have to have an OEM- or model-specific boot loader, that effectively means it’s Microsoft. Only Windows binaries get signed with the “Windows Production” certificate, but anyone can get code signed using the “UEFI” certificate — if you pass the Microsoft requirements. You can get an idea of what those requirements entail from this blog post. It’s a non-trivial process because you are effectively having your code reviewed by Microsoft.
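On a Linux system you can see which side of that fence you are on by reading the SecureBoot variable through efivarfs. This sketch assumes the usual efivarfs mount point and the standard EFI global-variable GUID; the first four bytes of the file hold the variable's attributes, and the payload follows.

```python
# Sketch: read the Secure Boot state on Linux via efivarfs.
# Assumes efivarfs is mounted at the usual path and the variable uses the
# standard EFI global-variable GUID.
from pathlib import Path

var = Path("/sys/firmware/efi/efivars/"
           "SecureBoot-8be4df61-93ca-11d2-aa0d-00e098032b8c")

if not var.exists():
    print("No SecureBoot variable (legacy BIOS boot, or efivarfs not mounted).")
else:
    payload = var.read_bytes()[4:]        # skip the 4-byte attributes header
    print("Secure Boot is", "enabled" if payload and payload[0] == 1 else "disabled")
```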
If you want to parse PE binaries, go right ahead. If Red Hat wants to deep-throat Microsoft, that's *your* issue. That has nothing what-so-ever to do with the kernel I maintain. It's trivial for you guys to have a signing machine that parses the PE binary, verifies the signatures, and signs the resulting keys with your own key. You already wrote the code, for chrissake, it's in that #ing pull request.
Why should *I* care? Why should the kernel care about some idiotic "we only sign PE binaries" stupidity? We support X.509, which is the standard for signing.
Do this in user land on a trusted machine. There is zero excuse for doing it in the kernel.
Linus
Trusted Platform Module (TPM, also known as ISO/IEC 11889) is an international standard for a secure cryptoprocessor, a dedicated microcontroller designed to secure hardware through integrated cryptographic keys.
The concerns include the abuse of remote validation of software (where the manufacturer, and not the user who owns the computer system, decides what software is allowed to run) and the possibility that actions taken by the user are tracked and recorded in a database, in a manner that is completely undetectable to the user.
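The mechanism behind that remote validation is the "extend" operation on the TPM's platform configuration registers (PCRs). The sketch below reproduces the arithmetic in plain Python purely as an illustration; it is not a TPM interface.

```python
# Illustration of the PCR "extend" operation behind measured boot and remote
# attestation. This mirrors the arithmetic of a SHA-256 PCR bank; it is not
# an actual TPM interface.
import hashlib

PCR_SIZE = 32                              # a SHA-256 PCR is 32 bytes

def extend(pcr_value: bytes, measurement: bytes) -> bytes:
    """new_PCR = SHA-256(old_PCR || SHA-256(measurement))"""
    digest = hashlib.sha256(measurement).digest()
    return hashlib.sha256(pcr_value + digest).digest()

# Boot starts with the PCR at all zeroes; each component is measured in turn.
pcr = bytes(PCR_SIZE)
for component in [b"firmware image", b"boot loader", b"kernel"]:
    pcr = extend(pcr, component)

print("final PCR value:", pcr.hex())
# Because each value depends on every measurement before it, a remote verifier
# holding the expected value can tell whether anything in the chain changed,
# and can refuse service to software it does not approve of.
```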
originally posted by: MykeNukem
a reply to: dug88
Interesting breakdown of the evolution of PC architecture.
Basically, they had to figure out a way to get around that nasty Open Source OS and software.
By adding Remote Access via the firmware level, presto.
Also, by cryptographically sealing all processes, only their "key" can unlock it all. So, hardware can't interact without being "Trusted".
I was in IT back at Y2K.
Was the good ole days before IOT when we could still unplug.
Great Thread. SnF
originally posted by: Gothmog
originally posted by: MykeNukem
a reply to: dug88
Interesting breakdown of the evolution of PC architecture.
Basically, they had to figure out a way to get around that nasty Open Source OS and software.
By adding Remote Access via the firmware level, presto.
Also, by cryptographically sealing all processes, only their "key" can unlock it all. So, hardware can't interact without being "Trusted".
I was in IT back at Y2K.
Was the good ole days before IOT when we could still unplug.
Great Thread. SnF
1) They have already had access since the early 80s (per some).
2) "By adding Remote Access via the firmware level, presto." Where? You can update the BIOS/UEFI over the network without an OS. Easy thing to turn that off, though.
3) "Also, by cryptographically sealing all processes, only their "key" can unlock it all. So, hardware can't interact without being "Trusted"." Trusted computing (or, technically, TPM) doesn't work that way.
4) "I was in IT back at Y2K." I was "IT" back in the early 80s, and was called in to ride Y2K through.
1. Ok. They still have it.
2. If it's enabled, it's enabled. It's enabled by default on most business units, no?
3. It basically works that way. If you're not "Trusted" you will at the very least lose access to certain services.
Not trying to see who's the bestest.
originally posted by: Gothmog
a reply to: MykeNukem
1. Ok. They still have it.
Whether they ever had it is debatable.
2. If it's enabled, it's enabled. It's enabled by default on most business units, no?
No. That is not correct.
3. It basically works that way. If you're not "Trusted" you will at the very least lose access to certain services.
No. I am using PCs with TPM enabled and TPM disabled. No difference whatsoever.
The reason for the uproar is that the developer version of Windows 11 cannot be installed without TPM. TPM 2.0 came out in 2014. If one has a system that cannot be TPM 2.0 enabled, one does not need to be trying to run Windows 11.
No more "LEGACY"; that is sooooooo 2000.
Not trying to see who's the bestest.
Me neither.
Yet, I am attempting to say "I know", as I HAVE to know.
Part of the job for the last 30 years.
Trusted Platform Module (TPM, also known as ISO/IEC 11889) is an international standard for a secure cryptoprocessor, a dedicated microcontroller designed to secure hardware through integrated cryptographic keys.
2. From what I've seen, Network Boot is enabled by default as the 3rd or 4th option in the Boot List. Maybe that's changed. That would give you Remote Access for sure.
3. I was assuming we are talking about "enabled", which, if the system detects a hardware change, will possibly disable certain driver functions until updated.
originally posted by: Gothmog
a reply to: MykeNukem
2. From what I've seen, Network Boot is enabled by default as the 3rd or 4th option in the Boot List. Maybe that's changed. That would give you Remote Access for sure.
Network boot is enabled by default in the BIOS/UEFI and has been for a long, long time on most every PC device (way down in the list).
This is for SANs, or anyone booting from a server (PXE boot), which has to be specifically set up or it doesn't work at all. No use at all for the normal user.
It does come in handy if one is setting up 100s or more machines at one time. Set up a boot server loaded with an image, and one can do them all in one go.
3. I was assuming we are talking about "enabled", which, if the system detects a hardware change, will possibly disable certain driver functions until updated.
No. You assumed wrong (at least as of now and in the immediate future).
originally posted by: Havamal
Sorry. My expertise is with larger and more complex computer systems. I always considered Windows a "toy" system for home users. It still is.
Windows, at its base, is very primitive. Compare it to IBM systems of the 1960s, which already had virtual machines; Windows is only just getting there. IBM System/360, 1964.
Ahem. But let the kids have fun with their toys.