SSDs the quiet future of HDDs?

Silencing hard drives, optical drives and other storage devices

Moderators: NeilBlanchard, Ralf Hutter, sthayashi, Lawrence Lee

krille
Posts: 357
Joined: Thu Mar 23, 2006 4:56 am
Location: Sweden

SSDs the quiet future of HDDs?

Post by krille » Sat Apr 29, 2006 3:37 am

SSD = solid-state disk drive, of course. I was wondering if anyone knew anything more about these? I suppose they're as quiet as any other solid-state flash-memory based device (memory cards) or Gigabyte's non-solid-state (liquid-state? :lol:) I-RAM, though I don't know for sure... just the same, a very interesting article.
http://www.digitalworldtokyo.com/2006/0 ... h_only.php

I imagine performance should be quite high on these SSDs as well, with the obvious disadvantages being capacity and price (capacity/price ratio, if you will). But in time I hope this technology will become a reality for the average silent PC enthusiast. :D
The SSD technology has three major benefits over hard drives, said Yun. The first is that data access is faster. This could be seen when the SSD-based laptop was booted up alongside the same model machine with a standard hard drive. The desktop appeared on the screen of the SSD laptop in about 18 seconds while the hard drive-based computer took about 31 seconds to reach the same point.

The second advantage comes in durability. Because there are no moving parts in the SSD it is much better at withstanding shock and unlikely that data will be lost if the laptop is dropped. The third major advantage is that it works silently, said Yun.

But for all these advantages there is a major hurdle that needs to be overcome before SSD can reach mass market — price. Flash memory costs around $30 per gigabyte so the memory needed for the 32GB drive works out to about $960, before any other costs are taken into account.
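The article's pricing works out as quoted; a trivial sanity check, using only the figures from the quote above:

```python
# Quick check of the article's flash pricing (figures from the quote above)
price_per_gb_usd = 30   # flash cost per gigabyte, per the article
capacity_gb = 32        # size of the SSD in the demo laptop

memory_cost_usd = price_per_gb_usd * capacity_gb
print(memory_cost_usd)  # 960 USD, before any other costs
```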
What do you guys think?

~ Kris

Sorry if this has already been posted before, since it's over a month old.

Felger Carbon
Posts: 2049
Joined: Thu Dec 15, 2005 11:06 am
Location: Klamath Falls, OR

Re: SSDs the quiet future of HDDs?

Post by Felger Carbon » Sat Apr 29, 2006 10:35 am

krille wrote:What do you guys think?
I think that both iron-oxide rotating memory and semiconductor memory will continue to increase capacity for given cost, and at about the same rate.

Twelve years ago, were there any hard drives for home computers with as much as the 4GB capacity of the Gigabyte drive? :)

Today you can buy 300GB of rotating iron rust for ~$100. In not too many years, SSDs of similar capacity will be available for a similar price. Nobody will buy them, because hard drives will be into dozens of terabytes by then. :?

frostedflakes
Posts: 1608
Joined: Tue Jan 04, 2005 4:02 pm
Location: United States

Post by frostedflakes » Sat Apr 29, 2006 10:58 am

Flash still has a long way to go before it becomes practical for the average user. Unless there is some kind of revolutionary development in solid-state storage, I doubt they'll ever replace hard drives. I could see maybe 10 years down the road, though, computers coming with a small flash drive for OS, system files, etc., and a large secondary non-SS drive for multimedia storage.

Has anybody heard anything recently about Samsung's hybrid drives? They have a lot of the advantages of a pure SSD, but are MUCH more affordable.

~El~Jefe~
Friend of SPCR
Posts: 2887
Joined: Mon Feb 28, 2005 4:21 pm
Location: New York City zzzz
Contact:

Post by ~El~Jefe~ » Sat Apr 29, 2006 11:41 am

flash doesn't last as long as a rotating disk.

I truly believe that the rotating disk easily has 10 more years of lifespan.

I agree, when you can get a 10 terabyte spinning disk, you really don't want to try and go flash.

We should have 40 gig flash drives as boot drives, though. This would be neato. Boot and game boot, etc, something that never gets written to, just goes super fast, direct to the bus. Like a special PCI-E port that goes full speed.

IsaacKuo
Posts: 1705
Joined: Fri Jan 23, 2004 7:50 am
Location: Baton Rouge, Louisiana

Post by IsaacKuo » Sat Apr 29, 2006 12:14 pm

frostedflakes wrote:Has anybody heard anything recently about Samsung's hybrid drives? Has a lot of the advantages of a pure-SSD, but MUCH more affordable.
I feel about this the same way I feel about hard drive caches and SATA's feature where the hard drive can reorder requests. This is all stuff which should be done IN SOFTWARE, adding ZERO HARDWARE COST.

Seriously.

frostedflakes
Posts: 1608
Joined: Tue Jan 04, 2005 4:02 pm
Location: United States

Post by frostedflakes » Sat Apr 29, 2006 12:34 pm

What do you mean by in software? Caching hard drive data in existing memory (i.e. RAM), as opposed to adding more to the hard drive?

highlandsun
Posts: 139
Joined: Thu Nov 10, 2005 2:04 am
Location: Los Angeles, CA
Contact:

Post by highlandsun » Sat Apr 29, 2006 1:34 pm

~El~Jefe~ wrote:flash doesn't last as long as a rotating disk.

I truly believe that the rotating disk easily has 10 more years of lifespan.

I agree, when you can get a 10 terabyte spinning disk, you really don't want to try and go flash.

We should have 40 gig flash drives as boot drives, though. This would be neato. Boot and game boot, etc, something that never gets written to, just goes super fast, direct to the bus. Like a special PCI-E port that goes full speed.
According to this guy from Maxtor http://forums.storagereview.net/index.p ... t&p=201267
the error rate for disks goes up significantly at around 5 years.

We shouldn't even have "boot drives" in the normal sense. My old Atari ST had its entire OS in ROM, just turn the thing on and the desktop is running. Hard to believe that with two decades of technological "progress" we've only managed to *slow boot times down* by a factor of 30 or so, while also exposing ourselves to viruses of all kinds. (With an OS in ROM, obviously viruses cannot alter system code, so they can't propagate themselves as easily.)

The PCMCIA standard allowed for memory devices that supported two interfaces - the IDE/ATAPI interface, as well as a raw memory interface. With the memory interface you could just execute code directly off the card, essentially the same idea as a plug-in ROM cartridge. If you want to solve the slow boot-time problem, this is the direction to explore.

Booting a computer by loading file after file off a rotating media filesystem is always going to be slow. The way to speed it up is to snapshot the OS memory once bootup is completed, and stash the snapshot onto your nonvolatile memory device. The next time the system boots, you can either run the OS directly off the external memory, or copy it verbatim into RAM and run it there. The memory-to-memory copy will still be quicker than paging data off disk, doing symbol resolution and address relocation, etc... When you install patches or OS upgrades onto the filesystem, you'll have to set a flag telling the system to generate a new snapshot on next boot, no big deal.

You can even get clever and set a write-protect bit on the snapshot, so that viruses etc. can't inject themselves into the OS without your knowledge. Trivially easy stuff that the little-guy innovators were doing 20 years ago, and that Microsoft still doesn't understand today. And so, the state of the art in computer science is retarded even further...
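The snapshot-and-flag scheme described above fits in a few lines. This is a toy model only: the class and key names are made up for illustration, a dict stands in for the nonvolatile device, and a checksum stands in for the write-protect bit.

```python
# Toy model of the snapshot-boot idea: slow file-by-file boot only when
# needed, otherwise a verbatim copy of a saved RAM image. All names here
# are illustrative; no real boot loader or firmware API is implied.
import hashlib

class SnapshotBoot:
    def __init__(self, nvram):
        self.nvram = nvram                # dict standing in for nonvolatile memory

    def boot(self, cold_boot_fn):
        if self.nvram.get("regenerate", True) or "snapshot" not in self.nvram:
            ram_image = cold_boot_fn()    # slow path: full file-by-file boot
            self.nvram["snapshot"] = bytes(ram_image)
            self.nvram["checksum"] = hashlib.sha256(ram_image).hexdigest()
            self.nvram["regenerate"] = False
            return ram_image
        snap = self.nvram["snapshot"]
        # "write-protect" check: refuse a snapshot altered outside an update
        if hashlib.sha256(snap).hexdigest() != self.nvram["checksum"]:
            raise RuntimeError("snapshot modified outside an OS update")
        return bytearray(snap)            # fast path: verbatim copy into RAM

    def os_updated(self):
        # the installer sets the flag; the next boot regenerates the snapshot
        self.nvram["regenerate"] = True
```

On the second and later boots the cold-boot function is never called; after `os_updated()` the next boot rebuilds the image once, then goes back to the fast path.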

mellon
Posts: 105
Joined: Thu Apr 21, 2005 12:17 am
Location: Helsinki, Finland

Post by mellon » Sat Apr 29, 2006 2:00 pm

SSD is technologically already here: you can get 128GB in a 2.5" form factor (a bit thicker than usual, though) with an IDE interface, a 5-year warranty and pretty nice read/write speeds: http://www.m-sys.com/site/en-US/Product ... ra_ATA.htm Higher capacities can be found in the SCSI lineup.

The cost is unbelievably high, though. Maybe it's mostly due to market differentiation; I guess it is more profitable to sell those drives to military customers for extremely high prices than it would be to slash the price by 50% and try to approach some middle market like automotive applications.

IsaacKuo
Posts: 1705
Joined: Fri Jan 23, 2004 7:50 am
Location: Baton Rouge, Louisiana

Post by IsaacKuo » Sun Apr 30, 2006 2:46 am

frostedflakes wrote:What do you mean by in software? Caching hard drive data in existing memory (i.e. RAM), as opposed to adding more to the hard drive?
Yes. The basic concept of saving up changes in solid state memory so that the laptop drive is only spun up briefly is good, but it seems terribly wasteful to me to add specialized hardware to the hard drive to accomplish this. The concept can and should be accomplished by a change to OS software, so every user benefits at no additional hardware cost.
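A minimal sketch of that idea, assuming nothing about any real OS or drive interface: writes are absorbed in RAM, and the (simulated) disk only "spins up" once per batch instead of once per write.

```python
# Toy model of OS-level write coalescing: buffer dirty blocks in RAM and
# flush them in one batch so the laptop drive only wakes up briefly.
# Names and the threshold policy are illustrative, not a real cache design.

class CoalescingCache:
    def __init__(self, disk, flush_threshold=8):
        self.disk = disk                  # dict: block number -> data
        self.dirty = {}                   # pending writes held in RAM
        self.flush_threshold = flush_threshold
        self.spinups = 0                  # how often the disk had to wake

    def write(self, block, data):
        self.dirty[block] = data          # absorb the write; disk stays asleep
        if len(self.dirty) >= self.flush_threshold:
            self.flush()

    def flush(self):
        if self.dirty:
            self.spinups += 1             # one spin-up services the whole batch
            self.disk.update(self.dirty)
            self.dirty.clear()
```

With the threshold at 8, sixteen scattered writes cost two spin-ups rather than sixteen; that is the whole benefit the hybrid-drive hardware is chasing, done in software.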
highlandsun wrote:We shouldn't even have "boot drives" in the normal sense. My old Atari ST had its entire OS in ROM, just turn the thing on and the desktop is running.
Ah yes, the last great OS to do so, and the only GUI to do so, AFAIK. The Amiga went halfway--about half of the OS was in ROM, but the other half was disk based. It had good boot times compared to the Mac, which had its GUI OS entirely on disk. But it didn't boot nearly as fast as DOS, which wasn't a GUI and barely even deserved to be called an OS.
highlandsun wrote: Booting a computer by loading file after file off a rotating media filesystem is always going to be slow. The way to speed it up is to snapshot the OS memory once bootup is completed, and stash the snapshot onto your nonvolatile memory device. The next time the system boots, you can either run the OS directly off the external memory, or copy it verbatim into RAM and run it there.
This is sort of the way Knoppix based Linux distributions work, when installed to boot off of a USB thumbdrive (or when installed via "poor man's install" onto a CF card via an IDE->CF adapter). However, USB and CF cards are very slow, so these distributions store the OS in compressed form. That speeds things up considerably, as well as requiring much less space.

highlandsun
Posts: 139
Joined: Thu Nov 10, 2005 2:04 am
Location: Los Angeles, CA
Contact:

Post by highlandsun » Mon May 01, 2006 12:04 am

mellon wrote:SSD is technologically already here: you can get 128GB in a 2.5" form factor (a bit thicker than usual, though) with an IDE interface, a 5-year warranty and pretty nice read/write speeds: http://www.m-sys.com/site/en-US/Product ... ra_ATA.htm Higher capacities can be found in the SCSI lineup.

The cost is unbelievably high, though. Maybe it's mostly due to market differentiation; I guess it is more profitable to sell those drives to military customers for extremely high prices than it would be to slash the price by 50% and try to approach some middle market like automotive applications.
Well, a Mil-spec product is also going to use more expensive components than automotive spec or consumer grade. The interesting thing is, there generally are complete parallel lines of components for all 3 grades, with commensurate price breaks. But while the overall design for a consumer drive of equal capacity would be simpler (control firmware doesn't need to be as fault-tolerant) and much cheaper to build (raw components are cheaper), no one is doing so. Samsung's announcements in these areas (16GB 1.8") are still just stupid toys. m-sys.com are probably not set up for high-volume production, but they could easily run the same assembly lines, turning out consumer grade and automotive grade versions of their existing product designs, and sell into those markets. I guess you're right that there's not enough profit in it for them, though; supporting the distribution chain for higher volumes probably makes it too much trouble.

highlandsun
Posts: 139
Joined: Thu Nov 10, 2005 2:04 am
Location: Los Angeles, CA
Contact:

Post by highlandsun » Mon May 01, 2006 12:15 am

IsaacKuo wrote:
frostedflakes wrote:What do you mean by in software? Caching hard drive data in existing memory (i.e. RAM), as opposed to adding more to the hard drive?
Yes. The basic concept of saving up changes in solid state memory so that the laptop drive is only spun up briefly is good, but it seems terribly wasteful to me to add specialized hardware to the hard drive to accomplish this. The concept can and should be accomplished by a change to OS software, so every user benefits at no additional hardware cost.
Well, yes and no. Unix is traditionally very good about this, with the BSD 4.3 FFS only flushing dirty pages to disk once every 30 seconds. Windows is very bad about this, with Windows XP flushing 1/4 of all dirty pages to disk every single second, guaranteeing that your drive has to stay spun up as long as any tasks are doing anything on the machine. But put in perspective, the Windows 1 second interval makes sense - to avoid too much data loss from the OS spontaneously crashing, from people tripping over power cords, etc., in most situations you don't want Windows caching 30 seconds worth of changes. The potential for expensive loss is just too great, especially since OS-controlled caches in system RAM are purely volatile.
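A back-of-envelope look at that trade-off. The 30 s (BSD FFS) and 1 s (Windows XP) intervals are the figures quoted above; the 5 s a drive needs to wake, flush, and park again is an assumed round number, not a measured one.

```python
# Rough model of flush interval vs. drive activity: if flushes arrive
# faster than the drive can service them and park, it never spins down.
# The 5 s service time is an assumption for illustration only.

def drive_duty_cycle(flush_interval_s, service_time_s=5.0):
    # fraction of time the disk must stay spun up just to service flushes
    return min(1.0, service_time_s / flush_interval_s)

for name, interval in [("BSD FFS, 30 s", 30.0), ("Windows XP, 1 s", 1.0)]:
    print(name, drive_duty_cycle(interval))
```

Under these assumptions the 1 s policy pins the drive at a 100% duty cycle (it can never park), while the 30 s policy leaves it idle most of the time; the price, as noted above, is up to 30 seconds of changes at risk on a crash.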

Ideally I think this cache should be in the hard disk controller (using Flash) so it can survive past unstable OS's, and be OS independent as well. Then every user still benefits, transparently. The alternative, for systems with multiple disk controllers, is to provide an explicit chunk of Flash and be forced to develop support for it for every OS of interest. (Well, at least today, there aren't that many interesting OSs left.)
IsaacKuo wrote: This is sort of the way Knoppix based Linux distributions work, when installed to boot off of a USB thumbdrive (or when installed via "poor man's install" onto a CF card via an IDE->CF adapter). However, USB and CF cards are very slow, so these distributions store the OS in compressed form. That speeds things up considerably, as well as requiring much less space.
Makes sense; now they should just make this the standard boot mechanism regardless of boot media or standalone configuration...

Jordan
Posts: 557
Joined: Wed Apr 28, 2004 8:21 pm
Location: Scotland, UK

Post by Jordan » Wed May 24, 2006 11:34 pm

Sony laptop to ship with 32Gb Samsung SSD

http://www.pcpro.co.uk/news/87699/sony- ... aptop.html

clocker
Posts: 20
Joined: Sat Dec 25, 2004 7:34 pm

Post by clocker » Thu May 25, 2006 2:01 am

highlandsun wrote: Well, a Mil-spec product is also going to use more expensive components than automotive spec or consumer grade.
[slight threadjack]This is not necessarily true at all.
In most cases what differentiates a "mil-spec" part from a normal consumer grade part is not the inherent quality/performance of the part itself, rather it's the incredible paper trail that documents that the supplied part is what it claims to be.
Assuming no intent to deceive (which is of course, a danger in any large project) a mil-spec part may actually be inferior to a cutting-edge consumer part simply because the newer, better component exceeds the contract specs or the vendor cannot/will not supply the required documentation.

This is how we end up with $300 toilet seats in government projects...they don't last any longer or fit your ass any better, but there is an 8000 page file that documents every stage of the manufacture/delivery/install of the part.

The procurement department's ass is well covered, so to speak.[/threadjack]

inti
Posts: 118
Joined: Thu Feb 10, 2005 7:09 am
Location: here

Post by inti » Thu May 25, 2006 3:39 am

highlandsun wrote:The way to speed it up is to snapshot the OS memory once bootup is completed, and stash the snapshot onto your nonvolatile memory device. The next time the system boots, you can either run the OS directly off the external memory, or copy it verbatim into RAM and run it there. The memory-to-memory copy will still be quicker than paging data off disk, doing symbol resolution and address relocation, etc... When you install patches or OS upgrades onto the filesystem, you'll have to set a flag telling the system to generate a new snapshot on next boot, no big deal. You can even get clever and set a write-protect bit on the snapshot, so that viruses etc. can't inject themselves into the OS without your knowledge. Trivially easy stuff, that the little-guy innovators were doing 20 years ago, that Microsoft still doesn't understand today. And so, the state of the art in computer science is retarded even further...
This was a very interesting post. But I think it can be done at present, sort of: essentially that is how Hibernate works. The RAM image file hiberfil.sys is loaded from the boot drive into RAM by NTLDR. I've not tried this yet but clearly you can put NTLDR onto a CD-ROM (as Windows is able to boot itself from a CD-ROM); maybe you could also put hiberfil.sys onto the boot CD-ROM? In that case you should equally be able to put the same file structure onto a solid-state drive.

Engine
Posts: 118
Joined: Thu May 18, 2006 10:07 am

Post by Engine » Thu May 25, 2006 4:59 am

highlandsun wrote:We shouldn't even have "boot drives" in the normal sense. My old Atari ST had its entire OS in ROM, just turn the thing on and the desktop is running. Hard to believe that with two decades of technological "progress" we've only managed to *slow boot times down* by a factor of 30 or so, while also exposing ourselves to viruses of all kinds.
It's not really hard to believe once you realize that the operating systems of today are vastly more than 30 times more complex - and capable of doing vastly more than 30 times more - than the OS on the Atari ST. This isn't even a logical comparison.
highlandsun wrote:(With an OS in ROM, obviously viruses cannot alter system code, so they can't propagate themselves as easily.)
You also cannot alter the system code, meaning your OS is unpatchable without a hardware update. Today's operating systems are simply too complex to be able to guarantee that the first version will be absolutely flawless. People can hack on Microsoft all they want, but this is true of *nix and every Mac OS. And since every PC will have to have some portion of storage which can be altered, leaving the OS untouchable simply moves the "target" to programs, which are as susceptible to alteration as the OS, if not more so.
highlandsun wrote:You can even get clever and set a write-protect bit on the snapshot, so that viruses etc. can't inject themselves into the OS without your knowledge. Trivially easy stuff, that the little-guy innovators were doing 20 years ago, that Microsoft still doesn't understand today. And so, the state of the art in computer science is retarded even further...
The "little-guy innovators" didn't have thousands of people writing viruses to infect their code. The "little-guy innovators" didn't have to write an OS that would run on hundreds of thousands of combinations of hardware. There's simply no comparison between computing 20 years ago - when viruses could only be transmitted, I remind you, by floppy and files you might suck off a BBS - and computing today. This is as fallacious as the Atari ST comparison. We can yearn for the "good old days," but they're simply not useful in discerning direct courses of action today; they are only useful as references: the true solutions will have to be much more complex.

I know Microsoft is a favorite target, but logic simply doesn't support the charge of their drastic retardation. I don't agree with everything they do, but it's simply not a case of "MS bad, little guy good." Oversimplification doesn't render problems into solutions; it simply makes for satisfying crucifixion.

You could get the security benefits of the ROM-OS - although not the boot speed - by simply running your OS from CD. But it won't matter, because virus writers simply move their target to applications. Computer security will never be "solved;" it will continually be a give-and-take between hackers and security professionals, because anything that can be altered intentionally - and computers are useless without some capacity for alteration - can be altered unintentionally. Even if every operation of the OS had to be confirmed by the user, the emphasis of virus writers would simply move to social engineering. "Your computer is infested with spyware! Click here to fix the problem!"

I wish there were simple solutions. I wish you could put the perfect OS on a ROM chip and leave it at that, buying OS updates on chip every so often, but it wouldn't actually fix anything. Instead, we need steady, manageable, incremental solutions, and constant vigilance and attention.

Oh, on-topic: the instant I can buy an SSD with enough room to run my OS and my major applications for less than, say, US$300, I'll do so. Then my archival storage goes to a file server in the basement, and there's one more source of noise gone. Power loss is certainly a concern - "Oh. Whoops. I forgot to replace the SSD battery. There goes my damn Windows." - but that's really a flaw of the user, and not the hardware.

IsaacKuo
Posts: 1705
Joined: Fri Jan 23, 2004 7:50 am
Location: Baton Rouge, Louisiana

Post by IsaacKuo » Thu May 25, 2006 6:14 am

Engine wrote:
highlandsun wrote:We shouldn't even have "boot drives" in the normal sense. My old Atari ST had its entire OS in ROM, just turn the thing on and the desktop is running. Hard to believe that with two decades of technological "progress" we've only managed to *slow boot times down* by a factor of 30 or so, while also exposing ourselves to viruses of all kinds.
It's not really hard to believe once you realize that the operating systems of today are vastly more than 30 times more complex - and capable of doing vastly more than 30 times more - than the OS on the Atari ST. This isn't even a logical comparison.
But there's a good question--does a modern OS NEED to be vastly more than 30 times more complex? In particular, the original Amiga OS had most of the features most users would ever need except for a TCP/IP stack--and it was only about twice the size of the Atari ST OS. We're still talking about less than 1 megabyte of code for the core OS and utilities.
highlandsun wrote:(With an OS in ROM, obviously viruses cannot alter system code, so they can't propagate themselves as easily.)
You also cannot alter the system code, meaning your OS is unpatchable without a hardware update.
This was a problem with old fashioned ROM chips. Of course, today we use cheaper flash, which can be rewritten. There are myriad devices which run directly off flash memory, of course.
You could get the security benefits of the ROM-OS - although not the boot speed - by simply running your OS from CD. But it won't matter, because virus writers simply move their target to applications.
The applications can ALSO be put on CD. The basic idea behind Knoppix and other live-CD OS's is to package a fully functional OS with applications on a single CD or DVD. Some of them are even small enough to completely load in RAM so the disc may be ejected and the optical drive used for other purposes.

But today's Linux based live-CDs are inelegant hacks. Like Windows, Linux has always been designed around a hard drive installation. Thus, it expects many directories to be rewritable in normal operation. To get a LiveCD Linux to work, there must be a hack to work around this expectation. As far as the OS is concerned, it's not "really" running directly off the CD. This sort of hack isn't really reversible, so you can't just "snapshot" a Linux OS to CD-ROM and then expect it to work without extra effort.
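The workaround being described is essentially a union of a read-only base layer (the CD image) with a RAM-backed writable layer that absorbs all changes. A miniature sketch of that lookup logic - a toy model, not any particular distribution's mechanism:

```python
# Toy union filesystem: reads fall through to a read-only base (the "CD"),
# writes and deletions land in a RAM overlay. Illustrative names only.

class UnionFS:
    def __init__(self, readonly_base):
        self.base = dict(readonly_base)   # stands in for the CD contents
        self.overlay = {}                 # tmpfs-like writable layer in RAM
        self.whiteouts = set()            # deletions recorded in the overlay

    def read(self, path):
        if path in self.whiteouts:
            raise FileNotFoundError(path)
        if path in self.overlay:          # newest copy wins
            return self.overlay[path]
        return self.base[path]            # fall through to the read-only CD

    def write(self, path, data):
        self.overlay[path] = data         # the CD layer is never touched
        self.whiteouts.discard(path)

    def delete(self, path):
        self.overlay.pop(path, None)
        self.whiteouts.add(path)          # mask the file on the base layer
```

The OS sees directories it can "write" to, while the base image stays pristine; discard the overlay and you are back to a clean boot, which is exactly why this hack is hard to snapshot back onto the CD.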

What would really be ideal would be if hard drives had had physical write protect switches from the start--like floppy discs. Floppy discs always had a hardware level physical write protect option. As such, a lot of floppy based software kept this in mind. For example, the Amiga OS was originally floppy based. It was expected that you'd write protect the OS disk, except when you wanted to make basic changes. Thus, the Amiga OS was always "snapshotable".

If hard drives had had physical write protect switches from the start, we'd already be accustomed to the idea of leaving the OS partition locked by default and only manually switching it to be writeable when making base changes. Operating systems would naturally be designed so that executables were all grouped together within the write protected partition.
Computer security will never be "solved;" it will continually be a give-and-take between hackers and security professionals, because anything that can be altered intentionally - and computers are useless without some capacity for alteration - can be altered unintentionally. Even if every operation of the OS had to be confirmed by the user, the emphasis of virus writers would simply move to social engineering. "Your computer is infested with spyware! Click here to fix the problem!"
There are lots of useful electronics devices with limited capacity for alteration. Videogame consoles do rather well running directly off of CD. Cell phones are getting more and more capable, even though the user can't typically do much to alter the software.
I wish there were simple solutions. I wish you could put the perfect OS on a ROM chip and leave it at that, buying OS updates on chip every so often, but it wouldn't actually fix anything. Instead, we need steady, manageable, incremental solutions, and constant vigilance and attention.
It depends on your definition of "simple". The ultimate solution to the home computer security problem could very well be the elimination of the home computer altogether. Depending on your point of view, this solution is either very "simple" or very "radical".

I've felt for many years that cell phone functionality is going to eventually expand to such a degree that most people have neither need nor desire for a personal computer. They've already almost got enough computing power. The missing components are the user interface and non-crippleware software. The user interface side needs wireless TV/Monitor and wireless keyboard support. The software side just needs to be freed from the fear of RIAA/MPAA thuggery.

Engine
Posts: 118
Joined: Thu May 18, 2006 10:07 am

Post by Engine » Thu May 25, 2006 6:38 am

IsaacKuo wrote:But there's a good question--does a modern OS NEED to be vastly more than 30 times more complex? In particular, the original Amiga OS had most of the features most users would ever need except for a TCP/IP stack--and it was only about twice the size of the Atari ST OS. We're still talking about less than 1 megabyte of code for the core OS and utilities.
While I believe a modern OS - for the PC; Mac is a different story - does need to be 30 times more complex than that of the Amiga or Atari, I don't believe it /needs/ to be as complex as it is. Microsoft is to be lauded for making an OS that works on most everything and that my grandmother can install and use, but we all know their OSes have made compromises because of it. It's kind of like the difference between an Ariel Atom and a Chevy Cobalt.

Bloat sucks. I hate bloat. There is more bloat than there has to be in most OSes, even considering the myriad devices and activities the OS has to be capable of operating with. OSes are bigger and less efficient than they could be, but not by as much as we enthusiasts usually think.
IsaacKuo wrote:
Engine wrote:
highlandsun wrote:(With an OS in ROM, obviously viruses cannot alter system code, so they can't propagate themselves as easily.)
You also cannot alter the system code, meaning your OS is unpatchable without a hardware update.
This was a problem with old fashioned ROM chips. Of course, today we use cheaper flash, which can be rewritten. There are myriad devices which run directly off flash memory, of course.
Absolutely, but using flash versus using ROM means you eliminate the security reasons for putting the OS on something solid state. It's the inverse of running the OS from a CD: you get the performance, but not the security.
IsaacKuo wrote:The applications can ALSO be put on CD. The basic idea behind Knoppix and other live-CD OS's is to package a fully functional OS with applications on a single CD or DVD. Some of them are even small enough to completely load in RAM so the disc may be ejected and the optical drive used for other purposes.
And in some circumstances, this is possible, even desirable. Running everything off CD - well, let's say DVD, since you can't fit that much on a CD anymore - is really great...except that the first executable anything you put on writeable media becomes capable of compromising the system as soon as anything's put in RAM. But the success of thumb-drive, floppy, and CD-based OS/application kits shows that something like this is possible...but don't expect to have it run on every piece of hardware available, or do all the whacky things people want their computers to do nowadays. For enthusiasts, it's a great solution, and you could even completely write and do such a thing yourself. But for someone who wants doors and a roof on their car, someone has to make the Chevy Cobalt. For you and me, the Ariel Atom will do quite nicely.
IsaacKuo wrote:What would really be ideal would be if hard drives had had physical write protect switches from the start--like floppy discs.
Hey, you know, mine used to. I used to have jumpers on my drives that write-protected them. I'd never noticed they weren't on modern drives until recently, when I needed one. Does anyone know what happened to the write-protect jumper on hard disks? Presumably, not something people needed particularly often.
IsaacKuo wrote:There are lots of useful electronics devices with limited capacity for alteration. Videogame consoles do rather well running directly off of CD. Cell phones are getting more and more capable, even though the user can't typically do much to alter the software.
But each of those devices is designed to do a fairly specific task on fairly specific hardware. And even then, when the door is opened even a crack...well, I'm sure you've seen some of the newer cellphone viruses. Gods, what a mess it will be when the world is wireless and hackable. [I don't mind messes, myself; I'm rather looking forward to it.]
IsaacKuo wrote:
Engine wrote:I wish there were simple solutions. I wish you could put the perfect OS on a ROM chip and leave it at that, buying OS updates on chip every so often, but it wouldn't actually fix anything. Instead, we need steady, manageable, incremental solutions, and constant vigilance and attention.
It depends on your definition of "simple". The ultimate solution to the home computer security problem could very well be the elimination of the home computer altogether. Depending on your point of view, this solution is either very "simple" or very "radical".
At the very least, I think some radical reinterpretation of the function of personal computers will be necessary, whether it's elimination and replacement, or just a rethink of their basic methodology. These solutions are radical, but could also be "simple." Like you say, it's semantic.
IsaacKuo wrote:I've felt for many years that cell phone functionality is going to eventually expand to such a degree that most people have neither need nor desire for a personal computer. They've already almost got enough computing power. The missing components are the user interface and non-crippleware software. The user interface side needs wireless TV/Monitor and wireless keyboard support. The software side just needs to be freed from the fear of RIAA/MPAA thuggery.
Ah, but once cell phones are required to do all the things PCs do today - like run all software, on various different hardware configurations, be interconnectable to the degree users demand, and accessible enough for anyone to use conveniently, well then all we'll have is a personal computer that's very small...it won't be more secure. It is the function of the device that determines its requirements, not its form factor.

IsaacKuo
Posts: 1705
Joined: Fri Jan 23, 2004 7:50 am
Location: Baton Rouge, Louisiana

Post by IsaacKuo » Thu May 25, 2006 7:29 am

Engine wrote:
IsaacKuo wrote:But there's a good question--does a modern OS NEED to be vastly more than 30 times more complex? In particular, the original Amiga OS had most of the features most users would ever need except for a TCP/IP stack--and it was only about twice the size of the Atari ST OS. We're still talking about less than 1 megabyte of code for the core OS and utilities.
While I believe a modern OS - for the PC; Mac is a different story - does need to be 30 times more complex than the Amiga's or Atari's, I don't believe they /need/ to be as complex as they are. Microsoft is to be lauded for making an OS that works on most everything and that my grandmother can install and use, but we all know their OSes have made compromises because of it. It's kind of like the difference between an Ariel Atom and a Chevy Cobalt.

Bloat sucks. I hate bloat. There is more bloat than there has to be in most OSes, even considering the myriad devices and activities the OS has to be capable of operating with. OSes are bigger and less efficient than they could be, but not by as much as we enthusiasts usually think.
I disagree. My experience with software development is that bloat exists mainly because it can. It's all a matter of diminishing returns. It takes time and effort to eliminate bloat. Sometimes, the benefits of such optimization are worth it. Other times, that time and effort would be better spent developing new features or debugging.

Is there a point in optimizing an OS to fit in 2 megabytes today? No, not really. If you can get the OS down to, say, 10% of the cheapest hard drive's capacity, then surely that's good enough, right?
IsaacKuo wrote:
Engine wrote: You also cannot alter the system code, meaning your OS is unpatchable without a hardware update.
This was a problem with old fashioned ROM chips. Of course, today we use cheaper flash, which can be rewritten. There are myriad devices which run directly off flash memory, of course.
Absolutely, but using flash versus using ROM means you eliminate the security reasons for putting the OS on something solid state. It's the inverse of running the OS from a CD: you get the performance, but not the security.
Well, flash is typically slow also. It's used because it's compact, low power, and reliable.

The security can be taken care of in various ways. With most flash based devices, the ways to perform a flash update are extremely limited and inherently difficult to exploit. I'd personally prefer a hardware write protect toggle switch somewhere, but I can see that for most of these devices it'd be superfluous.
IsaacKuo wrote:The applications can ALSO be put on CD. The basic idea behind Knoppix and other live-CD OS's is to package a fully functional OS with applications on a single CD or DVD. Some of them are even small enough to completely load in RAM so the disc may be ejected and the optical drive used for other purposes.
And in some circumstances, this is possible, even desirable. Running everything off CD - well, let's say DVD, since you can't fit that much on a CD anymore - is really great... except that the first executable anything you put on writeable media becomes capable of compromising the system as soon as anything's put in RAM.
No, it isn't. That's the whole point behind hardware memory protection. Only the system kernel has unrestricted access to all RAM. All other processes will segmentation fault if they attempt to access RAM outside of their own allowed memory space.

Also, there are types of executable code which inherently have limited capabilities--Java, javascript, flash/shockwave, and .net being noteworthy examples.
But the success of thumb-drive, floppy, and CD-based OS/application kits shows that something like this is possible...but don't expect to have it run on every piece of hardware available, or do all the whacky things people want their computers to do nowadays.
I find that Knoppix-based liveCDs run on more hardware out of the box than any flavor of Windows ever will, even after installing all available drivers. Older versions of Windows have difficulty with modern hardware; newer versions of Windows have difficulty with older hardware.
IsaacKuo wrote:What would really be ideal would be if hard drives had had physical write protect switches from the start--like floppy discs.
Hey, you know, mine used to. I used to have jumpers on my drives that write-protected them. I'd never noticed they weren't on modern drives until recently, when I needed one. Does anyone know what happened to the write-protect jumper on hard disks? Presumably, not something people needed particularly often.
How many people are ever going to go through the effort of accessing a write protect jumper? No, I mean something that would plausibly actually be used--like a toggle switch on the computer's front panel.

Oh well, it's a moot point. What's done is done.
IsaacKuo wrote:There are lots of useful electronics devices with limited capacity for alteration. Videogame consoles do rather well running directly off of CD. Cell phones are getting more and more capable, even though the user can't typically do much to alter the software.
But each of those devices is designed to do a fairly specific task on fairly specific hardware. And even then, when the door is opened even a crack...well, I'm sure you've seen some of the newer cellphone viruses. Gods, what a mess it will be when the world is wireless and hackable.
The things that most people want to do with a computer amount to just a relatively small collection of fairly specific tasks. Heck, just something that could do nothing but browse the web could have been highly successful--were it not for the IE-Netscape war of constant WWW bloatification. The only thing that kept internet appliances from taking off was the fact that they didn't work with a large and ever increasing fraction of web sites.
IsaacKuo wrote:I've felt for many years that cell phone functionality is going to eventually expand to such a degree that most people have neither need nor desire for a personal computer. They've already almost got enough computing power. The missing components are the user interface and non-crippleware software. The user interface side needs wireless TV/Monitor and wireless keyboard support. The software side just needs to be freed from the fear of RIAA/MPAA thuggery.
Ah, but once cell phones are required to do all the things PCs do today - run all software, work across different hardware configurations, interconnect to the degree users demand, and stay accessible enough for anyone to use conveniently - then all we'll have is a personal computer that happens to be very small... it won't be any more secure. It is the function of the device that determines its requirements, not its form factor.

The function of the device does not determine its security. The series of historical accidents in the lineage of its development determines its security.

It's something of a myth that Linux is more secure than Windows because *nix was designed from the start with security in mind. It wasn't. Unix was originally designed to be a simple single user OS with performance and simplicity as its goals rather than any sort of security. The original Unix was more like DOS than anything else. As Unix grew, it gained multi-tasking, multi-user, and networking capabilities, and had all sorts of security problems! So why is the design of *nix more secure than the design of Windows? More or less, it's because of time. *nix was multi-user and networked long before DOS/Windows. This gave *nix software developers a lot more time to fix all the security problems. By the time Linux was developed, *nix had mostly worked out the kinks.

With cell phones, there's a perfect opportunity to avoid all of the pitfalls by either starting from scratch or latching onto the mature *nix software legacy. They don't need any sort of Windows compatibility.

Engine
Posts: 118
Joined: Thu May 18, 2006 10:07 am

Post by Engine » Thu May 25, 2006 8:07 am

IsaacKuo wrote:I disagree. My experience with software development is that bloat exists mainly because it can. It's all a matter of diminishing returns. It takes time and effort to eliminate bloat. Sometimes, the benefits of such optimization are worth it. Other times, that time and effort would be better spent developing new features or debugging.

Is there a point in optimizing an OS to fit in 2 megabytes today? No, not really. If you can get the OS down to, say, 10% of the cheapest hard drive's capacity, then surely that's good enough, right?
Absolutely. I do not disagree with any of that. However, most of what people consider "bloat" in today's Windows - for instance - is because some user, somewhere, needs some feature that the person complaining of bloat doesn't need. And for every feature, every hardware configuration, every capacity, inefficiency expands in a nonlinear fashion.
IsaacKuo wrote:
Engine wrote:And in some circumstances, this is possible, even desirable. Running everything off CD - well, let's say DVD, since you can't fit that much on a CD anymore - is really great... except that the first executable anything you put on writeable media becomes capable of compromising the system as soon as anything's put in RAM.
No, it isn't. That's the whole point behind hardware memory protection. Only the system kernel has unrestricted access to all RAM. All other processes will segmentation fault if they attempt to access RAM outside of their own allowed memory space.
That doesn't even remotely prevent viruses or security flaws/attacks. The OS can be completely extraneous to the implementation of security attacks. That's why even running your OS on a ROM chip won't prevent security breaches: because if you can't alter the OS, you simply alter applications.
IsaacKuo wrote:Also, there are types of executable code which inherently have limited capabilities--Java, javascript, flash/shockwave, and .net being noteworthy examples.
...yes, but those limited capabilities have not prevented exploits of any of those types of executable code.
IsaacKuo wrote:I find that Knoppix-based liveCDs run on more hardware out of the box than any flavor of Windows ever will, even after installing all available drivers. Older versions of Windows have difficulty with modern hardware; newer versions of Windows have difficulty with older hardware.
Um. Anyway, I don't really want to get into a Knoppix versus Windows debate, as to which will run on more hardware with more convenience and do more tasks more accessibly for more people. I think the answer is clear, and you probably do, too, although we might not agree on the answer. I don't think either of us can add anything substantive to the issue.

Suffice it to say, I certainly agree a completely bootable OS/application distribution on read-only media is possible, and done on, say, CD or DVD, even desirable [ROM chips have certain benefits over optical media, but the replacement cost when patching time comes around is prohibitive]. However, such a solution does not, and never will, eliminate security concerns, because something, somewhere, in the computer must be capable of semi-permanently writing new data, or the machine is useless for the function of a personal computer. For the functions of today's PCs, some vulnerability is absolutely necessary.

It's like a house. I could build a house no one could get into, but what use is that? Once you build a door, it doesn't matter how good the lock is, someone will be able to get in without the original key. So locksmiths and thieves play a constant game of one-upmanship. This is natural, necessary, and inevitable.
IsaacKuo wrote:
Engine wrote:Hey, you know, mine used to. I used to have jumpers on my drives that write-protected them. I'd never noticed they weren't on modern drives until recently, when I needed one. Does anyone know what happened to the write-protect jumper on hard disks? Presumably, not something people needed particularly often.
How many people are ever going to go through the effort of accessing a write protect jumper? No, I mean something that would plausibly actually be used--like a toggle switch on the computer's front panel.
Well, anything that can be run from a jumper can have a switch routed to it; it's simply a question of wiring a switch inline between the jumper connections. I do see your point that the hassle is much of the reason people didn't commonly use those jumpers - though I still never understood why they weren't embraced by people who wanted an unalterable OS.

A SSD with a hardware write-protect switch would certainly be possible, but ultimately, it just doesn't matter; once you open the door, someone can club you over the head and step inside.
IsaacKuo wrote:The things that most people want to do with a computer amount to just a relatively small collection of fairly specific tasks.
Ah, but that's only the individual. One person only does just so many things with a computer, and does them on a fairly specific piece of hardware - unless you're changing hardware constantly, which isn't, you know, unheard-of. But writing an OS for one person isn't particularly profitable, although it's certainly possible. Taken together, the mass of people who use an operating system use many more than "a relatively small collection of fairly specific tasks," and thus we have, by necessity, complex operating systems. Noncommercial or specific-need OSes can afford to be bare and simple - they're the Ariel Atoms of the computing world - but any mass-market OS will have to be complex, because too many different people want to do too many different things with too many different pieces of hardware. Linux and the Mac OSes have shown that there is some market in pandering to a smaller group, but...well, there's a reason Microsoft makes so much more money than those other people, and it's certainly not the efficiency or security of their software: it's usability, compatibility, and commonality.
IsaacKuo wrote:The function of the device does not determine its security. The series of historical accidents in the lineage of its development determines its security.
Let's be honest: both do. The fact that someone has to be able to get into a house means that it must have a door, which determines its maximum security. The series of historical accidents in the lineage of locking mechanisms will determine its functional security, which may or may not be its maximum security.
IsaacKuo wrote:It's something of a myth that Linux is more secure than Windows because *nix was designed from the start with security in mind. So why is the design of *nix more secure than the design of Windows? More or less, it's because of time. *nix was multi-user and networked long before DOS/Windows.
I believe that's part of it. I also believe part of it is that there are orders of magnitude more users of Windows than *nix. I also believe a fairly significant portion of it is that during the early years, MS was so obsessed with the ideals of interconnectivity that they lost sight of security as a priority. I don't think something as complex as the comparison between *nix security and Windows security can be said to be a single thing.
IsaacKuo wrote:With cell phones, there's a perfect opportunity to avoid all of the pitfalls by either starting from scratch or latching onto the mature *nix software legacy. They don't need any sort of Windows compatability.
Of course they do. Why? Because 18 trillion people use Windows, and if you ever want to sell a device that will replace their computers with your device, you're going to need to make it compatible with the device they have. Oh, damn, "legacy." These are the problems MS faces every product cycle, and while it's easy for us to say, yes, ideally, there's no reason for a brand new product to be able to communicate in any way with the old one, that only makes the product /work:/ it doesn't make it /useful./

IsaacKuo
Posts: 1705
Joined: Fri Jan 23, 2004 7:50 am
Location: Baton Rouge, Louisiana

Post by IsaacKuo » Thu May 25, 2006 9:15 am

Engine wrote:
IsaacKuo wrote:
Engine wrote:And in some circumstances, this is possible, even desirable. Running everything off CD - well, let's say DVD, since you can't fit that much on a CD anymore - is really great... except that the first executable anything you put on writeable media becomes capable of compromising the system as soon as anything's put in RAM.
No, it isn't. That's the whole point behind hardware memory protection. Only the system kernel has unrestricted access to all RAM. All other processes will segmentation fault if they attempt to access RAM outside of their own allowed memory space.
That doesn't even remotely prevent viruses or security flaws/attacks. The OS can be completely extraneous to the implementation of security attacks. That's why even running your OS on a ROM chip won't prevent security breaches: because if you can't alter the OS, you simply alter applications.
You are simply completely wrong. You can't just alter applications in RAM. You need a "rootkit" exploit to do this (something that gets you unlimited access within the computer). Now, it may be that it's easier to get root access in Windows, but you still need it first.

Ever since the 68030 and 386, hardware memory protection has been more or less standard. The main purpose for it wasn't so much security from malicious hacks, but rather the desire for system stability. Without hardware memory protection, any bug in any task could compromise the entire system because any executable could write data into any memory location. Hardware memory protection eliminated this problem by sandboxing any task's memory access abilities to a limited range of addresses. The only way outside this sandbox was with a special interrupt which inherently handed control over to kernel code.

Ever since hardware memory protection was implemented, root level attacks required exploiting bugs within the system kernel itself.
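The sandbox described above is easy to see in action. Here is a minimal sketch in Python (purely illustrative: ctypes is only used to force an out-of-bounds read, and the Unix convention of a negative return code for a signal death is assumed):

```python
import signal
import subprocess
import sys

# Force a wild memory access in a separate, expendable Python process.
# ctypes.string_at(1) tries to read a string at address 1, which is not
# in the process's mapped memory, so the MMU traps the access and the
# kernel kills the offending process (SIGSEGV on Unix).
bad_access = "import ctypes; ctypes.string_at(1)"
proc = subprocess.run([sys.executable, "-c", bad_access],
                      stderr=subprocess.DEVNULL)

# Only the faulting process dies; this one is untouched. That isolation
# is exactly what hardware memory protection buys you.
print(proc.returncode != 0)  # True: the child was killed, the parent wasn't
```

The same fault without memory protection (as on pre-386 PCs or the original Amiga) could have scribbled over any other task's memory, or the kernel's.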
IsaacKuo wrote:Also, there are types of executable code which inherently have limited capabilities--Java, javascript, flash/shockwave, and .net being noteworthy examples.
...yes, but those limited capabilities have not prevented exploits of any of those types of executable code.
Yes, they have. Any of the above stands in stark contrast to, say, Microsoft Word scripts.
IsaacKuo wrote:I find that Knoppix-based liveCDs run on more hardware out of the box than any flavor of Windows ever will, even after installing all available drivers.
Um. Anyway, I don't really want to get into a Knoppix versus Windows debate, as to which will run on more hardware with more convenience and do more tasks more accessibly for more people. I think the answer is clear, and you probably do, too, although we might not agree on the answer. I don't think either of us can add anything substantive to the issue.
I've actually used both Knoppix and Windows extensively, on a wide range of hardware. I'll take my first hand experience over your speculation.
However, such a solution does not, and never will, eliminate security concerns, because something, somewhere, in the computer must be capable of semi-permanently writing new data, or the machine is useless for the function of a personal computer. For the functions of today's PCs, some vulnerability is absolutely necessary.
Knoppix, as well as other liveCDs, does in fact have built-in capabilities to write personal data and settings--either within a subfolder on a local hard drive or a thumbdrive or both. They are actually meant to be used, not merely to be technology demonstrators.
It's like a house. I could build a house no one could get into, but what use is that? Once you build a door, it doesn't matter how good the lock is, someone will be able to get in without the original key. So locksmiths and thieves play a constant game of one-upmanship. This is natural, necessary, and inevitable.
If Windows is like a house, *nix is like a hotel. You don't just give everyone full access to everything. You give appropriate access levels on an individual/group basis.
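That per-guest access is just the standard Unix owner/group/other permission bits. A quick sketch in Python using only standard library calls (the scratch file is purely illustrative):

```python
import os
import stat
import tempfile

# Create a scratch file and give it "owner only" permissions: the owner
# may read and write, but group members and everyone else get nothing.
fd, path = tempfile.mkstemp()
os.close(fd)
os.chmod(path, 0o600)

mode = stat.S_IMODE(os.stat(path).st_mode)
print(oct(mode))                   # 0o600
print(bool(mode & stat.S_IRUSR))   # True:  owner can read
print(bool(mode & stat.S_IROTH))   # False: "other" users cannot

os.remove(path)
```

Every file and process carries this kind of per-identity access level; there is no single master key handed to every logged-in user.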
IsaacKuo wrote:It's something of a myth that Linux is more secure than Windows because *nix was designed from the start with security in mind. So why is the design of *nix more secure than the design of Windows? More or less, it's because of time. *nix was multi-user and networked long before DOS/Windows.
I believe that's part of it. I also believe part of it is that there are orders of magnitude more users of Windows than *nix. I also believe a fairly significant portion of it is that during the early years, MS was so obsessed with the ideals of interconnectivity that they lost sight of security as a priority. I don't think something as complex as the comparison between *nix security and Windows security can be said to be a single thing.
What "early years" are you talking about? MS was obsessed with the non-networked standalone "multi-media" PC as the standard model up until well after Windows 95. They had no interest in interconnectivity or interoperability--in fact they went out of their way to deter interoperability where their interests dictated.
IsaacKuo wrote:With cell phones, there's a perfect opportunity to avoid all of the pitfalls by either starting from scratch or latching onto the mature *nix software legacy. They don't need any sort of Windows compatibility.
Of course they do. Why? Because 18 trillion people use Windows, and if you want to ever sell a device that will replace their computers with your device, you're going to need to make it compatible with the device they have.
Of course they don't. Cell phones aren't Windows compatible and no one cares about that. All they care about is their functionality. If it can play music and videos, and it works--then great.
These are the problems MS faces every product cycle, and while it's easy for us to say, yes, ideally, there's no reason for a brand new product to be able to communicate in any way with the old one, that only makes the product /work:/ it doesn't make it /useful./
There's a huge difference between being incompatible with Windows and not being able to communicate in any way with Windows. Macs and Linux computers can use the same text files, the same music and video files, and most other files. This is why Microsoft hates interoperability--it makes it so that consumers have a choice to switch to other vendors.

Even so, it seems that cell phones are doing well even with extremely limited interoperability! For example, cell phones feature text messaging--a redundant technology compared to e-mail and various IM systems. Rather than incorporating seamless interoperability with an existing computer standard, cell phones went with an independent parallel implementation. Similarly, sending/receiving pictures is also done with a parallel implementation. The same goes for video and music.

Of course, I'd prefer to see more interoperability and implementations using existing standards. That doesn't seem to be the current direction, though.

Engine
Posts: 118
Joined: Thu May 18, 2006 10:07 am

Post by Engine » Thu May 25, 2006 9:52 am

IsaacKuo wrote:You are simply completely wrong. You can't just alter applications in RAM.
I'm sorry; I'm not being clear. I don't necessarily mean altering a running application; I mean altering those aspects of the application which are alterable in RAM. As with the OS, preventing access to a running application means you can't alter it, but some aspect of the application must be alterable, or your entire computer is unchangeable.
IsaacKuo wrote:
Engine wrote:
IsaacKuo wrote:Also, there are types of executable code which inherently have limited capabilities--Java, javascript, flash/shockwave, and .net being noteworthy examples.
...yes, but those limited capabilities have not prevented exploits of any of those types of executable code.
Yes, they have. Any of the above stands in stark contrast to, say, Microsoft Word scripts.
I didn't say, "Java is worse than Word," I said that limiting the capabilities of those applications has not prevented exploits of them. Which it hasn't. It has made it more difficult, certainly, but has not prevented it, which is what we're talking about.
IsaacKuo wrote:I've actually used both Knoppix and Windows extensively, on a wide range of hardware. I'll take my first hand experience over your speculation.
Please explain how you think my experience is speculative and not personal. Anyway, you're welcome to take your opinion over mine; I wouldn't expect anything else, as I said previously. We disagree. I hope that's okay with you.
IsaacKuo wrote:
Engine wrote:However, such a solution does not, and never will, eliminate security concerns, because something, somewhere, in the computer must be capable of semi-permanently writing new data, or the machine is useless for the function of a personal computer. For the functions of today's PCs, some vulnerability is absolutely necessary.
Knoppix, as well as other liveCDs, does in fact have built-in capabilities to write personal data and settings--either within a subfolder on a local hard drive or a thumbdrive or both. They are actually meant to be used, not merely to be technology demonstrators.
Yes. I didn't dispute that. But Knoppix is, thus, vulnerable. Are you saying Knoppix is absolutely 100 percent unhackable in any way, that no security exploit could ever be written for Knoppix? Certainly not.
IsaacKuo wrote:If Windows is like a house, *nix is like a hotel. You don't just give everyone full access to everything. You give appropriate access levels on an individual/group basis.
Which Windows also does. Unfortunately, recent versions of Windows have tried to make access easier for users, and as a concession to accessibility, given every guest the master key to the house. This is remarkably stupid. It's an error they've rethought in Vista, but...well, it's a little late now.

*nix being like a hotel makes it harder for someone to obtain or utilize the master key. In no way does it make it impossible.
IsaacKuo wrote:What "early years" are you talking about? MS was obsessed with the non-networked standalone "multi-media" PC as the standard model up until well after Windows 95. They had no interest in interconnectivity or interoperability--in fact they went out of their way to deter interoperability where their interests dictated.
In this context, the "early years" would be pre-98, and the obsession didn't fully form until midway through that product cycle.
IsaacKuo wrote:Of course they don't. Cell phones aren't Windows compatible and no one cares about that.
...what cell phone do you use?
IsaacKuo wrote:If it can play music and videos, and it works--then great.
Music and video which...right, they often download from their Windows PCs. Moreover, we weren't talking about the current crop of cell phones, we were talking about your notion of cellphones replacing the PC. And in that context, interoperability is essential from a marketing standpoint.

inti
Posts: 118
Joined: Thu Feb 10, 2005 7:09 am
Location: here

Post by inti » Thu May 25, 2006 10:24 am

Wow! That's a lot of posting in a few hours - slow afternoon at work, guys?

My views on the interesting off topic conversation:
Windows could be a lot less vulnerable to security threats if (a) it were obvious to a user which files and processes belong to the operating system and which do not, and (b) operating system, firmware, and third-party settings were not all mixed together in the same places (the registry, the System32 folder, etc.). It seems like simply bad design. But until Windows Vista, Microsoft did not have security at the forefront of their design requirements. My friend is one of the team at Microsoft who have recently been rewriting every library with security in mind, and apparently it is a major job.

Back on topic, the post I made above about hiberfil.sys probably will not work: at least, it will probably crash your system if there have been changes to the filesystem since the hiberfil.sys image was made - certainly if programs have been installed or uninstalled, maybe even if there have been changes to the registry.

Engine
Posts: 118
Joined: Thu May 18, 2006 10:07 am

Post by Engine » Thu May 25, 2006 10:45 am

inti wrote:Wow! That's a lot of posting in a few hours - slow afternoon at work, guys?
Apparently. I've been on the board for, like, a week, and I'm already embroiled in a voluminous off-topic debate with no visible means of resolution. It's like moving to a new town and screwing the first person you meet: not the best establishment of reputation. So I'm going to stop, before everyone thinks I'm a whore. I mean, I am, but who wants people to know that? ;)

IsaacKuo
Posts: 1705
Joined: Fri Jan 23, 2004 7:50 am
Location: Baton Rouge, Louisiana

Post by IsaacKuo » Thu May 25, 2006 10:49 am

Engine wrote:I'm sorry; I'm not being clear. I don't necessarily mean altering a running application; I mean altering those aspects of the application which are alterable in RAM. As with the OS, preventing access to a running application means you can't alter it, but some aspect of the application must be alterable, or your entire computer is unchangeable.
For almost all applications, the only part which needs to be alterable is the data you're working on. Most applications don't even need any sort of scripting capability, much less the root-level access and auto-run capability Microsoft decided Word and Outlook needed.

Once upon a time, "e-mail virus" was just a laughable hoax. There was neither any way to incorporate executable code into an e-mail, nor was there ever any reason to do so. That was before Microsoft pointlessly added that "feature" to Outlook.
I didn't say, "Java is worse than Word," I said that limiting the capabilities of those applications has not prevented exploits of them. Which it hasn't. It has made it more difficult, certainly, but has not prevented it, which is what we're talking about.
Maybe it's what you're talking about. It seems that you consider any sort of exploit to be a failure. I consider the degree of an exploit to be an important consideration.

For example, consider traditional e-mail. Back in the old days, before even MIME attachments, e-mail could only consist of ASCII text. This prevented any sort of "e-mail worm" exploits because there simply wasn't any mechanism for them to work. OTOH, this didn't stop spamming exploits or e-mail fraud. So in a sense, the standard e-mail protocol did NOT prevent exploits altogether.
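To make the contrast concrete: MIME's whole trick is base64-encoding arbitrary bytes into 7-bit ASCII so they survive text-only mail transports--which is exactly the mechanism attachments (and attachment-borne worms) ride on. A sketch with Python's standard email package (the filename and payload are made-up examples):

```python
from email import message_from_bytes
from email.message import EmailMessage

# Build a message carrying arbitrary binary data as a MIME attachment.
msg = EmailMessage()
msg["Subject"] = "demo"
msg.set_content("plain ASCII body, like old-style mail")
payload = bytes(range(256))  # bytes that ASCII-only mail could never carry
msg.add_attachment(payload, maintype="application",
                   subtype="octet-stream", filename="blob.bin")

wire = msg.as_bytes()
print(all(b < 128 for b in wire))  # True: still pure 7-bit text on the wire

# The receiving side decodes the base64 back into the original bytes.
parts = list(message_from_bytes(wire).walk())
print(parts[-1].get_payload(decode=True) == payload)  # True
```

The transport never stopped being ASCII; MIME just smuggles binary through it, so "text-only e-mail" stopped being a safety property the moment clients started acting on what was decoded.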
IsaacKuo wrote:I've actually used both Knoppix and Windows extensively, on a wide range of hardware. I'll take my first hand experience over your speculation.
Please explain how you think my experience is speculative and not personal. Anyway, you're welcome to take your opinion over mine; I wouldn't expect anything else, as I said previously. We disagree. I hope that's okay with you.
I deduce that you haven't used Knoppix, or that you haven't used it much, because the evidence you presented as a rebuttal was just some old Linux hardware compatibility web page.

Assuming this deduction is correct, then your experience is indeed speculative. One thing I've learned countless times is that regardless of the OS, hardware compatibility/incompatibility is not something you can theorize about. It's something you have to experience first hand (an experience usually accompanied by unexpected frustration and a desire to inflict physical violence upon the computer hardware in question).
Knoppix, as well as other liveCDs, does in fact have built-in capabilities to write personal data and settings--either within a subfolder on a local hard drive or a thumbdrive or both. They are actually meant to be used, not merely to be technology demonstrators.
Yes. I didn't dispute that. But Knoppix is, thus, vulnerable. Are you saying Knoppix is absolutely 100 percent unhackable in any way, that no security exploit could ever be written for Knoppix? Certainly not.
It's highly likely that for any given version of Knoppix, there exists some exploitable code somewhere in the many applications. Therefore, it's not 100 percent unhackable.

However, it's also highly unlikely that any of these exploits have anything to do with the ability to save personal data and settings on a drive. The only data stored is files within the /home/knoppix user directory and some specific ASCII text configuration files. Executable files aren't typically stored in a user's home directory, nor does anything automatically execute any files within the home directory.

It's kind of like the "My Documents" folder in Windows, except that several significant Microsoft applications are actually designed stupidly enough to auto-execute code within some types of documents (most famously Microsoft Word).

No, the exploits in Knoppix will be more generic Linux exploits, like a buffer-overrun exploit in some vulnerable video codec, or a web browser vulnerability.
IsaacKuo wrote:Of course they don't. Cell phones aren't Windows compatible and no one cares about that.
...what cell phone do you use?
I use an old Kyocera phone with very basic features. Most of my friends/family/acquaintances use more trendy cameraphones.
IsaacKuo wrote:If it can play music and videos, and it works--then great.
Music and video which...right, they often download from their Windows PCs.
Only among the computer savvy, and often not even then. The cell phone companies haven't gone out of their way to make things easy to use with computers. It's much easier just to use the cell phone provider's media service. For those who aren't computer savvy, it's the only option anyway.
Moreover, we weren't talking about the current crop of cell phones, we were talking about your notion of cellphones replacing the PC. And in that context, interoperability is essential from a marketing standpoint.
The current trend is proprietary services with limited computer integration. And who can blame them? A proprietary pay service is an extra revenue stream, while easy downloading of pirated media from a home computer isn't.

I'm personally holding out for a combo cell phone/portable USB drive/mp3/video player. At a reasonable price. I personally have no interest in proprietary pay media until they offer the content I desire (mainly Japanese music and videos). But, given the current trends and the current success of the proprietary offerings, I expect I've got a long wait ahead.

Engine
Posts: 118
Joined: Thu May 18, 2006 10:07 am

Post by Engine » Fri May 26, 2006 4:34 pm

mellon wrote:SSD is technologically already here, you can get 128GB in 2.5" form factor (a bit thicker than usual though) with IDE interface, 5 year warranty and pretty nice read/write speeds: http://www.m-sys.com/site/en-US/Product ... ra_ATA.htm Higher capacities can be found in the SCSI lineup.

The cost is unbelievably high though.
In case anyone was curious what Mellon meant by "unbelievably," I recently contacted Tri-M, one of M-Systems' distributors, for pricing. The 128GB commercial-temperature [read: not "extreme temperature"] SSD is a paltry US$19,278. US$8,256 for the 64GB version, and US$4,613 for the 32GB version.
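For anyone curious how that compares to raw flash, here's the quick per-gigabyte math on those quotes (figures are the per-unit prices above; this is just illustrative arithmetic, not vendor data):

```python
# Rough price-per-gigabyte math for the quoted M-Systems SSDs
# (per-unit quotes from Tri-M, as listed in the post above).
prices = {32: 4613, 64: 8256, 128: 19278}  # capacity (GB) -> US$

for gb, usd in sorted(prices.items()):
    print(f"{gb:>3} GB: ${usd / gb:,.2f} per GB")
```

Oddly, the 64GB unit is the cheapest per gigabyte at $129/GB, and every model runs four to five times the ~$30/GB raw flash price quoted in the original article, so packaging and margin clearly dominate.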

Now, these are per-unit prices, and not bulk prices, so maybe if we all went in on an order together, we could get the per-unit price down to a manageable thousand or so for the 32GB version. :lol:

Well, it was a nice dream for me, anyway. C'mon, Samsung!

Silverking
Posts: 2
Joined: Sat May 27, 2006 3:22 pm
Location: San Diego, CA
Contact:

Post by Silverking » Sat May 27, 2006 3:29 pm

I say just wait for this to be developed into mainstream products.

http://www.drexel.edu/univrel/dateline/ ... 0060508-01
:shock: :shock: :shock:
and maybe you can get a rebate by sending in all of your useless hdds to be recycled. 8)

highlandsun
Posts: 139
Joined: Thu Nov 10, 2005 2:04 am
Location: Los Angeles, CA
Contact:

Post by highlandsun » Sun May 28, 2006 1:31 am

Engine wrote:
highlandsun wrote:We shouldn't even have "boot drives" in the normal sense. My old Atari ST had its entire OS in ROM, just turn the thing on and the desktop is running. Hard to believe that with two decades of technological "progress" we've only managed to *slow boot times down* by a factor of 30 or so, along with expose ourselves to viruses of all kinds.
It's not really hard to believe once you realize that the operating systems of today are vastly more than 30 times more complex - and capable of doing vastly more than 30 times more - than the OS on the Atari ST. This isn't even a logical comparison.
No. Kernels are not 30 times more complex today than they were 20 years ago. The Atari had a GUI in ROM too, but I don't consider anything above the kernel to be "the OS." Microsoft's philosophy of cramming everything into the kernel, making everything run with SYSTEM privileges, is just f#cking stupid.
Engine wrote:
highlandsun wrote:(With an OS in ROM, obviously viruses cannot alter system code, so they can't propagate themselves as easily.)
You also cannot alter the system code, meaning your OS is unpatchable without a hardware update. Today's operating systems are simply too complex to be able to guarantee that the first version will be absolutely flawless. People can hack on Microsoft all they want, but this is true of *nix and every Mac OS. And since every PC will have to have some portion of storage which can be altered, leaving the OS untouchable simply moves the "target" to programs, which are as susceptible to alteration as the OS, if not more so.
No. In fact, I wrote about this 12 years ago: SCSI disks with write-protect jumpers.
Since Unix has a rational filesystem layout, separating system files, executable files, and data files, it's actually quite easy to boot up and run Unix from read-only filesystems. It's also quite easy to manage the integrity of the image: toggle back to write mode to install the updates you choose to install, then return to read-only mode. I did this research in the context of protecting Tripwire checksum databases, but there's been plenty of precedent for running secure, read-only systems.

MS-Windows, as another poster already pointed out, makes the same type of security painfully difficult to achieve, because it encourages software developers to throw everything - apps, OS modules, and user data - onto one big C: drive. Until recently you couldn't use mount points or anything else to rationalize the filesystem, and even now mount-point mapping is nowhere near as usable or useful as under Unix.
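The toggle described here is just a remount cycle. A minimal sketch, assuming system binaries live on their own partition (the device name /dev/sda2 and mount point are hypothetical; adapt to your layout):

```shell
# /etc/fstab entry keeping system binaries read-only by default:
#   /dev/sda2  /usr  ext3  ro,nodev  0  2

# To install an update you've chosen to trust, briefly allow writes:
mount -o remount,rw /usr

# ...apply the update here, then lock the partition down again:
mount -o remount,ro /usr
```

With the partition remounted read-only, nothing short of another root-level remount can alter the binaries, which is exactly the property that makes an integrity checksum database trustworthy between updates.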
Engine wrote:
highlandsun wrote:You can even get clever and set a write-protect bit on the snapshot, so that viruses etc. can't inject themselves into the OS without your knowledge. Trivially easy stuff, that the little-guy innovators were doing 20 years ago, that Microsoft still doesn't understand today. And so, the state of the art in computer science is retarded even further...
The "little-guy innovators" didn't have thousands of people writing viruses to infect their code. The "little-guy innovators" didn't have to write an OS that would run on hundreds of thousands of combinations of hardware. There's simply no comparison between computing 20 years ago - when viruses could only be transmitted, I remind you, by floppy and files you might suck off a BBS - and computing today. This is as fallacious as the Atari ST comparison. We can yearn for the "good old days," but they're simply not useful in discerning direct courses of action today; they are only useful references: the true solutions will have to be much more complex.

I know Microsoft is a favorite target, but the evidence simply doesn't support the charge of drastic retardation. I don't agree with everything they do, but it's simply not a case of "MS bad, little guy good." Oversimplification doesn't render problems into solutions; it simply makes for satisfying crucifixion.

You could get the security benefits of the ROM-OS - although not the boot speed - by simply running your OS from CD. But it won't matter, because virus writers simply move their target to applications. Computer security will never be "solved;" it will continually be a give-and-take between hackers and security professionals, because anything that can be altered intentionally - and computers are useless without some capacity for alteration - can be altered unintentionally. Even if every operation of the OS had to be confirmed by the user, the emphasis of virus writers would simply move to social engineering. "Your computer is infested with spyware! Click here to fix the problem!"

I wish there were simple solutions. I wish you could put the perfect OS on a ROM chip and leave it at that, buying OS updates on chip every so often, but it wouldn't actually fix anything. Instead, we need steady, manageable, incremental solutions, and constant vigilance and attention.
You call it "oversimplification" - that reflects a mentality raised on the Microsoft way, unable to see a different way. I call the Microsoft way "overcomplication" because I've been developing and running secure Unix systems for more than a decade, and it *is* that simple. Just like von Neumann CPU architecture - code is code, data is data. There is no reason to keep code and data together, there is no reason to keep them in the same kind of RAM, or even adjacent. Violate the data all you like, you can't break my code, you can't inject self-replicating code into my code, you can't insert password-stealing trojans into my code.
Engine wrote: Oh, on-topic: the instant I can buy an SSD with enough room to run my OS and my major applications for less than, say, US$300, I'll do so. Then my archival storage goes to a file server in the basement, and there's one more source of noise gone. Power loss is certainly a concern - "Oh. Whoops. I forgot to replace the SSD battery. There goes my damn Windows." - but that's really a flaw of the user, and not the hardware.
Yeah, I finally had to replace the batteries in my home UPS a few years ago. Once a decade isn't bad, and I had ample notice. When you apply a little bit of advance thought to your deployment, not even negligence is a problem.

That article about ferroelectric RAM was pretty heartening. That will be a good stepping stone towards computers that never need to reboot. The only problem, if that gains acceptance, is that if the most commonly used desktop OS is still being made by Microsoft, you'll actually *need* the ability to reboot, just to get back to a sane state.

highlandsun
Posts: 139
Joined: Thu Nov 10, 2005 2:04 am
Location: Los Angeles, CA
Contact:

Post by highlandsun » Sun May 28, 2006 1:46 am

clocker wrote:
highlandsun wrote: Well, a Mil-spec product is also going to use more expensive components than automotive spec or consumer grade.
[slight threadjack]This is not necessarily true at all.
In most cases what differentiates a "mil-spec" part from a normal consumer grade part is not the inherent quality/performance of the part itself, rather it's the incredible paper trail that documents that the supplied part is what it claims to be.
Assuming no intent to deceive (which is of course, a danger in any large project) a mil-spec part may actually be inferior to a cutting-edge consumer part simply because the newer, better component exceeds the contract specs or the vendor cannot/will not supply the required documentation.

This is how we end up with $300 toilet seats in government projects...they don't last any longer or fit your ass any better but there is an 8000 page file that documents every stage of the manufacture/delivery/install of the part.

The procurement department's ass is well covered, so to speak.[/threadjack]
I never said anything about the components being *higher quality* - as you can plainly see in the text you quoted I just said they were *more expensive*. But since you mention it, there *is* a significant quality difference for a lot of electronic components; many mil-spec components are certified to operate over a much wider temperature range than commercial grade. Many are certified to withstand much higher G-shocks on various axes. I'm not talking about toilet seats here, just talking semiconductors, which is all that matters in a discussion of SSDs.

highlandsun
Posts: 139
Joined: Thu Nov 10, 2005 2:04 am
Location: Los Angeles, CA
Contact:

Post by highlandsun » Sun May 28, 2006 1:58 am

inti wrote:Wow! That's a lot of posting in a few hours - slow afternoon at work, guys?

My views on the interesting off topic conversation:
Windows could be a lot less vulnerable to security threats if (a) it were obvious to a user which files and processes belong to the operating system and which do not, and (b) operating system, firmware and third-party settings were not all mixed in the same places (the registry, the System32 folder, etc.). Seems simply bad design. But until Windows Vista, Microsoft did not have security at the forefront of their design requirements. My friend is on the team at Microsoft that has recently been rewriting every library with security in mind, and apparently it is a major job.
Not just "seems bad design" - *is* bad design. Nobody designs computer hardware with such chaotic jumbling of purposes. Nobody ought to design computer software that way either. Microsoft "engineers" are either morons or asleep at the wheel.
inti wrote: Back on topic, the post I made above about hiberfil.sys probably will not work: at least, it will probably crash your system if there have been changes to the filesystem since the hiberfil.sys image was made - certainly if programs have been installed or uninstalled, maybe even if there have been changes to the registry.
Yeah, makes sense, that means the filesystem cache was stored in the hiberfil too. I was thinking you could still get by if you make sure that all 3rd party startup apps are disabled, so only Windows services are in memory at the time, but obviously the registry will remain a problem. It's always open, always in the buffer cache, and so any registry change will get trashed the next time you boot the hibernation image.

highlandsun
Posts: 139
Joined: Thu Nov 10, 2005 2:04 am
Location: Los Angeles, CA
Contact:

Post by highlandsun » Sun May 28, 2006 2:06 am

IsaacKuo wrote: (lots of good stuff omitted)
Moreover, we weren't talking about the current crop of cell phones, we were talking about your notion of cellphones replacing the PC. And in that context, interoperability is essential from a marketing standpoint.
The current trend is proprietary services with limited computer integration. And who can blame them? A proprietary pay service is an extra revenue stream, while easy downloading of pirated media from a home computer isn't.

I'm personally holding out for a combo cell phone/portable USB drive/mp3/video player. At a reasonable price. I personally have no interest in proprietary pay media until they offer the content I desire (mainly Japanese music and videos). But, given the current trends and the current success of the proprietary offerings, I expect I've got a long wait ahead.
Bleah, I'll pass on the MP3 player, I will never pay real money for "music" that's not all there. But I'd go nuts over a combo phone and universal remote for my home A/V system. Consolidating all of the remotes onto my HP-48GX was a good step forward, but I still have to keep that and a phone handy when I'm home watching a movie.

Post Reply