Here is a listing of various ideas I've had. Most of them are neat, some of them are crazy and the rest are just plain unlikely to happen for one reason or another. These are just blurbs so that I remember them and have some claim to fame; if you really want more information you're better off contacting me directly.
Electric heat is expensive, but in some parts of the world it's not unusual, either because of building design or normally moderate weather. Electric heat is an interesting case when it comes to efficiency: it doesn't really matter what you do with the electricity, because all of it ends up as heat. The only variable is the cost of the heating element itself. What if, instead of a normal resistive element, you used microprocessors?
With such heaters installed and a basic wifi network, these heaters could be doing useful and profitable work whenever heat is demanded. Perhaps they mine cryptocoins while heating your living room. Maybe they fold proteins while you sleep. If your house has gigabit or better connectivity, maybe they run short-lived spot VMs sold to the highest bidder.
Using last-generation chips these elements shouldn't be that expensive, and since the power is 'free' the computation would still be competitive with dedicated solutions. Computational heating would be a novel way to reduce net heating costs.
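As a rough illustration of the economics (every number below is an assumption for the sake of the arithmetic, not a quote), the key point is that every watt the processors draw ends up as heat anyway, so any revenue from the computation comes straight off the heating bill:

```python
# Back-of-the-envelope: the heat delivered is identical either way, so
# compute revenue directly offsets the heating cost. All prices assumed.

heater_power_kw = 1.5          # typical space-heater draw
hours_per_day = 8              # hours the thermostat calls for heat
electricity_price = 0.12       # $/kWh, assumed
compute_revenue = 0.04         # $/kWh of useful work sold, assumed

energy_kwh = heater_power_kw * hours_per_day
gross_cost = energy_kwh * electricity_price
net_cost = energy_kwh * (electricity_price - compute_revenue)

print(f"Heat delivered:   {energy_kwh:.1f} kWh/day (identical either way)")
print(f"Resistive heater: ${gross_cost:.2f}/day")
print(f"Compute heater:   ${net_cost:.2f}/day")
```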
Here is an entry I had written a while ago and thought I had posted.
Similar to the OpenFlow idea, I think you could create a distributed router quite easily. The advantages would be cheaper hardware and better redundancy all around, since more of the components could fail or be taken out of service while the network kept routing.
The idea is to move the smarts of the network out to the edges.
The core of the network ends up being very similar to a large Ethernet switch, possibly using commodity Ethernet switching hardware. No VLANs or anything of that sort are necessary, so fast switching should be easy. At its simplest the core appears as one unified, large Ethernet switch.
To accomplish this some special capabilities would be necessary:
The ability to have the output port programmed by an external entity
The ability to optionally replace the label
The ability to load balance over multiple links
The ability for the switches to report link utilization back to the edges
How it would work is that each port on the distributed router would contain the logic to route to any of the appropriate endpoints. Once this routing decision was made, the port would set the destination MAC address of the Ethernet frame to the appropriate endpoint or flow id.
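A rough sketch of that edge-port logic, assuming a made-up routing table and a convention where each egress port (or flow) is identified by a MAC address:

```python
import ipaddress

# Hypothetical edge-port routing table: each destination prefix maps to the
# MAC address (or flow id encoded as a MAC) of the egress port that should
# receive the frame. The dumb core then switches purely on that address.
ROUTES = {
    ipaddress.ip_network("10.1.0.0/16"): "02:00:00:00:00:0a",  # egress port A
    ipaddress.ip_network("10.2.0.0/16"): "02:00:00:00:00:0b",  # egress port B
    ipaddress.ip_network("0.0.0.0/0"):   "02:00:00:00:00:ff",  # default/uplink
}

def egress_mac(dst_ip: str) -> str:
    """Longest-prefix match the destination IP to an egress port MAC."""
    addr = ipaddress.ip_address(dst_ip)
    matches = [net for net in ROUTES if addr in net]
    best = max(matches, key=lambda net: net.prefixlen)
    return ROUTES[best]

def forward(frame: dict) -> dict:
    """Rewrite the Ethernet destination so the core delivers the frame."""
    frame["eth_dst"] = egress_mac(frame["ip_dst"])
    return frame

print(forward({"eth_dst": "ff:ff:ff:ff:ff:ff", "ip_dst": "10.2.3.4"}))
```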
There would then be one or more entities which constantly evaluated the loading of the network and changed the switch programming to optimize usage and handle outages.
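A toy version of that optimizing entity, assuming the switches can both report per-link utilization and accept new load-balancing weights (the control channel here is entirely made up):

```python
# Hypothetical flow optimizer: switches report per-link utilization, and the
# optimizer shifts flows away from hot links by recomputing load-balancing
# weights. "program_switch" stands in for whatever control channel the real
# switches would expose.

def rebalance(utilization: dict[str, float]) -> dict[str, float]:
    """Give each parallel link a weight proportional to its spare capacity."""
    spare = {link: max(0.0, 1.0 - used) for link, used in utilization.items()}
    total = sum(spare.values()) or 1.0
    return {link: s / total for link, s in spare.items()}

def program_switch(weights: dict[str, float]) -> None:
    for link, weight in weights.items():
        print(f"link {link}: send {weight:.0%} of new flows")

# Reported utilization (fraction of line rate) for three parallel core links.
program_switch(rebalance({"core-1": 0.90, "core-2": 0.40, "core-3": 0.10}))
```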
There would also be a redundant central control unit which runs the routing protocols and handles configuration of the distributed router. Likely this would sit near the flow configurator.
It should be possible to have an entire large network operate as one of these distributed routers. Servers which require high bandwidth could have the necessary daemon/module loaded to gain the software equivalent of a router port; this is cheap and fast, merely one more layer of routing.
All the hardware can be quite cheap. Since each port only needs enough CPU power to operate at line rate for that one port, it is quite possible to handle Gigabit Ethernet solely in software. A hardware fastpath may be required for the faster interface speeds.
It would be possible to use standard switches via the normal Ethernet MAC-learning mechanism, but you wouldn't have full control over the flows and so would get less performance. This could still be useful: you could place the special controlled switches at key points and use normal switches between them. Each normal switch would appear as a link between two flow routers (via one port, so you could have multiple normal-switch paths), with the added ability of letting other traffic enter the network there. It thus becomes possible to use existing, cheaper switches to carry both the flows and normal traffic (on separate VLANs). It also means there are more ports available for bringing dedicated routers online.
That is, a server of some description could run the routing protocols itself and not require a separate port, thus attaching to the network at line rate for little cost.
Canada once grew a good supply of sugar beets, which were used to produce sugar. Brazil has a healthy ethanol fuel industry using sugar cane as its feedstock. There doesn't seem to be any reason Canada couldn't grow sugar beets again and turn them into ethanol for fuel. Sure, corn is a terrible feedstock, but a genuine source of sugar should be much cheaper.
Given the speed of modern virtualization systems it should be possible to create a VM with idealized virtual hardware. This would allow applications to run much as they did in the DOS days, when each application had only the abstraction it required. I believe that eliminating most of the abstractions which have built up over the decades would allow higher performance.
When it comes to long term archival of digital data there are two major problems: data format and media readability.
Data formats change with time and new software loses the ability to read old formats. This means the WordPerfect file of the 80s isn't readable by modern software today. The only solution here is to choose the archival format carefully: a common, published standard such as JPEG or ASCII.
Media readability has two factors: compatibility and degradation. Does your current computer still have a floppy drive? Could you still connect a floppy drive if you knew where to find one? As time passes the readers for media become obsolete and fade out of common usage. If the archived data is stored on a medium which can no longer be connected then it is no longer readable.
Media degradation is also a problem. With the passing of time most digital media become more difficult to read in the same way that photographs become more difficult to view due to fading. Floppy disks and hard drives demagnetize, burned CDs and DVDs rot and oxidize, flash simply forgets. You can't read data which is no longer there.
As a solution to this I propose a box made up of PROMs. The current state of the art in PROMs is not dense enough for this use, but I believe the density could be increased significantly. These high-density PROMs would be put into a form factor similar to large external hard drives. For commercial use the drive should be equipped with a USB connection. However, every drive should also have a simple internal interface suitable for reverse engineering should USB become uncommon.
Throw a reasonable filesystem on there, perhaps with some additional hardware overwrite protection, and you would have an excellent archival solution suitable for several decades of storage. If the right materials are chosen this media might even be usable for more than a century.
A large webmail provider (probably only Google would consider doing it) should automatically create an OpenPGP key for every account. This key should be signed by the provider's key to prove that the email address is correctly associated with that key. Then every message sent from that provider should automatically and transparently be signed with the account key. Additionally, every incoming message should have its signature verified, again transparently.
One point of this exercise is to reduce spam by some amount by showing big warnings when the signature does not verify; spammers would no longer be able to forge sender addresses effectively. It would also increase the general security of email worldwide as signed email became normal.
Most importantly it must be transparent. If a message is received for which there is no key then display no warnings. If the signature is correct, then display no notices. Only notify the user when something is amiss.
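A minimal sketch of how the transparent signing and verification could look, using the python-gnupg bindings; the key store location, the use of the account address as the key selector and the warning text are all illustrative assumptions:

```python
import gnupg  # python-gnupg; assumes a key for the account already exists

gpg = gnupg.GPG(gnupghome="/var/mail/keys")   # hypothetical provider key store

def sign_outgoing(body: str, account: str) -> str:
    """Transparently sign every outgoing message with the account's key."""
    return str(gpg.sign(body, keyid=account, detach=False))

def check_incoming(raw_message: str):
    """Return a warning string only when something is actually amiss."""
    if "BEGIN PGP SIGNED MESSAGE" not in raw_message:
        return None                      # unsigned mail: stay silent
    result = gpg.verify(raw_message)
    if result.valid:
        return None                      # good signature: stay silent
    return "Warning: this message claims to be signed but the signature fails."
```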
This won't solve all the problems with email, but it is a large step in the right direction.
Picture, in your mind's eye, a tablet computer the size of a standard pad of paper. For those of us in North America this is a pad of letter paper. This tablet computer uses primarily stylus input. You use it mostly as you would a pad of paper: you write, you scribble, you think. The tablet tries to maintain all the advantages of paper, free-form input and hand-drawn diagrams, and it will run for days without charging.
But this tablet is not paper and thus can be better. No more must you carry pens of several colours; the device will simply display the stylus marks in the colour of your choice. No more are you restricted in the number of pages you can carry around; the tablet will hold hundreds of thousands. You are not restricted to simply moving forward or backward between pages; you can group pages by topic and navigate between them as easily as you do a bookshelf. No more must you manually copy a page in order to move portions around; you can select free-form areas to copy or move. No more must you reach for a calculator to do computation; you can select a block of text and have the computer interpret and solve it.
This last point is key. This is no simple store-and-retrieve tablet. It can compute anything from simple addition, through solving systems of equations and calculus, up to interactive three-dimensional graphs and charts. And simply because it is easy, you can also write and run small snippets of code.
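As a sketch of what could sit behind the "interpret and solve" step, here is roughly how a selection might be handled using SymPy, with a deliberately naive heuristic for telling equations from plain arithmetic:

```python
from sympy import Eq, solve, sympify

def interpret(selection: str):
    """Interpret a selected block of text as arithmetic or as an equation."""
    if "=" in selection:
        lhs, rhs = selection.split("=", 1)
        return solve(Eq(sympify(lhs), sympify(rhs)))   # solve for the unknown
    return sympify(selection)                          # plain arithmetic

print(interpret("12 * 4 + 5"))       # -> 53
print(interpret("x**2 - 9 = 0"))     # -> [-3, 3]
```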
This is not a useless tablet good only for surfing the web and catching movies. This is a Work Tablet. Just like your grandfather had, only better.
It seems that algae blooms which cause dead zones are becoming an increasingly large problem, driven in large part by fertilizer runoff from agriculture. It may even be commercially viable to filter the algae out of these dead zones and sell it as fertilizer.
At the moment there is a rampant problem with fraud in point-of-sale transactions: card cloning, repeated transactions and more. Both debit and credit cards are affected. The root problem is that the consumer is expected to trust the terminal presented to them, and it is well known that there is only weak security when dealing with untrusted hardware. The solution, a fraud-proof system, involves providing the user with a minimum of trusted hardware such that they can be assured of the value of the transaction, the identity of the payment server and the security between them. All it requires is one microprocessor, a multi-digit seven-segment LCD and a button.
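To show how little the trusted side needs to do, here is a sketch assuming the device shares a key with the payment server and authenticates the displayed amount with an HMAC over a fresh nonce; the message layout and key provisioning are made up for illustration:

```python
import hmac, hashlib, os

# Hypothetical trusted token: it shares a key with the payment server, shows
# the amount on its own display, and only after the button press emits an
# authentication code binding the amount to a fresh nonce. The terminal in
# between never sees anything it could replay for a different amount.

DEVICE_KEY = os.urandom(32)        # would be provisioned by the bank

def confirm_transaction(amount_cents: int, server_nonce: bytes) -> bytes:
    print(f"DISPLAY: PAY {amount_cents / 100:.2f}?  [press button to accept]")
    # ... user presses the physical button here ...
    message = amount_cents.to_bytes(8, "big") + server_nonce
    return hmac.new(DEVICE_KEY, message, hashlib.sha256).digest()

def server_verify(amount_cents: int, server_nonce: bytes, code: bytes) -> bool:
    message = amount_cents.to_bytes(8, "big") + server_nonce
    expected = hmac.new(DEVICE_KEY, message, hashlib.sha256).digest()
    return hmac.compare_digest(expected, code)

nonce = os.urandom(16)
code = confirm_transaction(2499, nonce)
print("server accepts:", server_verify(2499, nonce, code))    # True
print("tampered amount:", server_verify(9999, nonce, code))   # False
```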
When programming I often find myself wishing that the standard or de facto library contained some useful feature, such as basic graph operations. These features are not necessarily common, but they are not uncommon either, and they have well-known solutions. Yet even though the solutions are well known, the implementations are continually recreated.
What I want is a single library which contains all the well-known solutions to problems. More than that, I want this library to be multi-level: it would contain low-level functionality (such as simple collections), medium-level functionality (graphs and graph operations) and high-level functionality. All these levels should be visible at the same time, and the higher levels of the library should be implemented in terms of the exposed lower levels.
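A toy illustration of that layering, with the graph kept as a plain collection, a graph operation built on it, and a higher-level operation built on the exposed lower one (the particular operations are just examples):

```python
from collections import deque

# Low level: the graph is nothing more exotic than a dict of adjacency sets.
Graph = dict[str, set[str]]

# Medium level: a graph operation built directly on the low-level collection.
def reachable(graph: Graph, start: str) -> set[str]:
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        for neighbour in graph.get(node, set()):
            if neighbour not in seen:
                seen.add(neighbour)
                queue.append(neighbour)
    return seen

# Higher level: expressed in terms of the exposed medium-level operation.
def connected_components(graph: Graph) -> list[set[str]]:
    components, unvisited = [], set(graph)
    while unvisited:
        component = reachable(graph, next(iter(unvisited)))
        components.append(component)
        unvisited -= component
    return components

g = {"a": {"b"}, "b": {"a"}, "c": set()}
print(connected_components(g))   # -> [{'a', 'b'}, {'c'}] (order may vary)
```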
Such a system with an appropriate language behind it should be able to raise programming out of its current low level of abstraction to ever increasing heights as operations of greater complexity become part of the library.
There are many situations where there is sufficient noise to require earplugs. There are also many situations where you should wear earplugs, but the loss of spatial awareness is just too expensive. What the world needs is an earplug which reduces the volume of the world and ensures that the maximum noise level stays below the safe limit.
I would imagine this involves a non-linear reduction in amplitude with a hard maximum. Perhaps it could even amplify quiet sounds. If most or all sounds are reduced in amplitude it might even be possible to do without a battery.
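The transfer curve might look something like the sketch below: quiet sounds pass through, louder sounds are compressed, and nothing exceeds a hard ceiling. The knee, ratio and ceiling values are illustrative assumptions, not measurements:

```python
# Possible level-dependent attenuation curve for the earplug.

def output_level_db(input_db: float,
                    knee_db: float = 70.0,
                    ratio: float = 3.0,
                    ceiling_db: float = 85.0) -> float:
    if input_db <= knee_db:
        return input_db                          # pass quiet sounds through
    compressed = knee_db + (input_db - knee_db) / ratio
    return min(compressed, ceiling_db)           # never exceed the safe limit

for level in (40, 70, 90, 110, 130):
    print(f"{level:3d} dB in -> {output_level_db(level):5.1f} dB out")
```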
This is an idea I had a while ago for a dockable computer which acted very much like a smartphone, but could be moved between different docks for larger screens, full size input devices, more memory and computing power. View the PDF.
Terribly boring people who think UML will save us all have had some thoughts about coding to a contract. Furthermore, with some tool magic and a bit of bondage in the language you can even have an automated process take the contract and produce test cases automatically. This is a tedious process. I don't think it's without merit, though; it's just that all the hard work is done by the programmer and not the computer.
I propose, instead, that you have an automated process which takes a function and produces a contract which can then be reviewed by the programmer. This requires that the code is built bottom up and that most or all of the library functions either have source available or have appropriate contracts already produced.
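One naive way such a process could propose a contract is to run the function on sample inputs and record what always held, loosely in the spirit of Daikon-style invariant inference. The specific clauses checked here are just examples, and a real tool would work from source rather than execution alone:

```python
# Run the function on sample inputs, keep only the properties that held on
# every run, and present those as candidate contract clauses for review.

def propose_contract(func, samples):
    observations = [(args, func(*args)) for args in samples]

    clauses = []
    if all(isinstance(r, int) for _, r in observations):
        clauses.append("returns an int")
    if all(r >= 0 for _, r in observations):
        clauses.append("result >= 0")
    if all(r >= max(a) for a, r in observations):
        clauses.append("result >= max(arguments)")
    return clauses

def clamp_sum(a, b):
    return max(0, a + b)

print(propose_contract(clamp_sum, [(1, 2), (5, -3), (-4, -6), (0, 0)]))
# -> ['returns an int', 'result >= 0']  (candidate clauses for the programmer)
```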