A few weeks ago, something I’ve been mulling over for a while came back into my thoughts: what if, instead of multithreading at the application level, we dedicated a core of a multicore processor to a specific OS resource?
What kind of OS resource? I don’t know. Maybe four cores reserved for ring zero, or for the I/O subsystems. We could have application-specific cores optimized for particular software tasks, much as graphics processors are combined with CPU cores in an APU (accelerated processing unit).
I’m certainly not a system designer, and the above is probably pure drivel. For instance, how would context switching work without slowing everything down? Still, there are reasons why perhaps it’s not:
- Number of cores will keep growing.
- GPUs will become more important and powerful.
- Future applications will be distributed and require advanced resource handling — for example, hierarchical, robust device subsystems such as memory handling.
- Thermal limitations will demand new forms of optimization.
- AI and robotics applications will reach a tipping point and become ubiquitous, and even more performance will be demanded from computing systems.
After thinking about this, I later started seeing articles on internet-on-a-chip, a new way of interconnecting multicore chips. Interesting.
- Mesh net ties Internet-on-a-chip with multi-cores
- Accelerated processing unit
- Multi-core processor
- Pluggable End User Processors
- Parallel Threaded Interpretation of Sequential Code
- FORTH language processor on comet
- Why not store numbers as diff of previous number?
- Contextual State Transfer