Saturday, December 19, 2009

Why Aren't We Pouring More Efforts Into This?

There has been a lot of progress lately in the area of carbon nanotube (CNT) research. How about a look at CNT transistors?
Carbon nanotubes are a promising material for making display control circuits because they're more efficient than silicon and can be arrayed on flexible surfaces. Until recently, though, making nanotubes into transistors has been a painstaking process. Now researchers at the University of Southern California have demonstrated large, functional arrays of transistors made using simple methods from batches of carbon nanotubes that are relatively impure.
When the industry gets this worked out, we will have much faster transistors capable of very high heat dissipation. Wouldn't it be nice to have high-power electronics that were loafing along at 200°C?

And though the process control for making carbon nanotube and graphene materials is in its infancy, the results are looking better all the time.
Single-walled carbon nanotubes can be classified as either metallic or semiconducting, depending on their conductivity, which is determined by their chirality. Existing synthesis methods cannot controllably grow nanotubes with a specific type of conductivity. By varying the noble gas ambient during thermal annealing of the catalyst, and in combination with oxidative and reductive species, we altered the fraction of tubes with metallic conductivity from one-third of the population to a maximum of 91%.
Carbon nanotube conductivity has been measured at around 5X that of copper. Think of what could be done with low-weight, high-strength, high-conductivity wire. Motors. Transformers. Transmission lines. Antennas. Etc.

Fortunately, there have been some breakthroughs in the wire-making area.
A new method for assembling carbon nanotubes has been used to create fibers hundreds of meters long. Individual carbon nanotubes are strong, lightweight, and electrically conductive, and could be valuable as, among other things, electrical transmission wires. But aligning masses of the nanotubes into well-ordered materials such as fibers has proven challenging at a scale suitable for manufacturing. By processing carbon nanotubes in a solution called a superacid, researchers at Rice University have made long fibers that might be used as lightweight, efficient wires for the electrical grid or as the basis of structural materials and conductive textiles.

Others have made carbon-nanotube fibers by pulling the tubes from solid hair-like arrays or by spinning them like wool as they emerge from a chemical reactor. The problem with starting from a solid, says Rice chemical engineering professor Matteo Pasquali, is that "the alignment is not spectacular, and these methods are difficult to scale up." The better aligned and ordered the individual nanotubes in a larger structure, the better the collective structure's electrical and mechanical properties. Using the Rice methods, well-aligned nanotube fibers can be made on a large scale, shot out from a nozzle similar to a showerhead.
So my question is: with all the money going out to the bankers, why isn't more being spent on science like this that will actually make a difference?

Wednesday, November 18, 2009

Engines Of Prosperity

In my last post I discussed Forth as a language.

A language that is based on a virtual machine. What if that virtual machine were turned into a real machine? Good things. For one, operations can be done in parallel. Returns can be automatically initiated at the end of an instruction cycle. And except for a few special cases, Forth machines are two-stack, zero-operand machines. Thus instruction bits that would otherwise be needed to designate registers are freed up for other uses. The two stacks are the return stack and the data stack. Because return addresses have a stack of their own, data does not need to be flushed from the data stack on a return. Which means you can nest subroutines easily, and upon return the data required for the next operation is at the top of the stack. The process is not totally automatic, but it is nearly so. As you can imagine, eliminating a stack thrash on return from a subroutine is a very good idea. And having the data right where you need it for the next operation is a time saver too. Another time saver: because "registers" are actually stack items, you can have as many "registers" as you need just by making the stacks deeper. At least if you are designing with an FPGA.
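
To make that concrete, here is a minimal sketch in ordinary Forth (the word names are my own, purely for illustration). Notice that no instruction names a register: every word takes its operands from the top of the data stack and leaves its result there, so when a nested word returns, its result is already sitting where the next operation needs it.

    \ Zero-operand code: all operands are implicit on the data stack.
    : squared        ( n -- n*n )  dup * ;
    : sum-of-squares ( a b -- a*a+b*b )
        squared          \ b is consumed; b*b is left on the data stack
        swap squared     \ a*a now on top, b*b beneath it
        + ;              \ add the two partial results
    3 4 sum-of-squares . \ prints 25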

But first, a nod to the man who kicked all this off: Charles Moore.

Masterminds of Programming: Conversations with the Creators of Major Programming Languages (Theory in Practice) is a book about a number of programming language designers and how they made the decisions they did. One reviewer had this to say about the interview with Moore: "The interview with charles moore is completely insane, in a good way."

Forth machines come in many flavors. Phil Koopman, in his book Stack Computers: The New Wave, discusses the design issues in building stack machines and gives a number of examples of machines that have been built. You can read Phil's book for free online at Stack Computers - Phil Koopman's Page, or download a copy from the same page.

From: A very short bio of Charles Moore

In 1983 Moore founded Novix, Inc., where he developed the NC4000 processor. The design was licensed to Harris Semiconductor, which marketed it as the RTX2000, a radiation-hardened stack processor that has been used in numerous NASA missions. The RTX2000 patent is number 4,980,821, filed on March 24, 1987 and issued on December 25, 1990, so by any measure it has expired. You can look it up at the US Patent Office.

Here is a link to the RTX2010 data page. The device is no longer in production.

To get the pluses and minuses of such a design, see Phil Koopman, who, among others, compares the RTX2000 to other architectures of its day (1992).

So what do you do if you want a Forth processor these days? You get out an FPGA and program it. Because the design of the processor is so simple, it is easy to implement and test, and it doesn't use a lot of gates. With the stacks internal to the machine there is no waiting to get stack data. And since internal stacks can be circular, there is no need to flush a stack (change the stack pointer) when you have no further need for the stored data.
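
To see why, here is a sketch in Forth of an eight-deep circular stack, the kind of structure you might put in FPGA block RAM. The names and the depth are my own assumptions, not any particular chip's. The pointer wraps modulo the depth, so a push past the top silently reuses the oldest slot and nothing ever has to be flushed:

    \ An 8-deep circular stack: the pointer wraps modulo 8 (7 and).
    create cstack 8 cells allot   \ storage for eight cells
    variable csp  0 csp !         \ the circular stack pointer
    : cpush ( n -- )  csp @ 1+ 7 and dup csp !  cells cstack + ! ;
    : cpop  ( -- n )  csp @ cells cstack + @  csp @ 1- 7 and csp ! ;

In hardware the wrap is free: a 3-bit pointer addressing an 8-deep stack rolls over to zero on its own.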

So where do you go for such a machine? I have worked with a very nice 16-bit Forth machine that John Rible designed in about 1998. John did the architecture and Cadence did the implementation. John currently does Forth chips in Verilog for FPGAs; his www site is Sandpipers. Another place to get a Forth chip is opencores.com, where a Forth core is available for download.

Of course John can help with architecture, as can I. I did a few tweaks on the processor John designed, and I have a few ideas of my own for a 32-bit machine.

So how about an assembler/Forth for such a machine? It is pretty easy to write one in Forth. Or Forth Inc. will do the job for you; you can contact them at Forth Inc. Or you could ask me. And think about it: for many of the basic instructions, the assembly code maps directly to the machine code. Pretty slick, and it runs fast too.
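
As a taste of how little there is to such an assembler, here is a minimal sketch in Forth. The opcodes and word names below are invented for illustration; a real chip's encodings will differ. Each assembler word simply appends its machine code to a target image, so the mapping from assembly source to machine code is one cell per instruction:

    \ A toy cross-assembler: one cell of machine code per instruction.
    create image 256 cells allot   \ the target memory image
    variable there  0 there !      \ next free slot in the image
    : t,  ( opcode -- )  image there @ cells + !  1 there +! ;
    : op: ( opcode "name" -- )  create ,  does> @ t, ;

    hex
    6081 op: dup,   \ hypothetical opcode for DUP
    6203 op: +,     \ hypothetical opcode for +
    decimal

    : 2*,  dup, +, ;  \ a macro: assemble DUP then + to double the top of stack

Note that instructions and macros are defined with the same machinery, which is a large part of why writing such an assembler is so easy.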

If you need help with your Forth chip design, or even tech writing, you can contact M. Simon by getting his e-mail from the sidebar at IEC Fusion Technology.

Monday, September 07, 2009

Pay Pal

Donating via the button PayPal produces is easy.

Writing is how I earn a living. If you like what you read here:
Make A Donation Today

[PayPal donation buttons appeared here, including $25 monthly and $50 monthly subscription options.]
Monday, January 05, 2009