This paper was published in Nature last month [0]. It mainly focuses on machine learning algorithms for near-term universal quantum computers (tens to hundreds of qubits). It also covers machine learning algorithms for quantum annealers like D-Wave's.
It was verified as part of the VLISP project. So I've been collecting papers on PreScheme, Scheme48, and VLISP in case anyone wants to redo them with modern techniques. Another commenter, hga, helped me find a few I didn't have before with the specific algorithms. The basic ones are at the bottom here:
PreScheme itself is possibly worth reviving, especially with reference counting or Rust-style no-GC safety. I know another Lisp, Carp, is trying to do something like the latter.
Scheme allows practically unlimited language extension thanks to its syntax (this is true of Lisp as well, but I prefer Scheme for its lexical scoping, a minimal standard like R7RS, etc.). (Note that I'm a big advocate of Python-C hybrid programming too, so for me it's either Python-and-C or Scheme-and-C.)
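As a tiny illustration of that extensibility (my own sketch, not from any particular codebase): a `while` loop, which R7RS doesn't ship, added to the language in a few lines of syntax-rules.

    ;; A while loop, defined as an ordinary library macro.
    (define-syntax while
      (syntax-rules ()
        ((_ test body ...)
         (let loop ()
           (when test body ... (loop))))))

    (define i 0)
    (while (< i 3)            ; prints 0, 1, 2
      (display i) (newline)
      (set! i (+ i 1)))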
Moving towards Scheme-to-C is moving towards solving all the performance/concurrency/what-have-you problems of a high-level language like Scheme without having to drop down to a low-level language like C (or assembly) by hand. (I should add that there might also be a need for generating GPU code, e.g., OpenCL. Just a heads up that I'm not trying to ignore that.)
What I say has to be qualified with two very important assumptions:
- I believe 'large-scale programming' is 'research'. E.g., if the system you're building is under 20k-50k lines of code, give or take, it largely doesn't matter what paradigm or language you're using. But for larger systems, the best way forward is to treat such a complex system as a research topic: work at a higher level of abstraction, watch for opportunities to create new DSLs, and so on. (I believe this is part of the reason large projects do better in open source: Firefox, Linux, Chromium, etc. No manager is watching over your shoulder with a stick and a deadline on their notepad.)
- You have to think in terms of meta-programming. It comes in many flavors, but take code generation: think of the whole purpose of your Scheme code as letting you generate human-readable C code (a tiny sketch follows this list). For me, meta-programming means never having to write your program (here, the program is the C code), but using tools and machines that write the program for you (here, the tool and machine is Scheme).
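To make that flavor of code generation concrete, here is a minimal sketch (all names hypothetical; this is not from any real project): an s-expression description of a C function, and a Scheme emitter that prints it as clean C.

    ;; Portable helper: join strings with a separator.
    (define (string-join strs sep)
      (cond ((null? strs) "")
            ((null? (cdr strs)) (car strs))
            (else (string-append (car strs) sep
                                 (string-join (cdr strs) sep)))))

    ;; Emit a C function from a description: name, return type,
    ;; ((arg-name . arg-type) ...), and a list of body lines.
    (define (emit-c-function name ret-type args body-lines)
      (string-append
       ret-type " " name "("
       (string-join (map (lambda (a) (string-append (cdr a) " " (car a)))
                         args)
                    ", ")
       ")\n{\n"
       (apply string-append
              (map (lambda (line) (string-append "    " line "\n"))
                   body-lines))
       "}\n"))

    (display (emit-c-function "square" "int" '(("x" . "int"))
                              '("return x * x;")))
    ;; prints:
    ;;   int square(int x)
    ;;   {
    ;;       return x * x;
    ;;   }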
E.g., take a very large system like the Linux kernel. It's ~90% C with some assembly. A big motivation for my ideas comes from asking the question: 'Can we develop and work at a high level of abstraction in Scheme, in a codebase of ~100K to ~200K lines, that, using the ideas of meta-programming, compiles down to ~20M lines of C which is functionally equivalent, and equally readable, compared to the current ~20M lines of the Linux kernel?' (Note the Scheme code being 1% the size of the C code!)
When you think along those lines, and realize that 21st-century large-scale programming is just getting started and something like the Linux kernel is just the tip of the iceberg, you would probably reach many of the conclusions and beliefs that I gravitate towards.
Note that my ideas are in stark contrast to those of people who think the problem is that we need a new programming language. In that regard, I'm anti C++, D, Go, Rust, Swift, Kotlin, etc. We don't need a new programming language; we need to treat the activity of programming as a research activity. (Again, I'm talking about large-scale.)
They're not kidding when they call it an ecosystem for developing new languages (indeed, Racket itself is on some bleeding edges of language research, notably sound gradual typing in the ongoing Typed Racket project). Many Racketeers are researchers who take metaprogramming very seriously.
Racket's current JIT compiler is built on GNU lightning [1], but Racket is migrating onto the recently much-less-encumbered Chez Scheme (thanks, Cisco!).
Lightning and Chez both provide AOT features that Racket can use, although neither can really do source-to-source translation of Racket to C.
If you really want a modern Scheme->C compiler, then Chicken Scheme (https://www.call-cc.org/) is your best bet. It gains some advantages by using (highly unidiomatic) C as an intermediate language, but unfortunately then has to contend with C's idioms and constraints, especially C's runtime, which strongly constrains Chicken's approaches to concurrency.
I find it difficult to imagine that anyone will sit down and write a Scheme->C compiler that produces human-readable C. You can see for yourself what Chicken emits, and that's still much more readable than the output of the long-dead DEC Scheme->C.
Lastly, doing serious systems programming in Scheme is feasible and has even been done. Sadly, one extremely interesting example of this is decaying at http://www.scsh.net/ (the links to Shivers's papers are broken, and frankly they were the most entertaining part of SCSH), although some ideas originating in SCSH have been ported to other Scheme implementations.
Chez isn't always the fastest, but it's the fastest so often that I wouldn't look much beyond it if performance were of major importance to me. It's been written about here on HN before, but the next major version of Racket will be based on Chez. Chez will get Racket's package management, libraries, and modern tooling, and Racket will (presumably) be able to capitalize on Chez's performance.
I've received this criticism of my approach before.
Note I'm talking about not just C code-gen, but human-readable C code-gen. If it's not human-readable, then I agree with you: we might as well generate machine code and become full-time Scheme programmers.
But if it's human-readable C, as if the code had been written by a C programmer with clean coding practices, then it makes a big difference.
You might ask, "but why human readable?"
I think the answer is that you can then work on that C project directly: try out ideas, pick the one that works best, then go back to Scheme to see how you can generate what you just tried out. It's a long-winded way of doing things, but you're doing research; you're not simply taking a hand-written change-set in C and writing the simplest Scheme code that generates it (e.g., dumping a sequence of string literals to stdout). You are exploring the higher abstractions of the Scheme code, and finding out how the new code that generates the new change-set fits in with the rest of the Scheme code.
Take 'logging', for example. Logging is orthogonal to the functional core of C code. A well-written, fully functioning 10k-line C program could easily become 30k-50k lines simply by adding comprehensive logging, without adding a single other feature.
If the Scheme code generator has a logging module, it would not contain logging code verbatim; it would take the non-logging functionality of the program as input, analyze it, and emit logging code as output. Furthermore, you could disable logging code-gen, generate the original 10k lines, and study that C code instead; it would help a lot with readability (you're not wading through 10 lines of logging code for every 2 or 3 lines of actual code).
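A sketch of what such a toggle could look like (names hypothetical; LOG_DEBUG stands in for whatever logging macro the generated C would use):

    ;; When *emit-logging?* is #f, the generator emits the bare call
    ;; and you get back the readable, logging-free program.
    (define *emit-logging?* #t)

    (define (emit-call fn-name args)
      (let ((call (string-append fn-name "(" args ");")))
        (if *emit-logging?*
            (string-append "LOG_DEBUG(\"calling " fn-name "\");\n"
                           call "\n"
                           "LOG_DEBUG(\"" fn-name " done\");")
            call)))

    (display (emit-call "init_device" "dev, flags"))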
Similarly with debuggability: Scheme code could insert debugging code at the relevant points. Heck, we're already using Scheme as a tool, a machine that helps our coding; why not use it as an automation agent too, i.e., give it some directions at the start and let it iterate through a debugging process, triangulating the exact line of C code which, e.g., causes a segmentation fault.
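A hypothetical sketch of that kind of automation: bisect over regions of the generated program, regenerating with instrumentation narrowed each round, until the faulting region is pinned down. Here regenerate-and-run is a stub standing in for "emit C with checks enabled for this region, compile, run, report whether the fault was observed".

    ;; Stub: pretend the fault is observed whenever generated
    ;; function #13 falls inside the instrumented region [lo, hi).
    (define (regenerate-and-run lo hi)
      (and (<= lo 13) (< 13 hi)))

    (define (bisect-fault lo hi)
      (if (= (+ lo 1) hi)
          lo                                  ; narrowed to one unit
          (let ((mid (quotient (+ lo hi) 2)))
            (if (regenerate-and-run lo mid)
                (bisect-fault lo mid)
                (bisect-fault mid hi)))))

    (display (bisect-fault 0 100))            ; => 13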
C code is full of error checks. Every function call is followed by "if (error1) { do_this(); } else if (error2) { do_that(); }". Want to read the code without the error checks? Just re-generate it with error-checking disabled. Done.
Even more: you can use Scheme to generate various kinds of data for analysis. E.g., you want to visualize the call graph of the C code? You could generate a graph data file and run it through Graphviz. (I know gcc does that already, but then what is gcc? An automation tool for your C program! All I'm saying is that all these things can be done in a much better way in Scheme.)
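For instance (a hypothetical sketch; in practice the call table would come from the generator's own knowledge of the program it emitted):

    ;; caller -> callees, recorded while emitting the C code.
    (define call-table
      '((main init_device run_loop)
        (run_loop read_input handle_event)
        (handle_event log_event)))

    ;; Emit Graphviz DOT for the call graph.
    (define (emit-dot table)
      (display "digraph calls {\n")
      (for-each
       (lambda (entry)
         (for-each (lambda (callee)
                     (display "  ") (display (car entry))
                     (display " -> ") (display callee) (display ";\n"))
                   (cdr entry)))
       table)
      (display "}\n"))

    (emit-dot call-table)   ; pipe the output to: dot -Tpng -o calls.png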
(Again: think of Scheme as a tool and an automation agent, and the C code as the actual program!)
I know Nim, but I moved away from it after trying it out briefly.
I may not have emphasized it in my comment, but human-readable code-gen is very important.
Think of Scheme as just a tool to generate C code; the C code is the actual code you wish to deal with (debugging, benchmarking, trying out variations, etc.). If the generated code is not human-readable, then you might as well forget about C and become a full-time Scheme programmer.
Your GitHub account [0] gives Common Lisp some love. Why did you choose to use this language? Specifically, for the implementation of your Quantum Virtual Machine?
When we started thinking about quantum programming languages, we didn't know what they should look like. Experimenting with different languages efficiently requires a language that makes language-building easy. I find that there are two classes of languages that provide that: Lisp-likes with metaprogramming and ML-likes with a good type system including algebraic data types.
Quil [0], our quantum instruction language, came out of language experimentation in Lisp. In fact, the Quil code:
    H 0
    CNOT 0 1
    MEASURE 0 [0]
used to look like this:
    ((H 0)
     (CNOT 0 1)
     (MEASURE 0 (ADDRESS 0)))
This was a no-hassle way to play around without having to wrangle lex and yacc and their shift/reduce and reduce/reduce issues. If you've ever used them, you know it takes a while to get what you want and to integrate it into the product you're trying to create.
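That's because in Lisp the reader already parses s-expression syntax into data you can walk directly, so a quoted list is a parsed program. A minimal sketch (my own, in Common Lisp):

    ;; The "parser" is just quotation; walking the AST is a loop.
    (defvar *program*
      '((H 0)
        (CNOT 0 1)
        (MEASURE 0 (ADDRESS 0))))

    (dolist (instr *program*)
      (format t "instruction: ~A, args: ~A~%"
              (first instr) (rest instr)))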
In addition to the need to construct languages, we also needed to simulate/interpret the language. Simulating the evolution of a quantum state is very expensive (an n-qubit state has 2^n complex amplitudes, which is why we are building quantum computers!), and is usually relegated to some closer-to-the-metal language like C.
Unfortunately, high-speed numerical code in C is very difficult to experiment with, extend, and maintain.
Fortunately, over the past 30 years, Lisp compilers have become excellent at producing native machine code. With a compiler like SBCL [1], you can produce code which is very nearly optimal. For example, we have a function to compute the probability from a wavefunction amplitude. Here it is:
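    ;; NOTE: reconstructed sketch -- the original listing wasn't
    ;; preserved in this copy. The probability of an amplitude is
    ;; its squared modulus.
    (declaim (inline probability))
    (defun probability (amplitude)
      "Compute the probability associated with a wavefunction amplitude."
      (declare (type (complex double-float) amplitude)
               (optimize (speed 3) (safety 0)))
      (expt (abs amplitude) 2))

With the type declaration in place, SBCL should be able to open-code the complex arithmetic rather than dispatching through generic functions.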
Linear Algebra is very important for Quantum Computing. You have a master's in engineering, so you shouldn't have any problems with the math, but I suggest you review the basics of Linear Algebra if you haven't applied that knowledge in a while.
John Preskill's lecture notes are invaluable. They start from the basics of Quantum Computing and Quantum Theory and go all the way to advanced concepts such as Topological Quantum Computation: http://www.theory.caltech.edu/people/preskill/ph229/
These will be enough to get you started, but it is good to apply your knowledge by implementing the quantum algorithms you have learned. There is a huge list of simulators you can use: https://www.quantiki.org/wiki/list-qc-simulators
I know you want references to open courses, but reading papers shouldn't hurt either. I don't know how much experience you have with Quantum Mechanics, but this paper: https://arxiv.org/abs/0708.0261 explains Quantum Computing very well by referring to concepts in Classical Computing. You should read it first if you are not familiar with Quantum Mechanics.
Let me know if you have any questions and good luck!
I wasn't able to find anything about Google's Numerics flow. Would you provide a reference?
> Anybody else active in this field?
Yes. Rigetti Computing has also been working in this area. https://medium.com/rigetti/rigetti-partners-with-cdl-to-driv...