We already support OpenAI and Anthropic endpoints, and can add models/endpoints quickly based on your requirements. We plan to expand to Llama and other self-hosted models soon. Do you have a specific model you want supported?
Code written in Lisp uses the AST differently: the source essentially is the AST, which makes the process of generating machine code much easier. That in turn enables macros, a kind of metaprogramming not available in non-Lisp languages. On the other hand, I tried this avenue, and since most modern computing is not Lisp-based, that severely limits its potential. I'm hoping for a Rust-based Clojure or a variant of it. Clojure has the problem that it's built on the Java ecosystem, which has severe downsides. A Lisp based on Python doesn't make much sense to me personally; Python isn't a good language to write other languages in. I think Zig and Rust would be the interesting choices. One attempt: https://github.com/clojure-rs/ClojureRS
Wouldn't it make more sense, then, to compile existing languages to a Lisp? From what you said, it sounds like the goal is that Lisp makes generation of machine code faster/easier? Or is it that forcing programmers to encode their intent in a Lisp removes guessing and optimization overhead for the compiler?
You can invent another syntax with Lisp/Scheme macros if you want. When compiled or interpreted it will be macro-expanded, and then likely transpiled to an AST and then compiled into byte- or machine code.
Take a look at Racket languages for some examples.
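As a sketch of that macro-expansion step, here is Python standing in for a real macro system, with S-expressions modeled as nested lists. The `when` macro and the `expand` function are invented for illustration, not taken from any particular Lisp:

```python
# Lisp-style macro expansion sketched in Python, with S-expressions
# modeled as nested lists. The hypothetical macro:
#   (when cond body...)  expands to  (if cond (begin body...) nil)

def expand(expr):
    """Recursively macro-expand an S-expression."""
    if not isinstance(expr, list) or not expr:
        return expr  # atoms expand to themselves
    if expr[0] == "when":
        cond, *body = expr[1:]
        return ["if", expand(cond),
                ["begin"] + [expand(e) for e in body],
                "nil"]
    return [expand(e) for e in expr]

program = ["when", ["ready?", "x"], ["log", "x"], ["send", "x"]]
print(expand(program))
# ['if', ['ready?', 'x'], ['begin', ['log', 'x'], ['send', 'x']], 'nil']
```

After expansion, the compiler or interpreter only ever sees core forms like `if` and `begin`, which is why new surface syntax comes essentially for free.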
Lisp syntax, with the parens and so on, means editing is inherently structural, which makes the code relatively easy to reason about and restructure. In Python, whitespace does double duty, both as a separator between tokens and as a block delimiter (similar to e.g. {} or () in other languages), which makes structural editing relatively hard.
As I understand, that's pretty much exactly how WASM works. It can output either a `.wasm` binary or the same code in a `.wat` text format that looks like this:
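(The original snippet appears to be missing here; a representative `.wat` module, exporting a function that adds two integers, looks like this:)

```wat
(module
  (func $add (param $a i32) (param $b i32) (result i32)
    local.get $a
    local.get $b
    i32.add)
  (export "add" (func $add)))
```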
Yes, you can think of Lisp almost as an intermediate language. Lisp probably lends itself well to machine code generation, but I haven't done enough assembly to really know. It's not designed for that; it's just a side effect of the language primitives being very, very small. You can write a basic Lisp interpreter in a few hours yourself: https://norvig.com/lispy.html. Creating a decent compiled language takes a lot longer than that. Lisp only requires five or so primitives, and it doesn't have a grammar.
It is a bit awkward for humans, but machines can process it better because it has less structure. For example, I think Lisp could potentially be a great choice for interop with large language models, because the code is potentially much shorter. Good Clojure code can be 5-10x shorter than Python code, and with LLMs the size of the code matters a lot.
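To make the "few hours" claim concrete, here is a compressed sketch in the spirit of Norvig's lis.py linked above: a tokenizer, a reader, and an evaluator with only arithmetic primitives and `if`. It is illustrative, not the real lis.py:

```python
# A minimal Lisp interpreter sketch: tokenize, read into nested lists,
# evaluate with a handful of primitives (after https://norvig.com/lispy.html).
import operator as op

def tokenize(src):
    return src.replace("(", " ( ").replace(")", " ) ").split()

def read(tokens):
    """Read one S-expression from a token list."""
    tok = tokens.pop(0)
    if tok == "(":
        expr = []
        while tokens[0] != ")":
            expr.append(read(tokens))
        tokens.pop(0)  # drop the closing ")"
        return expr
    try:
        return float(tok)   # number
    except ValueError:
        return tok          # symbol

ENV = {"+": op.add, "-": op.sub, "*": op.mul, "/": op.truediv}

def evaluate(x, env=ENV):
    if isinstance(x, str):        # symbol lookup
        return env[x]
    if not isinstance(x, list):   # number is self-evaluating
        return x
    if x[0] == "if":              # (if test conseq alt)
        _, test, conseq, alt = x
        return evaluate(conseq if evaluate(test, env) else alt, env)
    f = evaluate(x[0], env)       # function application
    return f(*[evaluate(arg, env) for arg in x[1:]])

print(evaluate(read(tokenize("(* (+ 1 2) 4)"))))  # 12.0
```

A real Lisp adds `quote`, `define`, and `lambda`, but the whole core still fits on a page, which is the point.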
With multiplication the question makes sense due to the commutative property, but division doesn't have that, so the question becomes ambiguous... And now I see that the model even points this out.
There is no ambiguity; the problem is that three numbers, divided together, with the order unspecified, must be equal to their sum.
You can find solutions for a / b / c, or b / c / a, or c / a / b, any combination of them, and the solution will be correct according to the problem description.
Besides, what does it even have to do with the model concluding, with confidence:
"The fundamental issue is that division tends to make numbers smaller. It's mathematically impossible to find three numbers where these operations result in the same value."?
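For what it's worth, that "mathematically impossible" claim is easy to refute with a little algebra (this check is mine, not from the thread): fixing b and c with bc ≠ 1 and solving a/b/c = a + b + c for a gives a = (b + c) / (1/(bc) − 1).

```python
# Refuting the "mathematically impossible" claim by construction:
# solve a/b/c = a + b + c for a, given b and c (with b*c != 1):
#   a = (b + c) / (1/(b*c) - 1)
b, c = 2, 3
a = (b + c) / (1 / (b * c) - 1)   # -6.0
assert abs(a / b / c - (a + b + c)) < 1e-9
print(a, a / b / c, a + b + c)    # -6.0 -1.0 -1.0
```

So −6, 2, 3 is a valid triple: divided left to right they give −1, and their sum is also −1.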
> You can find solutions for a / b / c, or b / c / a, or c / a / b
This is a clear case of ambiguity.
Even the classic question is ambiguous: "Which 3 numbers give the same result when added or multiplied together?"
Let's say the three numbers are x, y and z, and the result is r. A valid interpretation would be to multiply/add every pair of numbers:
x * y = r
y * z = r
x * z = r
x + y = r
y + z = r
x + z = r
However, I do not think this ambiguity is the reason why OpenAI o1 fails here. It simply started with an intractable approach to the problem (plugging in random numbers) and did not attempt a more promising approach, because it was not trained to do so.
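For reference, under the standard reading of the classic question (a + b + c = a · b · c over positive integers), 1, 2, 3 is the well-known answer, and a quick search over small integers (my own check, not from the thread) confirms it is the only one:

```python
# Brute-force the standard reading of the classic question:
# positive integers a <= b <= c with a + b + c == a * b * c.
solutions = set()
for a in range(1, 20):
    for b in range(a, 20):
        for c in range(b, 20):
            if a + b + c == a * b * c:
                solutions.add((a, b, c))
print(solutions)  # {(1, 2, 3)}
```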
So there is no way to answer the original question incorrectly by picking any specific order.
Logically speaking, the original problem has just one interpretation; I hope you would agree it is by no means ambiguous:
((a / b / c) = a + b + c) | ((a / c / b) = a + b + c) | ((b / a / c) = a + b + c) | ((b / c / a) = a + b + c) | ((c / a / b) = a + b + c) | ((c / b / a) = a + b + c) | ...(other 6 combinations) = true
This interpretation would indeed find all possible solutions to the problem, accounting for any potential ambiguity in the division order.
Does the commutative property change anything here? A, B and C are not constrained in any way to each other, so they can be in whatever order you want anyways...
Moreover, addition is commutative, so it doesn't matter which order the division is in, since a/b/c = a+b+c = c+a+b = ...
So I'd say that the model pointing this out is actually a mistake and it managed to trick you. Classic LLM stuff: spit out wrong stuff in a convincing manner.
Order doesn't matter with multiplication (e.g. (20 * 5) * 2 == (5 * 2) * 20), but it obviously does with division ((20/5)/2 != (2/5)/20), so the question doesn't make sense. It's you making grade-school-level mistakes here.
The question makes perfect sense. Here it is written in logical language. I'm curious at which point does it stop making sense for you?
numbers divided together
↓----------↓
((a / b / c) = a + b + c) ← numbers added together
| ((a / c / b) = a + b + c)
| ((b / a / c) = a + b + c)
| ((b / c / a) = a + b + c)
| ((c / a / b) = a + b + c)
| ((c / b / a) = a + b + c)
| ((a / (b / c)) = a + b + c)
| ((a / (c / b)) = a + b + c)
| ((b / (a / c)) = a + b + c)
| ((b / (c / a)) = a + b + c)
| ((c / (a / b)) = a + b + c)
| ((c / (b / a)) = a + b + c) = true
What? It's a single logical equation, not a system of equations, you gpt-head. There are 12 expressions with OR signs between them, and the whole thing must be equal to true, meaning any one of them must be true. In your prompt to the LLM you messed up the syntax by starting with an OR sign for some reason.
By the way, my LLM tells me that it's a deep and thoughtful dive into the problem, one which accounts for the potential ambiguity in order to find all possible solutions, so try harder.
The API is not down. Instead of waiting, I started writing some simple Python code to interact with ChatGPT. I didn't see another repo for this so far; maybe someone else knows a good one.
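A minimal sketch of such code, using only the standard library. The endpoint and payload shape follow OpenAI's public chat-completions API; the model name and the `OPENAI_API_KEY` environment variable are assumptions you may want to change:

```python
# Minimal ChatGPT API client sketch using only the Python standard library.
import json
import os
import urllib.request

API_URL = "https://api.openai.com/v1/chat/completions"

def build_request(prompt, model="gpt-3.5-turbo"):
    """Build the HTTP request for a single chat-completion call."""
    payload = {"model": model,
               "messages": [{"role": "user", "content": prompt}]}
    headers = {
        "Content-Type": "application/json",
        "Authorization": f"Bearer {os.environ.get('OPENAI_API_KEY', '')}",
    }
    return urllib.request.Request(API_URL, data=json.dumps(payload).encode(),
                                  headers=headers, method="POST")

def ask(prompt):
    """Send the prompt and return the assistant's reply text."""
    with urllib.request.urlopen(build_request(prompt), timeout=60) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]
```

Usage is just `print(ask("Hello"))` with a valid API key exported in the environment.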
"The National Security Agency (NSA) has recommended only using 'memory safe' languages, like C#, Go, Java, Ruby, Rust, and Swift, in order to avoid exploitable memory-based vulnerabilities."
Yes, seriously. The West is getting hacked and owned on a daily basis. The NSA recommendation shows that governments are starting to identify where the problem is.