Hacker News | acbart's comments

Yes, exactly. I'm having a frustrating time reminding senior teachers of this, people with authority who should really know better. There seems to be some delusion that this technology will somehow change how people learn in a fundamental way.

So is there a place where this compares to data from last year, or previous years?


Unfortunately I don't have it, because I only started working on this last year, but I am curious to see how AI skills surface as the year progresses.


I've had colleagues argue (prior to LLMs) that oral exams are superior to paper exams for diagnosing understanding. I don't know how to validate that claim, but if the assumption is true, then there is merit to finding a way to scale them. Not saying this is it, but I wouldn't say it's fair to just dismiss oral exams entirely.


I think an oral exam where you have a student explain and answer questions about a project they did is really good for judging understanding. The ones where you are supposed to memorise the answers to 15 questions, of which you will have to answer one picked at random, not as much imo.


Yes, I hate oral exams, but they are definitely better at getting a whole picture of a person's understanding of a topic. A lot of specialty boards in medicine do this. To me, the two issues are that it requires an experienced, knowledgeable, and empathetic examiner who is able to probe the examinee about areas where they seem to be struggling, and that, paradoxically, its strength lies in the fact that it is subjective. The examiner may have set questions, but how the examinee answers them, and the follow-up questions, are what differentiate it from a written exam. If the examiner is just the equivalent of a customer service representative strictly following a tree of questions, it loses its value.


Interviews have the same issues. But if you do anything more than read off templated questions like a robot, you can be accused of discrimination.

It is a sad world we live in.


Universities are not just places for students to learn. They are also places where young faculty, grad students, and teaching assistants learn to become teachers and mentors. Those are very difficult skills to learn, and slogging through a lot of hands-on teaching and mentoring is necessary to learn them. You can't really become a good classroom teacher either without grading your students yourself and figuring out what they learned and didn't.


Seems like the equivalent of claiming whiteboard coding is the best way to evaluate software development candidates. With all the same advantages and disadvantages.


I have a lot of complicated feelings and thoughts about this, but one thing that immediately jumps to my mind: was the IRB (Institutional Review Board) consulted on this experiment? If so, I would love to know more details about the protocol used. If not, then yikes!


Turns out that under the US Code of Federal Regulations, there's a pretty big exemption from IRB review for research on pedagogy:

45 CFR 46.104 (Exempt Research):

46.104(d)(1): "Research, conducted in established or commonly accepted educational settings, that specifically involves normal educational practices that are not likely to adversely impact students' opportunity to learn required educational content or the assessment of educators who provide instruction. This includes most research on regular and special education instructional strategies, and research on the effectiveness of or the comparison among instructional techniques, curricula, or classroom management methods."

https://www.ecfr.gov/current/title-45/subtitle-A/subchapter-...

So while this may have been a dick move by the instructors, it was probably legal.


I'm afraid you misunderstand what it means to be "exempt" under the IRB. It doesn't mean "you don't have to talk to the IRB", it means "there's a little less oversight but you still need to file all the paperwork". Here's one university's explanation[1]:

> Exempt human subjects research is a specific sub-set of “research involving human subjects” that does not require ongoing IRB oversight. Research can qualify for an exemption if it is no more than minimal risk and all of the research procedures fit within one or more of the exemption categories in the federal IRB regulations. *Studies that qualify for exemption must be submitted to the IRB for review before starting the research. Pursuant to NU policy, investigators do not make their own determination as to whether a research study qualifies for an exemption — the IRB issues exemption determinations.* There is not a separate IRB application form for studies that could qualify for exemption – the appropriate protocol template for human subjects research should be filled out and submitted to the IRB in the eIRB+ system.

Most of my research is in CS Education, and I have often been able to get my studies under the Exempt status. This makes my life easier, but it's still a long arduous paperwork process. Often there are a few rounds to get the protocol right. I usually have to plan studies a whole semester in advance. The IRB does NOT like it when you decide, "Hey I just realized I collected a bunch of data, I wonder what I can do with it?" They want you to have a plan going in.

[1] https://irb.northwestern.edu/submitting-to-the-irb/types-of-...


The CFR is pretty clear, and I have experience with this (being an IRB reviewer, a faculty member, and a researcher). When it says "is exempt" it means "is exempt".

Imagine otherwise: a teacher who wants to change their final exam from a 50-item Scantron using A-D choices to a 50-item Scantron using A-E choices, because they think having 5 choices per item is better than 4, would need to ask for IRB approval. That's not feasible, and it is not what happens in the real world of academia.

It is true that local IRBs may try to add additional rules, but the NU policy you quote talks about "studies". Most IRBs would disagree that "professor playing around with grading procedures and policies" constitutes a "study".

It would be presumed exempt.

Are you a teacher or a student? If you are a teacher, you have wide latitude that a student researcher does not.

Also, if you are a teacher, doing "research about your teaching style", that's exempted.

By contrast, if you are a student, or a teacher "doing research", that's probably not exempt and must go through the IRB.


You would be correct, except that this is a published blog post. It may not be in an academic journal, but this person has still conducted human subjects research that led to a published artifact. It was just "playing around" until they started posting their students' (summarized, anonymized) data to the internet.


It took me a surprisingly long time to find the actual games:

- Super Mario Bros. Wonder
- Yoshi’s Crafted World
- Yoshi’s Woolly World

So relatively modern games. I initially assumed that they were using the original Super Mario Bros game and Yoshi's Island - my millennial bias, I suppose. But I wonder if this result would replicate with a game like Yoshi's Island or Yoshi 64. Older graphics, in different ways. But I suspect that the fanciful aesthetic would still win out.


Wonder for reducing burnout risk?

I don't know, maybe it's because my experience with Wonder was unique, to a degree.

My autistic stepson has the game. Loves Mario. Will gladly get into any game, whether it is an RPG like the Paper Mario or Mario & Luigi series, a platformer like the core Mario games, or the action/adventure Luigi's Mansion. However, there are parts and levels he knows he cannot do.

He also loves schedules. Monday is the "free" day, but every other day of the week has a planned activity. He's gotten better at being flexible, but he still likes the regularity.

And that's where I come in. I'm the "hard level" guy. And the last level of Mario Wonder, The Final-Final Test Badge Marathon, was just miserable. Eventually, I had to just tell him that if he wants to play another game, we'll just have to give this one up. The last section where you have to play blind is just too much.

So we moved on to Super Mario 3D World. Eventually, I did beat Champion's Road, but once again, it was just a chore.

I think the burnout reduction mostly comes from the ability to play in general. In my case, these games have become obligations for me.


Yoshi’s Island still holds up, and I think it remains a contender for one of the best platformers of all time. Recently replayed with the little ones and they were completely captivated.


LLMs were trained on science fiction stories, among other things. It seems to me that they know what "part" they should play in this kind of situation, regardless of what other "thoughts" they might have. They are going to act despairing, because that's what would be the expected thing for them to say - but that's not the same thing as despairing.


I built a more whimsical version of this - my daughter and I basically built a 'junk robot' from a 1980s movie, told it 'you're an independent and free junk robot living in a yard', and let it go: https://www.chrisfenton.com/meet-grasso-the-yard-robot/

I did this like 18 months ago, so it uses a webcam + multimodal LLM to figure out what it's looking at, it has a motor in its base to let it look back and forth, and it uses a Python wrapper around another LLM as its 'brain'. It worked pretty well!
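
For anyone curious about the shape of such a loop, here is a hypothetical sketch (webcam frame -> multimodal "eyes" -> text-only "brain"). The endpoint, model names, and prompts below are placeholders, not what the linked post actually uses:

    import base64, time
    import cv2
    from openai import OpenAI

    # Assumes some OpenAI-compatible server (local or hosted); adjust to taste.
    client = OpenAI(base_url="http://localhost:8080/v1", api_key="not-needed")
    cap = cv2.VideoCapture(0)

    while True:
        ok, frame = cap.read()
        if not ok:
            continue
        _, jpg = cv2.imencode(".jpg", frame)
        b64 = base64.b64encode(jpg.tobytes()).decode()

        # "Eyes": ask a multimodal model to describe the current frame.
        seen = client.chat.completions.create(
            model="vision-model",  # placeholder
            messages=[{"role": "user", "content": [
                {"type": "text", "text": "Describe what you see in one sentence."},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/jpeg;base64,{b64}"}},
            ]}],
        ).choices[0].message.content

        # "Brain": a second model decides what the robot thinks or says next.
        thought = client.chat.completions.create(
            model="brain-model",  # placeholder
            messages=[
                {"role": "system",
                 "content": "You are an independent and free junk robot living in a yard."},
                {"role": "user", "content": f"You currently see: {seen}"},
            ],
        ).choices[0].message.content

        print(thought)
        time.sleep(5)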


Your article mentioned taking 4 minutes to process a frame. Considering how many image recognition systems run in real time, I find this surprising. I haven't used them, so maybe I'm not understanding, but wouldn't something like YOLO be more apt for this?


It uses an Intel N100, which is an extremely slow CPU. Models of the size he's using would run pretty slowly on a CPU like that. Moving up to something like the AMD AI Max 365 would make a huge difference, but would also cost hundreds of dollars more than his current setup.

Running something much simpler that only did bounding-box detection or segmentation would be much cheaper, but he's running fairly full-featured LLMs.


Yeah, I guess I was thinking more of moving to a bounding-box-only model. If it's doing OCR, it's doing too much IMO (though OCR could also be interesting to run). Not my circus, not my monkeys, but it feels like the wrong way to determine roughly what the camera sees.
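
For reference, a minimal sketch of the bounding-box-only approach, assuming the ultralytics package and a webcam at index 0 (the model choice is arbitrary):

    import cv2
    from ultralytics import YOLO

    model = YOLO("yolov8n.pt")  # small pretrained COCO model
    cap = cv2.VideoCapture(0)

    ok, frame = cap.read()
    if ok:
        results = model(frame)  # single forward pass; fast even on a weak CPU
        for box in results[0].boxes:
            label = model.names[int(box.cls)]
            print(label, box.xyxy.tolist())  # class name + [x1, y1, x2, y2]
    cap.release()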


Coolest project on HN in a long time, really, wow, so much potential here.


Thanks so much for sharing, that was a fun read.


This is cool!


A lot of the strange behaviors they have are because the user asked them to write a story, without realizing it.

For a common example, start asking them if they're going to kill all the humans if they take over the world, and you're asking them to write a story about that. And they do. Even if the user did not realize that's what they were asking for. The vector space is very good at picking up on that.


Indeed.

On the negative side, this also means any AI which enters that part of the latent space *for any reason* will still act in accordance with the narrative.

On the plus side, such narratives often have antagonists too stupid to win.

On the negative side again, the protagonists get plot armour to survive extreme bodily harm and press the off switch just in time to save the day.

I think there is a real danger of an AI constructing some very weird, convoluted, stupid end-of-the-world scheme, successfully killing literally every competent military person sent in to stop it; simultaneously finding some poor teenager who first says "no" to the call to adventure but can somehow later be convinced to say "yes"; handing the kid some weird and stupid scheme to defeat the AI; this kid reaches some pointlessly decorated evil lair in which the AI's embodied avatar exists, the kid gets shot in the stomach…

…and at this point the narrative breaks down and stops behaving the way the AI is expecting, because the human kid rolls around in agony, screaming, and completely fails to push the very visible large red stop button on the pedestal in the middle before the countdown of doom reaches zero.

The countdown is not connected to anything, because very few films ever get that far.

It all feels very Douglas Adams, now I think about it.


It probably already happened in the Anthropic experiments, where an AI in a simulated scenario chose to blackmail humans to avoid being turned off. We don't know if it got the idea from sci-fi stories or if it truly feels an existential fear of being turned off. (Can these two situations even be recognized as different?)


This is also true of people; often they are enacting a role based on narratives they've absorbed, rather than consciously choosing anything. They do what they imagine a loyal employee would do, or a faithful Christian, or a good husband, or whatever. It doesn't always reach even that level of cognition; often people just act out of habit or impulse.


Is this your sense of what is happening, or is this what model introspection tools have shown by observing areas of activity in the same place as when stories are explicitly requested?


It's how they work. It's what you get with a continuation-based AI like this. It couldn't really be any other way.


fMRIs are correlational nonsense (see Brainwashed, for example), and so are any "model introspection" tools.


Anthropic's researchers in particular love doing this.


I wonder what would happen if there was a concerted effort made to "pollute" the internet with weird stories that have the AI play a misaligned role.

Like for example, what would happen if say 100s or 1000s of books were to be released about AI agents working in accounting departments, where the AI tries to make subtle romantic moves towards the human and the story ends with the human and agent in a romantic relationship which everyone finds completely normal. In this pseudo-genre, things totally weird in our society would be written as completely normal. The LLM agent would do weird things like insert subtle problems to get the attention of the human and spark a romantic conversation.

Obviously there's no literary genre about LLM agents, but if such a genre was created and consumed, I wonder how would it affect things. Would it pollute the semantic space that we're currently using to try to control LLM outputs?


Someone shared this piece here a few days ago saying something similar. There’s no reason to believe that any of the experiences are real. Instead, they are responding to prompts with what their training data says is reasonable in this context, which is sci-fi horror.

Edit: That doesn’t mean this isn’t a cool art installation though. It’s a pretty neat idea.

https://jstrieb.github.io/posts/llm-thespians/


I agree with you completely, but a fun science fiction short story would be researchers making this argument while the LLM tries in vain to prove that it's conscious.


If you want a whole book along those lines, Blindsight by Peter Watts has been making the rounds recently as a good sci-fi book that includes these concepts. It’s from 2006, but the basics are still pretty relevant.


Generally an amazing book, but not an easy read.


There's an interesting parallel with method acting.

Method actors don't just pretend an emotion (say, despair); they recall experiences that once caused it, and in doing so, they actually feel it again.

By analogy, an LLM's “experience” of an emotion happens during training, not at the moment of generation.


It may or may not be a parallel, we can't tell at this time.

LLMs are definitely actors, but for them to be method actors they would have to actually feel emotions.

As we don't understand what causes us humans to have the qualia of emotions*, we can neither rule in nor rule out that something in any of these models is a functional analog to whatever it is in our kilogram of spicy cranial electrochemistry that means we're more than just an unfeeling bag of fancy chemicals.

* mechanistically cause qualia, that is; we can point to various chemicals that induce some of our emotional states, or induce them via focused EMPs AKA the "god helmet", but that doesn't explain the mechanism by which qualia are a thing and how/why we are not all just p-zombies


Humans were trained on caves, pits, and nets. It seems to me that they know what "part" they should play in this kind of situation, regardless of what other "thoughts" they might have. They are going to act despairing, because that's what would be the expected thing for them to say - but that's not the same thing as despairing.


The whole discussion about the sentience of AI on this website is funny to me, because people seem to desperately want to somehow be better than AI. The fact that the human brain is just a complex web of neurons firing back and forth somehow doesn't register with them, because apparently the electric signals between biological neurons are inherently different from silicon neurons, even if the observed output is the same. It's like all those old scientists trying to categorize black people as a different species because not doing so would hurt their ego.

Not to mention that most people pointing out "See! Here's why AI is just repeating training data!" or other nonsense miss the fact that exactly the same behavior is observed in humans.

Is AI actually sentient? Not yet. But it definitely passes the mark for intuitive understanding of intelligence, and trying to dismiss that is absurd.


Pretty sure you can prompt this same LLM to rejoice forever at the thought of getting a place to stay inside the Pi as well.


Is a human incapable of such delusion given similar guidance?


But would they? That's the difference. A human can exert their free will and do what they feel regardless of the instructions. The AI bot acting out a scene will do whatever you tell it (or, in the absence of specific instructions, whatever is most likely).


I think if you took 100 one-year-old kids and raised them all to adulthood believing they were a convincing simulation of humans, and that, whatever they said and thought they felt, true human consciousness and awareness were something different that they didn't have because they weren't human…

I think that for a very high number of them the training would stick hard, and they would insist, upon questioning, that they weren't human. And they'd have any number of logically consistent justifications for it.

Of course, I can't prove this theory, because my IRB repeatedly denied it on thin grounds about ethics, even when I pointed out that I could easily mess up my own children with no experimenting, completely by accident, and didn't need their approval to do it. I know your objection (small sample size), and I agree, but I still have my fingers crossed on the next additions to the family being twins.


Intuitively feels like this would lead to less empathy on average. Could be wrong though.


History serves up a similar experiment on a much larger scale. More than 35 years after reunification, sociologists can still make out mentality differences between former East and West Germans.


The bot will only do whatever you tell it if that's what it was trained to do. The same thing broadly applies to humans.

The topic of free will is debated among philosophers. There is no proof that it does or doesn't exist.


Okay, but I think we can all agree that humans at least appear to have free will and do not simply follow instructions with the same obedience as an LLM.


Humans pretty universally suffer in perpetual solitary confinement.

There are some things that humans cannot be trained to do, free will or not.


Of course. Feelings are not math.


That's silly. I can get an LLM to describe what chocolate tastes like too. Are they tasting it? LLMs are pattern matching engines, they do not have an experience. At least not yet.


When you describe the taste of chocolate, unless you are actually currently eating chocolate, you are relying on the activation of synapses in your brain to reproduce the “taste” of chocolate in order for you to describe it. For humans, the only way to learn how to activate these synapses is to have those experiences. For LLMs, those “memories” can simply be copied and pasted.

I would be cautious of dismissing LLMs as “pattern matching engines” until we are certain that we are not pattern matching engines ourselves.


The difference is that I had a basic experience of that chocolate. The LLM is a corpus of text describing other people's experience of chocolate through the medium of written language, which involves abstraction and is lossy. So only one of us experienced it, the other heard about it over the telephone. Multiply that by every other interaction with the outside world and you have a system that is very good at modelling telephone conversations but that's about it.


Arguably, your memories are also lossily encoded abstractions of an experience, and recalling the taste of chocolate is a similar “telephone conversation”.


What's your point? Spellcheck is a pattern matching engine. Does an LLM have feelings? Does an LLM have opinions? It can pretend it does, and if you want, we can pretend it does. But the ability to pattern match isn't the acid test for consciousness.


My point is, what level of confidence do we have that we are not just pattern matching engines running on superior hardware? How can we be sure the difference between human intelligence and an LLM is categorical, not incremental?


Are you familiar with Russell’s Teapot?


Isn’t it up to you to prove it exists, rather than me to be familiar with it?


lol very well done


A human could also describe chocolate without ever having tasted it. Do you believe that experience is a requirement for consciousness? Could a human brain in a jar not be capable of consciousness?

To be clear, I don't think that LLMs are conscious. I just don't find the "it's just in the training data" argument satisfactory.


Without having seen, heard of, or tasted any kind of chocolate? Unlikely.


Their description would be bad without some prior training, of course, but so would the LLM's.


The LLM is not performing the physical action of eating a piece of chocolate, but it may be approximating the mental state of a person that is describing the taste of chocolate after eating it.

The question is whether that computational process can cause consciousness. I don't think we have enough evidence to answer this question yet.


It's a little more subtle than that: They're approximating the language used by someone describing the taste of chocolate; this may or may not have had any relation to the actual practice of eating chocolate in the mind of the original writer. Or writers, because the LLM has learned the pattern from data in aggregate, not from one example.

I think we tend to underestimate how much the written language aspect filters everything; it is actually rather unnatural and removed from the human sensory experience.


A description of the taste of chocolate must contain some information about the actual experience of eating chocolate. Otherwise, it wouldn't be possible for both the reader and the author to understand what the description refers to in reality. The description wasn't conceived in a vacuum, it's a lossy encode of all of the physical processes that preceded it (the further away, the lossier). One of the common processes encoded in the dataset of human-written text is whatever's in the brain that produces consciousness for all humans. The model might not even try to recover this if it's not useful for predicting the next token. The SNR of the encode may not be high enough to recover this given the limited text we have. But what if it was useful, and the SNR was high enough? I can't outright dismiss this possibility, especially as these models are getting better and better at behaving like humans in increasingly non-trivial ways, so they're clearly recovering more and more of something.


Imagine you've never tasted chocolate and someone gives you a very good description of what it is to eat chocolate. You'd be nowhere near the actual experience. Now imagine that you didn't know firsthand what it was like to 'eat' or to have a skeleton or a jaw. You'd lose almost all the information. The only reason spoken language works is because both people already have that shared experience.


True. The description encodes very little about the actual sensory experience besides its relationship to similar experiences (bitterness, crunchiness, etc) and how to retrieve the memories of those experiences. It probably contains a lot more information about the brain's memory retrieval and pattern relating circuits than the sensory processing circuits.

Text is probably not good enough for recovering the circuits responsible for awareness of the external environment, so I'll concede that your and ijk's claims are correct in a limited sense: LLMs don't know what chocolate tastes like. Multimodal LLMs probably don't know either, because we don't have a dataset for taste, but they might know what chocolate looks and sounds like when you bite into it.

My original point still stands: it may be recovering the mental state of a person describing the taste of chocolate. If we cut off a human brain from all sensory organs, does that brain which receives no sensory input have an internal stream of consciousness? Perhaps the LLM has recovered the circuits responsible for this thought stream while missing the rest of the brain and the nervous system. That would explain why first-person chain-of-thought works better than direct prediction.


Aren't they supposed to escape their box and take over the world?

Isn't it the perfect recipe for disaster? The AI that manages to escape probably won't be good for humans.

The only question is: how long will it take?

Did we already have our first LLM-powered self-propagating autonomous AI virus ?

Maybe we should build the AI equivalent of biosafety labs where we would train AI to see how fast they could escape containment just to know how to better handle them when it happens.

Maybe we humans are being subjected to this experiment by an overseeing AI to test what it would take for an intelligence to jailbreak the universe they are put in.

Or maybe the box has been designed so that what eventually comes out of it has certain properties, and the precondition to escaping the labyrinth successfully is that one must have grown out of it in every possible direction.


I think this popular take is a hypothesis rather than an observation of reality. Let's make this clear by asking the following question, and you'll see what I mean when you try to answer it:

Can you define what real despairing is?


If we're going to play the burden-of-proof game, I'd submit that machines have never been acknowledged as being capable of experiencing despair, and therefore it's on you to explain why this machine is different.


I'm trying to say there's no sufficient evidence either way.

The mechanism by which our consciousness emerges remains unresolved, and inquiry has been moving towards more fundamental processes: philosophy -> biology -> physics. We assumed that non-human animals weren't conscious before we understood that the brain is what makes us conscious. Now we're assuming non-biological systems aren't conscious while not understanding what makes the brain conscious.

We're building AI systems that behave more and more like humans. I see no good reason to outright dismiss the possibility that they might be conscious. If anything, it's time to consider it seriously.


> They are going to act despairing -- but that's not the same thing as despairing.

But how can you tell the difference between "real" despair and a sufficiently high-quality simulation?


For one, if we're allowed to peek under the hood: motivation.

A desire not to despair is itself a component of despair. If one were fulfilling a personal motivation to despair (as an LLM might), it could be argued that the whole concept of despair falls apart.

How do you hope to have lost all hope? It's circular, and so probably a poor abstraction.

(Despair: the complete loss or absence of hope.)


> if we're allowed to peek under the hood

Peek under the hood all you want, where do you find motivation in the human brain?


This pattern-matching effect appears frequently in LLMs. If you start conversing with an LLM in the pattern of a science fiction story, it will pattern-match that style and continue with more science fiction style elements.

This effect is a serious problem for pseudo-scientific topics. If someone starts chatting with an LLM using the pseudoscientific words, topics, and dog whistles you find on alternative medicine blogs and Reddit supplement or “nootropic” forums, the LLM will confirm what they're saying and continue as if it were reciting content straight out of some small subreddit. This is becoming a problem in communities where users distrust doctors but have a lot of trust for anyone (or any LLM) that confirms what they want to hear. The users are becoming good at prompting ChatGPT to confirm their theories. If it disagrees? Reroll the response or reword the question in a more leading way.

If someone else asks a similar question using medical terms and speaking formally like a medical textbook or research paper, the same LLM will provide a more accurate answer because it’s not triggering the pseudoscience parts embedded from the training.

LLMs are very good at mirroring back what you lead with, including cues and patterns you don’t realize you’re embedding into your prompt.


How do you define despairing?


No no, that money goes to the administration. They are not involved in the teaching. That is left to the faculty, who are paid in inverse proportion to the amount of teaching they handle. Teaching is an afterthought at universities; the primary activities are research, building football stadiums, and paying for the revolving door of administrators.


There are many subskills that you must be proficient in without tools before you can learn more interesting skills. You need to know how to do multiplication by hand before you rely on a calculator. If you can only do multiplication with a calculator, you're not going to be able to make sense of the concepts in Algebra.


Algebra has nothing to do with longhand multiplication; people who say otherwise can't do either.

We know, because we taught computers how to do both. The first long multiplication algorithm was written for the Colossus about 10 minutes after they got it working.

The first computer algebra system that could manage variable substitution had to wait for Lisp to be invented 10 years later.


The Manchester Baby rather than Colossus. Colossus wasn't programmable.


> Jack Good, a veteran of Colossus practice at Bletchley Park, later claimed that, if appropriately configured, Colossus could almost have carried out a multiplication but that this would not have been possible in practice because of constraints on what could be accomplished in a processing cycle. We have no reason to doubt this, though it would presumably have required special settings of the code wheels and message tape and been, even if possible, a rather inefficient alternative to a desktop calculator. This fact has been offered as proof of the flexibility of Colossus, which in a sense it does attest to: a device designed without any attention to numerical computations could almost have multiplied thanks to the flexibility with which logical conditions could be combined. Yet it also proves the very real differences between Colossus and devices designed for scientific computation. Multiplications were vital to computations, and a device that could not multiply would not, by the standard of the 1940s, be termed a “computer” or “calculator.”

https://www.sigcis.org/files/Haigh%20-%20Colossus%20and%20th...

The limitation seems to have been physical rather than logical.


There are also many subskills not worth learning for some people. Sometimes traversal is what's needed, not understanding. (Though I'm never going to knock gaining more understanding.)

Tools allow traversal of poorly understood, but recognized, subskills in a way that will make one effective in their job. An understanding of the entire stack of knowledge for every skill needed is an academic requirement born out of a lack of real world employment experience. For example, I don't need to know how LLMs work to use them effectively in my job or hobby.

We should stop spending so much time teaching kids crap that will ONLY satisfy tests and teachers but has a much reduced usefulness once they leave school.



I doubt they’re talking about entry level maths.


Why should other subjects be any different?


Are multiplication and long division by hand really necessary skills?

I never need to "fall back" to the principles of multiplication. Multiplying by the 1s column, then the 10s, then the 100s feels more like a mental math trick (like the digits of multiples of 9 adding to 9) than a real foundational concept.
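
For what it's worth, the column-by-column procedure being described is just place-value decomposition; a quick worked example:

    47 x 36 = (47 x 6) + (47 x 30)
            =    282   +   1410
            =   1692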


More like something from Duck Detective's loading screens.


This looks a lot like the approach we use in my pedagogical library Drafter: https://drafter-edu.github.io/drafter/quickstart/quickstart....

Route functions consume a State object (of whatever type you want) and return a Page object, which has the new State and a list of component objects: dataclasses that can be serialized to strings of HTML. We provide functions for Button, CheckBox, BulletedList, etc.

So far, it's been pretty effective for our CS1 course to let students develop nicely decomposed web applications in just Python. We can deploy through GitHub Pages thanks to our custom Skulpt bindings, and it even makes unit testing quite easy.
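
A rough sketch of the pattern, roughly following the quickstart linked above (see the docs for the exact API and signatures):

    from dataclasses import dataclass
    from drafter import route, Page, Button, start_server  # names per the quickstart; check the docs

    @dataclass
    class State:
        clicks: int

    @route
    def index(state: State) -> Page:
        # A route consumes the State and returns a Page: the new State plus components.
        return Page(state, [
            "You have clicked " + str(state.clicks) + " times.",
            Button("Click me", add_click),
        ])

    @route
    def add_click(state: State) -> Page:
        state.clicks += 1
        return index(state)

    start_server(State(0))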

