One of the maintainers of Jovo here. We spent the last 12 months completely rebuilding the framework and are happy to finally share it with a way better architecture.
Some features:
- Cross-platform: Works on the web, voice platforms (like Alexa and Google Assistant), and chat platforms (like Facebook Messenger, Instagram, and Google Business Messages).
- Fast: A CLI, local development, and browser-based debugging using the Jovo Debugger.
- Component-based: Build robust experiences based on reusable components.
- Multimodal: An output template engine that translates structured content into voice, text, and visual responses.
- Extensible: Build Framework plugins, CLI plugins, and leverage many integrations from the Jovo Marketplace.
- Integrated: Works with many NLU and CMS services.
- Robust: Includes staging and a unit testing suite.
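To make the component and output-template ideas above concrete, here is a minimal sketch of how a reusable component could produce one platform-neutral output template that then gets translated into platform-specific responses. All names here are hypothetical illustrations, not Jovo's actual API:

```typescript
// A platform-neutral output template: one piece of structured content
// that each channel renders in its own way.
interface OutputTemplate {
  message: string;          // spoken and/or written text
  quickReplies?: string[];  // suggestion chips, where the platform supports them
}

// Translation for a voice platform: only the spoken text survives.
function toAlexaResponse(t: OutputTemplate) {
  return { outputSpeech: { type: 'PlainText', text: t.message } };
}

// Translation for a chat platform: text plus tappable quick replies.
function toMessengerResponse(t: OutputTemplate) {
  return {
    text: t.message,
    quick_replies: (t.quickReplies ?? []).map((title) => ({
      content_type: 'text',
      title,
      payload: title.toUpperCase(),
    })),
  };
}

// A "component" bundles the handlers for one self-contained sub-conversation.
class GreetingComponent {
  start(): OutputTemplate {
    return { message: 'Hello! Pizza or pasta?', quickReplies: ['Pizza', 'Pasta'] };
  }
}

const template = new GreetingComponent().start();
console.log(toAlexaResponse(template));
console.log(toMessengerResponse(template));
```

The point of the pattern is that the component authors content once, and the per-platform translators decide what to keep or drop.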
Congrats on launching! I've played with Roam Research and Obsidian before and like the concept of a knowledge graph. Is the Hypernotes graph similar to those? Any differentiating features?
Thanks! The knowledge graph is actually more interactive and offers more functionality. For example, you can link pages together directly on the graph and also create new pages on the graph itself.
Many people who build Jovo apps (mostly for 1-on-1 conversational experiences, not group chats) host them on serverless environments like AWS Lambda and use document databases like DynamoDB. Haven't seen problems with scalability there.
We're currently investigating WebSockets though, which won't work on Lambda. It's going to be interesting to see how that scales.
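The serverless setup above scales because each invocation is stateless: all conversation state lives in an external store, so any instance can serve any request. Here's a rough sketch of that shape (hypothetical names, an in-memory map standing in for a document database, not actual Jovo code):

```typescript
// Stateless, Lambda-style request handling: no state survives between
// invocations, so session data is read from and written to an external store.
type SessionStore = {
  get(id: string): Promise<Record<string, unknown>>;
  put(id: string, data: Record<string, unknown>): Promise<void>;
};

// In-memory stand-in for a document database such as DynamoDB.
const db = new Map<string, Record<string, unknown>>();
const memoryStore: SessionStore = {
  get: async (id) => db.get(id) ?? {},
  put: async (id, data) => { db.set(id, data); },
};

// The handler itself holds nothing between calls.
async function handler(event: { userId: string; text: string }, store: SessionStore) {
  const session = await store.get(event.userId);
  const turns = (typeof session.turns === 'number' ? session.turns : 0) + 1;
  await store.put(event.userId, { ...session, turns });
  return { reply: `Turn ${turns}: you said "${event.text}"` };
}

handler({ userId: 'u1', text: 'hi' }, memoryStore).then(console.log);
```

A long-lived WebSocket breaks this model because the connection itself is state pinned to one process, which is why sockets need a different hosting story than Lambda.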
Thanks for the info. Looking at the project notes helped, but I was still confused about whether (and which) other services this uses and what the privacy implications are. I wondered if it was using Google Translate, a transcription service, or similar third-party services.
Or if this is all self-contained, perhaps adding "host your own, no third party required, and keep it all private" as a selling point.
That's a great suggestion, thank you! The demos use Google Web Speech as the speech recognition service and NLP.js (open source) as the natural language understanding service.
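For readers unfamiliar with the NLU step: its job is to map a free-form utterance to an intent. Here's a deliberately tiny keyword-overlap classifier to illustrate the input/output shape; a real service like NLP.js uses trained models, not this matching, and everything named here is made up for illustration:

```typescript
// Toy intent classifier: sample phrases per intent, scored by word overlap.
const intents: Record<string, string[]> = {
  OrderPizza: ['order a pizza', 'i want pizza', 'pizza please'],
  CheckStatus: ['where is my order', 'order status'],
};

// Pick the intent whose sample phrase shares the most words with the utterance.
function classify(utterance: string): { intent: string; score: number } {
  const words = new Set(utterance.toLowerCase().split(/\s+/));
  let best = { intent: 'None', score: 0 };
  for (const [intent, samples] of Object.entries(intents)) {
    for (const sample of samples) {
      const sampleWords = sample.split(' ');
      const hits = sampleWords.filter((w) => words.has(w)).length;
      const score = hits / sampleWords.length;
      if (score > best.score) best = { intent, score };
    }
  }
  return best;
}

console.log(classify('I want a pizza')); // → intent 'OrderPizza'
```

Speech recognition turns audio into the utterance string; the NLU turns that string into an intent the app can route on.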
Mostly speech to text, probably more on mobile than desktop right now.
People are getting more used to interacting with technology using their voice (e.g. through Google Assistant), which is why we want to help developers offer "deep links" into their web functionality with fast speech input.
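A speech "deep link" in this sense just means jumping straight from a recognized utterance to a route in the web app, skipping menu navigation. A minimal sketch (hypothetical routes and names, not part of any real API):

```typescript
// Map recognized utterances directly to app routes.
const routes: { pattern: RegExp; path: (m: RegExpMatchArray) => string }[] = [
  { pattern: /track order (\d+)/i, path: (m) => `/orders/${m[1]}` },
  { pattern: /open settings/i, path: () => '/settings' },
];

// Return the first matching route, or null if nothing matches.
function deepLink(utterance: string): string | null {
  for (const { pattern, path } of routes) {
    const m = utterance.match(pattern);
    if (m) return path(m);
  }
  return null;
}

console.log(deepLink('Track order 42')); // → '/orders/42'
```

In practice the pattern matching would be handled by the NLU rather than regexes, but the idea is the same: speech in, navigation target out.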
Great question! At the moment, Drift is convenient for us because I can reply directly from Slack. We have an internal demo where we can do this with our own widget too, but it was a bit too early for this release. Soon!
You know what, you're right. I've been getting a bit annoyed by Drift anyway. So until we launch our own automated chat and search widget, I might as well remove the current one. Done.
Learn more here: https://www.jovo.tech/news/jovo-v4
Happy to answer any questions!