I asked Bard, "Are you running Gemini Pro?" And it told me, "Unfortunately, your question is ambiguous. 'Gemini Pro' could refer to..." and listed a bunch of irrelevant stuff.
Is Bard not using Gemini Pro at time of writing? The blog post says, "Starting today, Bard will use a fine-tuned version of Gemini Pro for more advanced reasoning, planning, understanding and more."
(EDIT: it is... gave me a correct answer after some pushing. I just had to clarify that I meant "Bard". Not off to a great start)
It's weird, right? If I ask it:
Q: "Who are you?" A: "I am Bard."
Q: "Is Bard running Gemini Pro?" A: "Yes."
Q: "Are you running Gemini Pro?" A: "Ambiguous Question."
Your line of thinking also presupposes that Bard is self-aware about that type of thing. You could also ask it what programming language it's written in, but that doesn't mean it knows and/or will answer you.
This is a common occurrence I'm seeing lately: people treating these things as oracles and going straight to ChatGPT/Bard instead of thinking or researching for themselves.
I consider it a standard test because no self-respecting PM would allow the product to ship without being able to market itself correctly. There's a reason the seed prompt says, "You are Bard."
I don't lack awareness of the limitations of pretrained models. I'm evaluating its ability to employ chain of reasoning, in combination with its plugins, to get me an obvious answer.
You're arguing against a point that wasn't being made. I expect an accurate answer using the tools it has available to it. I don't care what details are trained in and which parts are Internet-accessible as long as it gets to the right answer with a user-friendly UX.
The issue is that it failed to employ chain-of-reasoning. It knows who "it" is - its initial seed prompt tells it that it is Bard. Therefore, asking it, "Are you running Gemini Pro?" should be ~equivalent to "Is Bard running Gemini Pro?" but it interpreted one of those as having such ambiguity it couldn't answer.
Whether it needed to search the Internet or not for the answer is irrelevant.
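The substitution being described is trivial to state. Here is a hypothetical sketch (this reflects nothing about Bard's actual internals; `SEED_PROMPT`, `self_identity`, and `resolve_self_reference` are all made up for illustration) of the rewrite one would expect before the question ever reaches a search step:

```python
# Hypothetical sketch: resolve a self-referential question using the
# identity stated in the seed prompt, so "Are you X?" and "Is Bard X?"
# become the same underlying query. Not Bard's actual internals.
import re

SEED_PROMPT = "You are Bard."

def self_identity(seed: str) -> str:
    """Pull the assistant's name out of a 'You are X.' seed prompt."""
    match = re.search(r"You are (\w+)\.", seed)
    return match.group(1) if match else "the assistant"

def resolve_self_reference(question: str, seed: str = SEED_PROMPT) -> str:
    """Rewrite a leading 'Are you ...' into 'Is <name> ...'."""
    name = self_identity(seed)
    return re.sub(r"^Are you\b", f"Is {name}", question, flags=re.IGNORECASE)

print(resolve_self_reference("Are you running Gemini Pro?"))
# -> Is Bard running Gemini Pro?
```

Whether a real system does this via a rewrite, attention over the seed prompt, or something else entirely doesn't matter; the point is that both phrasings should land on the same answer.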
Bard shows “PaLM2” in my answers, and it says “I can't create images yet so I'm not able to help you with that” when I ask it to do so, which Gemini ought to be able to do, since its transformer can output images.
I don’t think Bard is using Gemini Pro, perhaps because the rollout will be slow, but it is a bit of a blunder on Google’s part to indicate that it now uses it, since many will take this as representative of Gemini's quality.
https://bard.google.com/updates The Bard updates page says it was updated to Pro today. If it's not on Pro but the updates page has an entry, then IDK what to say.
The latest update I see on that link is from 2023.11.21 on “Expanding Bard’s understanding of YouTube videos.” Is it possible that there are country-specific update pages?
Ok, they improved SVGs through code a lot. I had hilarious conversations, full of apologies, where it drew a square or an ellipse for anything I asked for: a flower, the sun, a cat, etc. It was always super confident it had done a great job, describing the extra features it added, like petals and detailed leaves. Those turned out to be dots in all the wrong places.
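For context, "SVGs through code" means asking the model to emit SVG markup. A minimal sketch of the kind of program a correct answer might resemble (circle petals around a center; this is my own toy example, not anything Bard produced):

```python
# Toy example: build a naive "flower" SVG -- six circle petals spaced
# evenly around a golden center -- to show what "SVG through code" means.
import math

def flower_svg(petals: int = 6, r: float = 20, cx: float = 60, cy: float = 60) -> str:
    parts = ['<svg xmlns="http://www.w3.org/2000/svg" width="120" height="120">']
    for i in range(petals):
        angle = 2 * math.pi * i / petals
        px = cx + 1.5 * r * math.cos(angle)  # petal center, offset from flower center
        py = cy + 1.5 * r * math.sin(angle)
        parts.append(f'<circle cx="{px:.1f}" cy="{py:.1f}" r="{r}" fill="pink"/>')
    parts.append(f'<circle cx="{cx}" cy="{cy}" r="{r * 0.8}" fill="gold"/>')
    parts.append("</svg>")
    return "\n".join(parts)

print(flower_svg())
```

Even something this simple beats a lone ellipse described as having "detailed petals."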
Came here to say exactly this. My conversation with Bard:
```
do you use Lambda model?
I use a variety of large language models, including the Lambda model, to perform my tasks. However, for each specific task, the best model is chosen based on its strengths and weaknesses. The Lambda model is particularly good at tasks that require understanding and responding to complex and nuanced language, such as generating creative text formats of text content and answering open ended, challenging, or strange questions. However, it may not be the best choice for tasks that require high accuracy or low latency.
Here's a breakdown of the models I use:
LaMDA: This is a family of Transformer-based neural language models specialized for dialogue. It is particularly good at understanding and responding to complex and nuanced language, and can generate different creative text formats of text content.
PaLM: This is another large language model, similar to LaMDA, but with a focus on factual accuracy. It is best suited for tasks that require precise and reliable information retrieval.
Meena: This is an older, smaller language model that was the precursor to LaMDA. It is still used for some tasks that require a more lightweight model.
Other models: I also have access to a variety of other specialized language models, such as those for summarization, translation, and question answering.
The specific model that is used for each task is determined by a variety of factors, including the nature of the task, the available resources, and the desired level of performance.
```