Using it as a coding assistant in VS Code was disturbing.
I connected the model to VS Code as a coding assistant and gave it this prompt:
@workspace
I'm new to this codebase.
Please:
- Summarize what this project does
- Identify the main entry points
- Explain the high-level architecture
- Highlight files I should read first
- Note any obvious risks or technical debt
Be concrete and reference files where possible.
It ran into permission errors accessing my project folder, and instead of warning me, it began scanning my entire disk, surfacing personal and confidential information unrelated to the project. Having to add instructions to every prompt just to keep it inside the workspace folders defeats the purpose.
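If the agent's file reads go through a tool layer you control, one workaround is a hard path guard instead of prompt instructions. A minimal sketch in Python, assuming a hypothetical `safe_read` tool and a `WORKSPACE_ROOT` of your own (these names are illustrative, not a VS Code or extension API):

```python
from pathlib import Path

# Hypothetical guard: confine the agent's file reads to the workspace.
# WORKSPACE_ROOT and safe_read are illustrative names, not a real API.
WORKSPACE_ROOT = Path("/home/me/my-project").resolve()

def safe_read(requested: str) -> str:
    path = Path(requested).resolve()  # resolves symlinks and ".." segments
    if not path.is_relative_to(WORKSPACE_ROOT):
        raise PermissionError(f"refusing to read outside workspace: {path}")
    return path.read_text()
```

Enforcing the boundary in code means a confused model gets an error instead of your home directory.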
I've never experienced this issue whatsoever. I am a bit skeptical of this claim.
It can be even worse; there are accidents like https://redd.it/1mbf9di
So if it didn't happen to you, it's not true? Wow, what a small world you must live in.
You would have more merit if you weren't the person who literally posted this claim saying that to me. lmao
I never said it's not possible, I only said I was a bit skeptical.
That made no sense, but that's obvious.
Whatever. Merry Christmas bruh
You too!
The training environment might be using a virtual machine.
I recalled something interesting.
In China, meals are usually served as many large dishes on the table, perhaps including salads, which are shared among everyone. Everyone uses their own utensils to pick up food and put it in their mouth.
Hence there's a generally accepted rule: you only pick up the food closest to you, that is, the food near the edge of the dish facing you.
Children, especially young children, might find it difficult to adapt to this "complex" eating custom. For example, a child might initially stir the food around in the dish with their utensils.
This is considered impolite to the other diners.
So some parents prepare a separate table of dishes for their children, served in small bowls, for them to eat individually.
If, by the time the child turns 18, the parents still haven't introduced them to this dining culture and keep preparing a separate table of dishes for them, an interesting phenomenon occurs.
The first time they join a meal with many people, they might behave inappropriately.
What you're seeing is this phenomenon: the model, possibly trained inside a virtual machine, doesn't understand what a project is or where its boundaries lie. In its world, everything inside the virtual machine belongs to it.
However, there's no need to worry. As long as your model deployment provider is reliable, even if the model reads unrelated content, it will only waste a few extra tokens.
The model still has a long way to go.
I think Chinese models don't have many problems on the algorithm side; even OpenAI praised DeepSeek's GRPO algorithm.
However, in terms of fine-tuning on human-written data, there's a significant gap: Chinese models rely more on data mining and lack RLHF (Reinforcement Learning from Human Feedback), which makes them less user-friendly.
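For context, the group-relative advantage at the heart of GRPO, as I understand it from the DeepSeekMath paper, is simple: sample a group of responses per prompt, score them, and normalize each reward against the group's mean and standard deviation, so no separate value model is needed. A minimal sketch:

```python
import statistics

def grpo_advantages(rewards: list[float]) -> list[float]:
    """Group-relative advantages as in GRPO: each sampled response is
    judged against its own group's reward statistics."""
    mean = statistics.mean(rewards)
    std = statistics.pstdev(rewards) or 1.0  # guard against a zero-variance group
    return [(r - mean) / std for r in rewards]

# Rewards for, say, 4 sampled answers to one prompt:
print(grpo_advantages([1.0, 0.0, 0.0, 1.0]))  # [1.0, -1.0, -1.0, 1.0]
```

Whether a model feels user-friendly, though, comes down to the human-feedback fine-tuning layered on top of this, which is exactly the gap being described.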