AbstractPhila (PRO) · AbstractPhil
79 followers · 100 following
https://civitai.com/user/AbstractPhila
AbstractEyes
AI & ML interests
datasets, research papers, experimentation, vision, classification, text encoders, tokenization, llms, diffusion, distillation, and more.
Recent Activity
- updated a model about 1 hour ago: AbstractPhil/geolip-vit-base-x3
- published a model about 1 hour ago: AbstractPhil/geolip-vit-base-x3
- replied to their post about 2 hours ago:
geolip-vit-x34: a 34-expert ViT. I can't train an extended version spanning 34 ViTs, but I can definitely run some experiments and produce starter weights with an anchor, which would yield a substantial amount of data. https://huggingface.co/datasets/AbstractPhil/bulk-coco-features

This is going to be an odd one to describe. Based on the research with BERT, creating a unified patchwork from a multitude of ViT composites is very achievable. It shouldn't collapse into soup, which is hard to explain: by introducing a second geometric anchor, the system should align in a way I can't predict without much more model analysis, so it has to be tested. I simply didn't test all of these ViTs for geometry, so this will be that test.

The dataset is essentially 34 directly extracted views of COCO, already prepared as feature data. With it, we have 34 experts that can be distilled into a single unified ViT. I'm hesitant to even call this distillation anymore; it's more interpolative data alignment, and it's absurdly retentive.

Additionally, we can anchor to a frozen geolip-bert and apply cross-contrast between the anchors to learn an anchor median, which will allow further integration directly into the geometric core. This will require a few overlapping internal mechanisms to guarantee ViT differentiation, but I believe the fully unified patchwork will be different from what is currently known as a ViT. geolip-bert-vit will likely be cooking within the month. The alignment statistics say it will be 100% accurate to the specifications.

I CAN prepare 34 ViTs' worth of ImageNet, but I would probably also need 34 ViTs' worth of LAION Aesthetics, which is substantially more than I currently have. In the process I would need to ensure nothing is corrupt and that the captions are correctly synthesized by our expert student BERT with the correct anchoring rotation. Probably 3 ViTs is enough for the full-version prototype, and 34 ViTs for the bulk experiment.
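The core idea in the post, many precomputed expert feature views aligned into a single student model with an additional frozen-anchor loss term, can be sketched roughly as follows. Everything here is an illustrative assumption, not the author's actual geolip code: the dimensions, the mean-pooled expert target, the cosine losses, the anchor weight, and all variable names are hypothetical stand-ins.

```python
# Hedged sketch: multi-expert feature distillation with a frozen anchor term.
# Random tensors stand in for bulk-coco-features and the frozen geolip-bert
# anchor; the student is a tiny MLP rather than a real ViT.
import torch
import torch.nn as nn
import torch.nn.functional as F

NUM_EXPERTS = 34   # one precomputed feature view per expert ViT
FEAT_DIM = 768     # assumed shared feature dimension
BATCH = 8

class UnifiedStudent(nn.Module):
    """Tiny stand-in for the unified ViT head: maps features into the shared space."""
    def __init__(self, dim: int):
        super().__init__()
        self.proj = nn.Sequential(nn.Linear(dim, dim), nn.GELU(), nn.Linear(dim, dim))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.proj(x)

def distill_step(student, optimizer, expert_feats, anchor_feats, w_anchor=0.5):
    """One step: pull the student toward the pooled expert view and the frozen anchor."""
    target = expert_feats.mean(dim=0)          # (BATCH, FEAT_DIM) interpolated target
    out = student(target)
    loss_experts = 1 - F.cosine_similarity(out, target, dim=-1).mean()
    loss_anchor = 1 - F.cosine_similarity(out, anchor_feats, dim=-1).mean()
    loss = loss_experts + w_anchor * loss_anchor
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

torch.manual_seed(0)
expert_feats = torch.randn(NUM_EXPERTS, BATCH, FEAT_DIM)  # stand-in for the 34 COCO views
anchor_feats = torch.randn(BATCH, FEAT_DIM)               # stand-in for frozen anchor features
student = UnifiedStudent(FEAT_DIM)
opt = torch.optim.Adam(student.parameters(), lr=1e-3)
losses = [distill_step(student, opt, expert_feats, anchor_feats) for _ in range(20)]
print(f"loss: {losses[0]:.3f} -> {losses[-1]:.3f}")
```

The mean over experts is the simplest possible "interpolative" target; a real run would presumably weight or differentiate the expert views rather than average them, per the post's note about mechanisms guaranteeing ViT differentiation.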
AbstractPhil's models (140) · Sort: Recently updated
- AbstractPhil/T5-Small-Human-Attentive-Try2-Pass3 · 60.5M params · Updated May 20, 2025 · 1
- AbstractPhil/T5-Small-Human-Attentive-Try2-Pass2 · 60.5M params · Updated May 19, 2025
- AbstractPhil/T5-Small-Human-Attentive-Try2 · 60.5M params · Updated May 19, 2025
- AbstractPhil/T5-Small-Human-Attentive · 60.5M params · Updated May 18, 2025 · 8
- AbstractPhil/SD15-Surge-V1 · Updated May 3, 2025 · 1
- AbstractPhil/Liminal-Full · Updated Apr 22, 2025 · 3
- AbstractPhil/omega-vit-l-reformed-fp32 · 0.4B params · Updated Apr 17, 2025 · 1
- AbstractPhil/SD35-SIM-V1 · Updated Apr 16, 2025 · 4
- AbstractPhil/t5xxl-unchained · Updated Apr 7, 2025 · 7 · 4
- AbstractPhil/SIM-OMEGA-PUBLIC-1 · Updated Apr 6, 2025 · 3
- AbstractPhil/Beatrix · Updated Apr 5, 2025
- AbstractPhil/omega-vit-g-reformed · Updated Apr 5, 2025
- AbstractPhil/OMEGA-BIGASP · Updated Apr 2, 2025 · 3
- AbstractPhil/PONY-SIM-V4 · Updated Mar 28, 2025 · 1
- AbstractPhil/SIM-V5 · Updated Mar 27, 2025 · 1
- AbstractPhil/SDXL-SIM-REFINER · Updated Mar 16, 2025
- AbstractPhil/SDXL-SIM_NAI-VPRED · Updated Mar 16, 2025
- AbstractPhil/SDXL-Simulacrum-V3-1 · 0.2B params · Updated Mar 3, 2025
- AbstractPhil/sdxl-interpolated · Text-to-Image · Updated Feb 10, 2025
- AbstractPhil/sdxl-interpolated-nai-xl-11 · Text-to-Image · Updated Feb 9, 2025