I've been working on a way to annotate in multiple views, with feedback for inconsistencies across views. There's still some way to go, but already it feels so cool after years of labeling only 2D views!
#neuroscience #annotation #PoseEstimation
Anyone here working on #PoseEstimation #animalbehavior #neuroethology ?
You may be interested in the free and open-source Python package I’m currently working on, together with @adamltyson @bd_peri and others.
It’s called movement, and it’s made for analysing the pose tracks produced by pose estimation frameworks, like #DeepLabCut and #SLEAP.
Website: https://movement.neuroinformatics.dev
GitHub: https://github.com/neuroinformatics-unit/movement
It’s still in early development but we appreciate feedback/feature requests.
Check out the detailed thread on our team’s mastodon account: @neuroinformatics
First masto post! Time to stop lurking in the background.
Does anyone here know of some good resources on video compression? How the various codecs work, what the relevant tradeoffs are, etc.
For context, I have some grasp on image compression and the fundamentals of digital signal processing, but video compression has so far eluded me.
I’ve found this blog post helpful: http://blog.loopbio.com/video-io-1-introduction.html. Any posts/papers/courses in that vein would be highly appreciated.
We launched our project page for 3D-MuPPET https://alexhang212.github.io/3D-MuPPET/.
A framework to estimate and track 3D poses of up to 10 #pigeons at interactive speed. We show that 3D-MuPPET also works in natural environments without model fine-tuning on additional annotations.
#MuPPET
#PoseEstimation
#3dpose
#tracking
#computervision
#collectivebehaviour
#UniKonstanz
#CBehav
#cv4animals
Check out our latest pre-print on #arXiv https://sciencecast.org/casts/8g1z2dov39ms
We present 3D-MuPPET, a framework to estimate and track 3D poses of up to 10 #pigeons at interactive speed. We show that 3D-MuPPET also works in natural environments without model fine-tuning on additional annotations.
Our paper Neural Puppeteer https://urs-waldmann.github.io/NePu/ was accepted to the Nectar Track at #GCPR23 https://www.dagm-gcpr.de/year/2023.
See you all in #Heidelberg.
What is the latest research in quantifying animal movement and behavior?
I wrote down a detailed overview here, based on what I saw at CVPR this year:
https://writings.lambdaloop.com/posts/cv4animals-2023/
Poster presentation (AM session) at #CVPRW2023 "CV4Animals" at #CVPR2023 in #Vancouver
Our annotated single #pigeon #data with RGB images and corresponding #keypoint #annotations is available now.
https://zenodo.org/record/7989831#.ZHcwO17P0uU
You can try it out with I-MuPPET.
We provide pre-trained weights for #pigeons to use with I-MuPPET. We hope that this will help to boost the study of #collectivebehaviour.
https://zenodo.org/record/7037589#.ZHcsFl7P0uU
https://urs-waldmann.github.io/i-muppet/
Our latest work "Neural Puppeteer" has been published: https://link.springer.com/chapter/10.1007/978-3-031-26316-3_15.
We estimate 3D keypoints from multi-view silhouettes alone, using our inverse neural rendering pipeline. This makes our 3D keypoint estimation robust to transformations that leave silhouettes unchanged, such as changes in texture and lighting.
Getting really jazzed for #SICB2023 as I'm working on my talk. I have no idea if anyone else is working with #PoseEstimation in a more eco-evo space, but I'm really excited by its potential for high throughput behavior analysis in a totally species-agnostic context. If anyone else playing with this technique is going to SICB this year, please hit me up so we can have lunch or chat!
Okay, #introduction time.
I'm in the middle of a weird disciplinary leap right now but #neuroethology might be the right name. I did my PhD at UT Austin on #EnergyBalance and social context as factors in decisions about whether to signal in singing mice, with a pretty standard #BehavioralEcology perspective; currently, I'm working with #SLEAP and #PoseEstimation on mouse models of neurodivergence in the Grissom lab at UMN, experimenting with #BigData large scale behavioral collection.