It does!
Getting Keycloak and Headscale working together took me three weeks, but I did it.
I captured my efforts in a set of interdependent Ansible roles so I never have to do it again.
I still have it on my Pixel 8 Pro. It requires a double tap to occur in less than 300 milliseconds.
This bug has been the bane of my existence for almost four years now: https://issuetracker.google.com/issues/204650736
The Android version of the app still has the zoom/cursor offset bug when using a software keyboard, which it picked up when they sunset RDP 8. That has been a severe usability bug for over three years now.
CSV only exports data, not formulas. I don’t really consider it a proper spreadsheet interchange format.
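To make the distinction concrete, here's a minimal Python sketch (openpyxl and the file names are my own illustration, not anything from the thread): an xlsx file stores the formula itself, while CSV can only ever hold the computed value.

```python
import csv
from openpyxl import Workbook

# xlsx: the formula string is stored in the cell and survives a round trip.
wb = Workbook()
ws = wb.active
ws["A1"] = 2
ws["B1"] = 3
ws["C1"] = "=A1+B1"  # saved as a live formula, not a value
wb.save("demo.xlsx")

# CSV: there is nowhere to put the formula; all you can write is the
# computed value (or the literal text "=A1+B1", which is no longer live).
with open("demo.csv", "w", newline="") as f:
    csv.writer(f).writerow([2, 3, 5])
```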
Gestures are not better for me and my situation. Please stop suggesting that I work against my better interests.
They are objectively slower and less precise, just to start with.
They still haven’t fixed the bug where the task switch button in the three-button layout becomes non-functional, four years on: https://issuetracker.google.com/issues/204650736
It’s a byproduct of the home and task switch buttons now being managed by the Pixel Launcher regardless of which launcher you use. The animation delay causes the button to become inactive, and it won’t be made active again until the Pixel Launcher is killed or the phone is restarted.
From my perspective, Google is losing interest in maintaining Android at all.
Thank you! I was struggling to remember the proposal name.
Google was working on a feature that would do just that, but I can’t recall the name of it.
They backed down for now due to public outcry, but I expect they’re just biding their time.
Not with this announcement, but it was.
It depends on the model you run. Mistral, Gemma, or Phi are great for a majority of devices, even with CPU or integrated graphics inference.
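As a rough sketch of what I mean (using the ollama Python client; the model tag and prompt are placeholder assumptions on my part, swap in gemma or phi as you like):

```python
# Minimal local-inference sketch with the ollama Python client.
# Assumes `pip install ollama` and an Ollama server running locally
# with the model already pulled (e.g. `ollama pull mistral`).
import ollama

response = ollama.chat(
    model="mistral",  # placeholder; gemma or phi work the same way
    messages=[{"role": "user", "content": "Why do small models run fine on CPUs?"}],
)
print(response["message"]["content"])
```

Even on CPU-only hardware, the small models respond at a usable pace.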
I’m also going to put forward Tilda, which has been my preferred one for a while because of how minimal the UI is.
Pixel Experience is unfortunately dead now. 🙁
Unfortunately, I don’t expect it to remain free forever.
No offense intended, but are you sure it’s using your GPU? Twenty minutes is about how long my CPU-locked instance takes to run some 70B parameter models.
On my RTX 3060, I generally get responses in seconds.
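If you want a quick sanity check, something like this tells you whether the GPU is even visible to your Python stack (a PyTorch sketch; assumes a CUDA build of torch):

```python
# Is a CUDA-capable GPU visible at all?
# A CPU-only build of PyTorch will always report False here.
import torch

if torch.cuda.is_available():
    print(f"GPU visible: {torch.cuda.get_device_name(0)}")
else:
    print("No CUDA GPU visible; inference is almost certainly running on the CPU.")
```

Even with the GPU visible, llama.cpp-style runners only use it for the layers you explicitly offload, so that setting is worth checking too.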
Or maybe just let me focus on who I choose to follow? I’m not there for content discovery, though I know that’s why most people are.
I was reflecting on this myself the other day. For all my criticisms of Zuckerberg/Meta (which are very valid), they really didn’t have to release anything concerning LLaMA. They’re practically the only reason we have viable open source weights/models and an engine.
That’s the funny thing about UI/UX - sometimes changing non-functional colors can hurt things.
They do, actually!