Even if you momentarily ignore the reasons why someone thought this could be a good idea, why not do it in one of the pre-releases or betas? It doesn't look like the kind of thing you'd want to do as a last-minute change.
The real problem in my opinion is the fact that you cannot go back after a macOS upgrade. So if something like this happens, you literally have no option but to wait for Apple to release a fix, if they want to do it at all.
code review: self-reviewed
test plan: this change is so obvious no tests are needed

The good kernel engineers are working on iPhone or Vision Pro, not on macOS
I do not have very kind words for Apple's dev teams today. Charitably I am trying to think that screwups happen, but this is bad and it is very hard to see how anyone thought merging it into an rc was okay.
Windows was a virus laden mess and was not useful for running Linux apps.
And besides the flaws of the other OSes, OS X had some of the nicest window management features (Exposé from Snow Leopard is still my favorite window switcher), was a UNIX, and had a thriving indie development scene (which was basically killed by iOS…).
Since then OSX has completely languished as a developer platform. It’s not clear what you can do today as a developer to make your life easier that you could not a decade ago on OSX. And in fact, the destruction of the indie dev scene, combined with the many heavy handed security restrictions of dubious benefit have made it a far worse dev environment than a decade and a half ago.
Further, Linux DEs have greatly improved and Windows now supports Linux development.
The Mac ecosystem has seen a complete turnaround where you now buy a Mac for the hardware, not the software.
1. Terminal is very usable compared to Windows cmd. Modern GNOME Terminal is good, though.
2. Cmd+C for copy, Ctrl+C for SIGINT.
3. Touch ID instead of the root password, which works with a Bluetooth keyboard as well, and that's with absolutely minimal configuration: uncommenting a single line.
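For reference, the single line in question is the Touch ID PAM module. On recent macOS versions the recommended place is `/etc/pam.d/sudo_local` (a template exists since Sonoma, so it survives OS updates); on older versions people added it directly to `/etc/pam.d/sudo`:

```
# /etc/pam.d/sudo_local — enable Touch ID for sudo
auth       sufficient     pam_tid.so
```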
However, at home I have always been a Windows/Amiga/UNIX head, with Linux being the cheaper path to that UNIX experience. Had Microsoft not messed up the POSIX layer, I probably would never have bothered.
For some time I even tried to acquire one of those nice Toshiba laptops running Solaris that Sun used to sell.
I'm always confused by such statements, because what KDE offers on Linux easily dwarfs every window management concept in every major OS. I always need to install additional third-party apps (e.g. Rectangle on macOS) to get a poor-man's equivalent of KDE-style window management functionality.
I often have to re-center the balance, it's driving me nuts.
Your explanation makes a lot more sense as x86 is probably the only time I’m pushing the cpu usage high enough.
The popping is darn annoying
The hardware is not even that good. I presume people like it because it looks slick and serves as a status symbol.
The list of specific annoyances and bugs is likely in the 3 digits by now, and I've only used it for half a year.
The worst of all was getting the M2 soft-bricked by an update, because I had changed the display refresh rate to 60 Hz, because the tween duration when moving between desktops was for some reason tied to the refresh rate: about a two-second tween at 120 Hz before input control returns, and one second at 60 Hz. Impressive for such a thing not to be picked up by QA.
It's not unusable if you at least have one of those medium density 4K monitors, but it feels like a step backwards if you're used to Windows which still (mostly) supports subpixel font rendering for crisp text at 100% scale, and can render natively at 125/150/175% scales.
I gotta disagree hard here. Macs have by far the most obnoxious and temperamental WiFi stack I've ever experienced. Constant disconnects, have to turn it off and on to get it to bother looking for APs again. All of them constantly trigger bad experience scores in UniFi.
Absolutely subpar compared to any of my Linux devices, even the raspberry pi jammed inside a metal box.
My opinion and experience is the exact opposite. In fact, I switched to Macs BECAUSE of how good macOS was for development and just general work and daily life.
Around 10-12 years ago I got an iPad, my first ever Apple purchase, as a gift for my aunt. I loved how simple and clean iOS was and found the apps and games interesting, so I thought I'd dabble in iOS dev. I was on Windows 8 at the time (and already sick of Microsoft's bs) so I downloaded a VMWare image for Mac OS X Lion.
As the days went by I found myself spending more time in macOS than in Windows, and enjoying it! A month later I bought my first ever MacBook and never looked back.
Well, sometimes I do look back at Windows, in a VM on macOS, just to try some games, and man, it's still a sad joke in 2024.
It's been a long time since I ran a MacBook, but this was my biggest problem. The weird uncanny valley where it's almost the same, but then not.
WSL has problems but there's a very clear line in the sand between Linux and Windows and you know what you're getting.
Tried updating to the latest Xcode, learned that my Mac's storage is almost full. Why? iOS simulator images were taking a whopping 40 GB of space even when I didn't target those iOS versions nor tested on those simulator devices. I uninstalled all the images, keeping the one I build for. Next tried updating Xcode again; the issue with creating Objective-C files was fixed. But then it forced me to download iOS 17.2 again along with tvOS and a bunch of other extra things. Now my space is close to full again. Why, Apple? Why do I need iOS 17.2 when I build for 15.4?
I have a $2000 AUD LG monitor that Mac OS just occasionally decides to overdrive (or something) and cause instant but temporary burn in. I'm not the only one - you can find others on Reddit.
While my work Windows laptop might be faster, it's certainly not the one I'm going to pick in a pinch or when I want to travel with just one laptop.
The best mobile configuration I know right now is a Macbook Pro + Parallels. Even with all of its deficiencies.
Are there any good Linux laptops with similar experience as Macbooks when it comes to power management and time from lid opening to usable state?
Macs are finicky with hardware (but HDMI sucks by definition; they think you bought the cable and monitor to pirate movies and not to do some work).
However, the GUI actually works, and if you spend a week on Windows 10+ you'll remember why people buy macOS.
Personally I have a Mac for stuff that requires a GUI and a headless linux box that I ssh into. And I switched to Macs from ... Linux on the desktop.
Edit: docker is shit because they just install a Linux VM and run their Linux stuff in there. Same on Windows I guess.
macOS is just so clunky. It tries to be so smooth all the time but just ends up being annoying.
Is the problematic corporate Mac an M-series Mac or Intel?
M-series Macs have been great in my experience. I used to get random full system crashes on Intel Macs, which haven't happened in a few years on M1/M2.
https://stackoverflow.com/questions/66408996/python-not-foun...
> using a third party second display
Has always worked fine for me
> Docker sucks
This is Apple’s fault how? It also sucks on Windows.
> posix compatibility is technically there but isn’t really useful
What does this mean exactly? Can you find an example where it’s not useful? In my experience most of the command line applications I would want on Linux are easily installable via Brew and I can choose all the same shell environments as Linux/Unix.
> The thing randomly loses network and only rebooting fixes it.
On your machine. Not my experience with any Mac I’ve owned. That isn’t expected or common behavior.
> I reboot my corporate Mac more often
I’m going to guess this is because your IT department sucks. I never reboot except for OS updates.
I'm finding this with software everywhere. Products keep doing the same old stupid shit they did when they were first released. "Refinements" are poorly-designed cruft.
Is there anyone in charge of the OS X experience? There seems to be a lot of résumé-driven development — features that can be illustrated with smiling people in a video but don't really work all that well — and not so much interest in the core UX.
I still find it better than Windows, but the gap between what it could be and what it is keeps growing.
The hardware definitely keeps getting better and yet the software keeps getting worse. sigh.
I mean they have even screwed up a nice app like iBooks. I used to use it for reading ePubs all the time, but now I dread opening up one. Lags like crazy. And so many crashes and reboots needed. Keep submitting crash reports but fairly certain that no-one ever reads them.
Yes, remarkably, today the Windows desktop needs fewer reboots than macOS. Can anecdotally confirm this with 2 Windows PCs, 3 Windows laptops and 3 MacBooks in the family.
It has been a pretty frustrating experience at times. Most of the time it's _fine_, but the problems after updates, Docker bugs, certain libraries that we cannot install..
On the other hand, it was never perfect with Linux either. But that was expected. And I can say that macOS does not deserve the reputation it has.
Overall, kind of a mixed bag. There are some very nice aspects to both the hardware and software, but some that are jarring and make me think "this is not really meant for professional users". Like the atrocious window management (which admittedly can be fixed with a couple of free applications).
One that says don't update macOS, to avoid breaking Java. Another that essentially says upgrade macOS to the latest version within x days or the issue will be escalated.
It is going to be quite a hassle for IT teams across companies to deal with this problem.
As Gale and Evelle bang in through the door. Evelle holds a
shotgun; Gale holds a shotgun in one hand and Nathan Jr. in
his car seat in the other.
GALE
All right you hayseeds, it's a stick-
up! Everbody freeze! Everbody down
on the ground!
Everyone freezes, staring at Gale and Evelle. An Old Hayseed
with his hands in the air speaks up:
HAYSEED
Well which is it young fella? You
want I should freeze or get down on
the ground? Mean to say, iffen I
freeze, I can't rightly drop. And
iffen I drop, I'm a gonna be in
motion. Ya see -
GALE
SHUTUP!
Promptly:
HAYSEED
Yessir.
GALE
Everone down on the ground!
EVELLE
Y'all can just forget that part about
freezin'.
GALE
That is until they get down there.
EVELLE
Y'all hear that?

Haha, this article is quite something :D
The Java applet plug-in was removed from the Safari browser. That is unrelated to Java apps running on the desktop.
> With macOS 14.4, when a thread is operating in the write mode, if a memory access to a protected memory region is attempted, macOS will send the signal SIGKILL instead.
What is bizarre to me is that Oracle relied on receiving SIGSEGV as a normal mode of operation. That should have been a hint of where things were going, no?
It's useful for other things as well. I've used SIGSEGV to emulate hardware interrupts. Normal execution wouldn't trap and there's no need for tests + branches (= normally no slowdown), but when an interrupt occurs a specific often accessed page is marked unreadable.
> Write attempts to memory that was mapped without write access, or any access to memory mapped PROT_NONE, shall result in a SIGSEGV signal.
>
> References to unmapped addresses shall result in a SIGSEGV signal.
How a SIGSEGV can be handled by the program so it continues execution normally needs some OS-specific code. On Linux there's also userfaultfd to suit this need better.
A JVM's use of SIGSEGV might include platform-dependent details for recovery. But for simple application usages (e.g. eliding inlined bounds checks in a performance critical loop operating on an array) longjmp can suffice for recovery. POSIX very carefully defines async-safety and longjmp to permit jumping out of a signal handler and resuming normal execution, provided certain constraints are met, such as that the signal did not interrupt a non-async-signal-safe function.
Not bizarre at all; this is how the runtime has always operated, as anyone who's ever attached a debugger to a Java process knows. The SIGSEGV handler is also responsible for handling NullPointerExceptions, IIRC.
> ... the JVM can intercept the resulting SIGSEGV ("Signal: Segmentation Fault"), look at the return address for that signal, and figure out where that access was made in the generated code. Once it figures that bit out, it can then know where to dispatch the control to handle this case — in most cases, throwing NullPointerException or branching somewhere.
https://shipilev.net/jvm/anatomy-quarks/25-implicit-null-che...
This means a workaround is running java with -Djava.compiler=NONE, no?
1. There is very little you can safely do in a signal handler. For a threaded application, that pretty much boils entirely down to setting a bit and leaving it at that. If they did anything more, the behavior is undefined.
2. The memory state of a program receiving a SIGSEGV is often undefined/garbage, and attempting to execute further at that point is at best unsafe, at worst tramples on state further, continuing execution in a broken state and destroying all evidence that would be useful for debugging — whereas a coredump preserves the state at the time the issue occurs.
There are cases where you need to catch SIGBUS, such as if an anonymous file has been truncated after you mmap'ed it.
The code in question takes into account that the value read might be garbage. See the big comment here: https://github.com/openjdk/jdk/commit/29397d29baac3b29083b1b...
On current CPUs and operating systems, this is not an optimization, so the code was removed earlier this year: https://bugs.openjdk.org/browse/JDK-8320317
You can actually do pretty much anything you want, it's just the C library that uses a lot of global state and internal memory allocations, which messes things up. The core syscall API and any reentrant code you write yourself are not affected.
> The memory state that a program receiving a SIGSEGV in is often undefined/garbage

That may be true for arbitrary segfaults caused by bugs, but the JIT has 100% control over which instructions it emits; it is not restricted by ABIs or platform-specific issues, so there is no problem using SEGV as a signaling mechanism.
It's mostly fine, though. The crashes are rare, and since everything auto-saves, you're not really losing anything. It's just an "oh, okay." moment.
Obviously it'll be good when it's fixed, but on my personal list of impactful bugs, this doesn't crack the top 10.
News like this is the major reason why I apply updates only after a long waiting period, to see if anything blows up for others. Why do companies use their userbase as testers?
But then you are accepting that you are running an exploitable OS, since you are lacking the latest security fixes. Not sure if that's an acceptable tradeoff.
As other posters said: macOS might have had an edge over Windows and Linux before, but that's no longer been the case for a few years now. I'll definitely be looking for ways to use a 5K display with my Linux laptop and will likely make a full transition to Linux in the next year or two.
Macs have amazing displays. So I'll use mine as thin clients I suppose. My eyes are happier with an Apple display so I'll use them for that alone.
Apple can still turn this around, but their bogus security claims, which serve mostly to annoy devs, are them shooting themselves in the foot and making themselves a very uncomfortable bed to sleep in just a few short years from now. Hope somebody at HQ understands that and is able to see the problem before too many people leave.
I suggest the Oracle blog as an alternative.
I thought it was clear, but I have replaced the "this" in my comment anyway.
This is why most enterprise workplace tech teams don’t roll out any OS level updates immediately. Regardless of whether they are on windows or macOS. Also a good idea to disable automatic updates on all devices that you use daily.
This is misleading. What was deprecated was the browser Java plug-in distributed by Apple. That’s very different from “deprecating Java”.
They basically bamboozled us with fancy wallpapers and gave us this immensely substandard software.
macOS Sonoma 14.4 introduces new emoji as well as other features, bug fixes and security updates for your Mac.
Emoji
• New mushroom, phoenix, lime, broken chain and shaking heads emoji are now available in the emoji keyboard
• 18 people and body emoji support facing the opposite direction
This update also includes the following improvements and bug fixes:
• Podcasts: Episode text can be read in full, searched for a word or phrase, clicked to play from a specific point, and used with accessibility features such as Text Size, Increase Contrast and VoiceOver
• Safari: Favourites Bar adds an option to show only icons for websites