We recently upgraded from 3.1 to 6, which was a total non-event. The code base that was around for Core 2.x is still the same one we have today. Some substantial changes were made to the web interfaces and hosting, but nothing in the business logic or data models was impacted.
We currently use Self-Contained Deployments to Windows machines, but there are only 2 minor methods stopping us from using a Linux image as well. Looking for a really good reason to make the jump, but I can't justify it to the business yet.
Not having to pay for Windows Server licences?
Better start-up performance, likely better overall performance, less resource usage.
Assuming those are Windows-specific calls, moving to something more cross-platform would open development to colleagues with Mac or Linux machines (via VS for Mac, JetBrains Rider, or VS Code), if there are any.
Options to use Kubernetes or deploy to container services that run on Linux
A few reasons.
Chances are that those licenses are paid for already so this is a sunk cost.
> Options to use Kubernetes or deploy to container services that run on Linux
I think this would be a hard sell - "hey, let us migrate platforms so we can rewrite our entire production infrastructure."
The many different versions were quite confusing. I am happy that they are now unified.
If you continue to use .NET Framework 4.8, your software will receive security updates until the heat death of the universe; too much, including Windows itself, depends on it. (Same, hilariously, with Visual Basic 6.) Microsoft's lifecycle for new runtimes is so short you're better off using a dead one.
.NET Core has too many caveats and too few selling points to get any reasonably competent desktop developers to move over, IMHO, especially considering the fact that every executable you ship to customers has a Microsoft imposed support death date.
There are significant performance improvements, and the new language features of C# depend on the later versions - that might not be such a big deal, but it will become more acute as the language improves.
I'm a Java dev, but I have also developed on .Net, and running modern .Net on Linux is about as simple as it gets.
We haven’t run into a customer that wants us to host for them yet (and probably won’t ever in our market of banking).
Just one thought worth looking at.
Every client gets a cookie with 256 bits of entropy which uniquely identifies their “device”. The server will then deal with this as required to guide auth flows.
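For illustration, generating a cookie value like that in .NET could look like the sketch below (the class and method names are made up; Base64 encoding is one choice of several):

```csharp
using System;
using System.Security.Cryptography;

static class DeviceCookie
{
    // 32 random bytes = 256 bits of entropy, Base64-encoded for the cookie value.
    public static string NewDeviceId()
    {
        byte[] entropy = RandomNumberGenerator.GetBytes(32);
        return Convert.ToBase64String(entropy);
    }
}
```

The server would issue this once per new client and key its auth-flow state off it.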
Good to hear as we need to take this step soon. The bigger hurdle for us is a number of production apps in DNF 4.x that also leverage DevExpress (god knows why). Incredibly costly to stand up in EC2 instances and the ROI just isn't there.
As a developer and systems engineer you should have been aware of that before starting your work. Now you are chained to the Windows world and real computing environments are out of reach for your project.
https://isdotnetopen.com/ goes over these in more detail, but I'd rather use an ecosystem where I don't have to worry that things will be progressively locked down in the future.
It seems like they spooked the community with this VSCode plugin thing but it also feels like not that big a deal. MS wants VSCode to interface with things they can sell along with the open source. Is that so bad? I get the worry that the MS support of OSS could dry up but that could be said for all kinds of OSS projects with corpo backers, no?
It's not strictly unreasonable for MS to offer a proprietary debugger, but it is bass-ackwards for the core .NET team to not offer an open source debugger or open standards compliant debugger interface to go with their open source language.
Providing the basics as open source and then charging a little for some extras should not be the end of the world. You see the same in the Jetbrains ecosystem where e.g. the community edition of Intellij is free and open source but they have a paid version with a lot of extras.
Paid products for developers are not as common as they used to be, but we are talking about tools that improve productivity a lot for people who make a lot of money using those tools. Developers (myself included) are oddly stingy when it comes to spending on software but at the same time think nothing of blowing a few thousand dollars on hardware.
And that one isn't usable outside of Rider either.
1) Don't trust Go because of Google
2) Don't trust C#,F# because M$ (yes with dollar sign)
3) Don't trust Java because Oracle
4) You gotta use XXX tech, which has no jobs in your area
Yada Yada... yet is probably using VSCode or GitHub. Just ignore it.
https://www.reddit.com/r/Minecraft/comments/vjpz2w/ingame_ch...
Fuck Microsoft.
We still have copilot case in front of us.
There is a good reason projects like the Rust language are dual-licensed under MIT and Apache 2.0: only the latter protects you from software patent claims.
So I wouldn't be so sure about MS not having any legal means to shut down forks. Even if they might not win in court, as long as they can make something up, they still have a big enough legal war chest to effectively shut down any unwanted fork just by threatening legal action.
Or please elaborate a little what you mean by "betraying".
Obviously, I have no idea what exactly MS will do, but after reaching some adoption target, MS will try to extract as much money as it can, because that's what they do.
The other posters in this thread mentioned the debugger is still proprietary. Anyway, I'm not touching this.
Microsoft is probably the most aggressively mediocre company in history, and the poor quality of everything has extracted an enormous lost productivity cost. Those of you old enough to remember IIS and early internet Microsoft know.
Even today, it takes five full seconds for my work Outlook webmail to load. Every person in the organization waiting for each task in Outlook n times per day adds up to a lot of lost productivity. And Outlook adds nothing to email that we haven't had since forever. Microsoft just took it over and made it slow and convinced the corporate types to use it. Rinse and repeat for everything they do.
I picked up and used XNA for the global gamejam 2012. It wasn't a terrible experience, so I went to use it again for 2013 but they'd since killed it off.
C#
JavaScript/TypeScript
Python
Edit: golang is small in terms of usage but is definitely on the way up
All the rest are in decline or remaining at a constant level of acceptance.
So if hiring / recruiting is important, use C# or TypeScript or Python.
If you want to cause yourself deep hiring/recruiting pain, build your systems with Ruby or use lesser known frameworks and languages such as Erlang/Elixir.
C# seems like an excellent choice for Linux-based development - it's mature, there's a vast talent pool, and there's vast knowledge and community resource for getting problems and questions answered.
It will always be big and always be around, but it's losing popularity.
I think at this point my last real major gripe with .NET is Visual Studio. I've used it on and off over the years, but every time I get back into it, it's just so much mental overhead to try and understand how the hell things are organized. It's like jumping into an ice bath coming from JetBrains/VS Code land. Just really unpleasant, but I don't think I'll be able to get away from it with this older C# code base.
Somewhat unrelated. I really wish Microsoft would improve the install/packaging story for their C++ build tools on Windows. Trying to guide your IT department on how to install specific compiler versions to get TensorFlow/PyTorch and more importantly other less popular Python packages that require you to build from scratch on Windows has been a nightmare for so long. It's one of the things I mostly singularly dread dealing with. I really wish we could just enable the dev/compiler tool chain support without requiring admin credentials and have it be fully automated instead of trawling around the nightmarish Microsoft downloads site.
For older applications, I would consider Mono or WINE if you really want to run it on Linux. We are experiencing the same scenario where we have an application that was built on .NET 4 over 10 years ago and there's a couple of packages we rely on that will not work on .NET 5/6. We have a rebuild planned, but it will be a large overhaul.
I don't understand - what's stopping you from using JetBrains Rider? I use it and I work on some pretty ancient .NET projects from time to time.
As an individual, I only have a little bit of time to tinker with personal projects, and while I like C#/.NET so far, I'm not yet at the point where I feel comfortable paying for Rider's subscription. So I'm also mostly using VS2022 Community and VS Code (on macOS) so far. Hoping to eventually get to a point where paying for Rider becomes a no brainer.
Not sure if this is what you mean, but vs_installer.exe supports command line arguments like --add, so you should be able to craft a giant command line for IT to copy. Used it for work for the NodeJS equivalent of your use case.
From the installer GUI on a machine with the correct set of components installed, click the “More ▼” button, then choose “Export Configuration” and follow the resulting prompts.
Alternatively, the installer’s “export” command-line operation does the same thing.
To use the resulting config file, either supply its path using the “--config” option to the installer’s “install” and “modify” command-line operations, or apply it from the GUI by clicking “More ▼” and choosing “Import Configuration”.
This is also handy in non-IT deployments to quickly replicate an existing VS install on a new machine.
Incidentally, even if you are going to go with the “giant command line” approach, the export mechanism is an easy way to figure out a valid set of component IDs to supply on said command line, though it doesn’t necessarily generate the smallest set of component IDs able to produce a given configuration, due to the presence of workload IDs and the fact that component dependencies are automatically installed, even if unspecified.
We are fully remote with flexible hours and offer 4 day work weeks.
The following packages have unmet dependencies.
libc++1-14 : Depends: libunwind-14 (>= 1:14.0.0) but it is not installable
libc++abi1-14 : Depends: libunwind-14 (>= 1:14.0.0) but it is not installable
E: Unable to correct problems, you have held broken packages.

See e.g. https://www.debian.org/doc//debian-policy/ch-sharedlibs.html which says
> The run-time shared library must be placed in a package whose name changes whenever the SONAME of the shared library changes. This allows several versions of the shared library to be installed at the same time, allowing installation of the new version of the shared library without immediately breaking binaries that depend on the old version.
In this libunwind case the situation is more complicated: its different versions have conflicting subdependencies.
Just thinking from a Ubuntu package level, where libraries live in packages separate from executables, it might be possible to create library packages using the result of ‘dotnet store’. Then executable packages could reference them when running ‘dotnet publish’. [2] That way multiple executable packages could share common libraries.
Replicating the packages of a language package manager (NuGet in this case) into a system package manager (apt in this case) is probably not fun. You will end up with dependency nightmares or one system package per version of a NuGet package. So maybe it’s not worth doing.
[1]: https://ubuntu.com/blog/install-dotnet-on-ubuntu
[2]: https://docs.microsoft.com/en-us/dotnet/core/deploying/runti...
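The `dotnet store` / `dotnet publish` workflow described above might be sketched roughly like this (all paths and project names here are made up; `dotnet store` consumes a manifest project listing the shared packages, and `dotnet publish` can then reference the store's generated artifact file so those packages are omitted from the app's output):

```shell
# Build a runtime package store from a manifest project listing the shared packages
dotnet store --manifest shared-packages.csproj \
             --runtime linux-x64 --framework net6.0 \
             --output /opt/dotnet-store

# Publish an app against that store; packages listed in the store's
# artifact.xml are left out of the publish output
dotnet publish MyApp.csproj -c Release \
               --manifest /opt/dotnet-store/x64/net6.0/artifact.xml
```

A library package could then ship the store directory, and executable packages would depend on it and set DOTNET_SHARED_STORE at runtime.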
The first part seems to add confidence that if the right apps/developers came along, Canonical themselves likely wouldn't have a problem putting them in the distribution. The second part would explain why there aren't just apps waiting in the wings for it and why "the community" (or at least certain vocal parts) may still not be as welcoming of .NET-based apps in the distribution this time around like it played out that previous time.
From a technical perspective it's pretty much a solved problem for any language ecosystem whose dependency management is reproducible, is uniform enough to support basic automation, and supports some measure of vendorization.
We are starting to work through it here: https://github.com/dotnet/source-build/discussions/2960. Any advice/tips/contributions would be welcome, I think.
I'm not sure going back to shared runtime libs is worth the headache but I guess if you were really resource constrained but running .NET it would work for you.
(And where the hell is support for .NET Core on FreeBSD???)
Also notice that our default build instructions are for Fedora: https://github.com/dotnet/installer#building.
I'm the post author.
About 2 months ago they crushed hopes that official FreeBSD support will happen anytime soon by clearly stating that at this point, updating their Quality Assurance process to support FreeBSD would not make business sense for them, as they have to put together an extensive test suite and manual testing procedures for every OS. Microsoft folks remain friendly and helpful in the thread though.
The FreeBSD ports system insists on requiring third-party applications to be buildable from source offline, while the .NET build process downloads packages left and right at various parts of the process. This is one of the things that complicates its inclusion in the ports tree. Building .NET for FreeBSD is close to rocket science; the community has worked to make the process way simpler. It is frustrating that the .NET team refuses to take things from there. Providing FreeBSD support is what I would expect from them to strengthen their cross-platform posture. This is the last major platform they need to support, and one being heavily used for servers.
As someone who is not a part of the FreeBSD community, I just wanted to chime in to say that the FreeBSD Ports people are absolutely correct here, and that the build time behavior you describe is Bad Behavior™ that is likely to get in the way of any distribution (Linux or FreeBSD or macOS or anything else) that follows basic best practices with respect to build sandboxing on their CI/CD systems, build clusters, etc., even if it can eventually be hacked around.
Building software outside of a sandbox with restricted network access is how you get lovely exploits like credential scrapers in your setup.py or your NPM install hooks, perhaps running as root. Downloading your dependencies at build time from within the build system, without first emitting a manifest with hashsums of what you intend to download, is a huge problem for reproducibility, too.
That would make a lot of sense, despite the backlash it would generate.
Remember when they acquired GitHub, developers were vowing to move over to GitLab. Yet very few did.
The same would probably happen if they acquire Ubuntu. Developers would vow to move to another distribution, yet few will, because Ubuntu is so much nicer to use than the competition.
I'll never trust Microsoft again after all they did in the 2000s. Transitively, I'll never trust any comparably sized company.
GitHub benefits from a powerful network effect: if you want contributions from the greatest number of developers, you need to be on the platform they use and understand.
Ubuntu doesn't have a comparable form of social lock-in. As long as they are popular, and until portable and/or containerized package formats for Linux mature, they do exert some network pressure on publishers, but not really on end users.
> Ubuntu is so much nicer to use than the competition.
I don't think this has been true for some time. Canonical's server offerings are pretty much coasting on the mindshare that they gained with current-gen Linux sysadmins and developers who grew up experimenting with desktop Linux in the aughts, when the usability delta between Ubuntu and other mainstream distros really was vast.
They still benefit quite a bit from their willingness to bundle proprietary software with the OS, and from sheer inertia on the desktop, where they are still most likely to receive native packages by proprietary software vendors.
But the collection of desktop operating systems that actually attract new users to the ecosystem, who will become the next generations of Linux sysadmins, is increasingly comprised of distros that are not based on Ubuntu. User-friendly Arch downstreams now have more and more of the mindshare that Ubuntu did when I was 'growing up'.
The most popular Ubuntu-based distribution, which has actually surpassed Ubuntu both in interest from new users and in its reputation for OOTB usability, deviates strongly from Ubuntu in some core technical aspects that make some Ubuntu knowledge non-transferable.
APT is aging poorly, even compared to its RPM-based counterparts. It has recently been at the center of some high-profile blowups where package installation triggers a catastrophic cascade of uninstallations that newbies would likely perceive as 'bricking' their installations. This has damaged the reputation of the most popular Ubuntu-based distros, including Ubuntu itself, as well as driving home the case for portable/containerized package formats.
The success and growth of these portable package formats, including Snap itself, additionally threaten to undermine Ubuntu's strategic advantage because they work pretty much as well on any distro as they do on Ubuntu.
Moreover, Canonical's own offering in that space is already driving users away from Ubuntu. Snap is slow, clunky, space hungry, and bandwidth hungry, and the way that Ubuntu has chosen to force Snap packages for key software, e.g., Firefox, has had a negative impact even (and perhaps especially) among non-technical users precisely because it has a shitty UX.
Meanwhile Snap has failed to gain much developer interest outside of Canonical, and looks likely to suffer many of the same adoption problems as previous Canonical offerings in the space of core system software like Upstart and Mir. It seems that Canonical has not figured out a way, in the past decade or so, to displace Red Hat as the most influential corporation on projects that require widespread, cross-distro adoption to succeed.
Similarly to the situation with portable packaging formats, there's a pretty clear trend in the wider Linux world away from relying on old-school package management 'raw' in favor of immutable operating system images. There are Ubuntu derivatives in this space, but the clear leaders are NixOS on the radical side and Fedora Silverblue for the more conservative approach that reinforces/enhances traditional package management tools with OSTree to gain some of the same benefits.
It may be that Canonical's advantage in the server space and among developers is 'sticky', now that Ubuntu is established in those markets, and that it's less ripe for the same kind of play that Ubuntu made by building mindshare first on the desktop. They may also catch up in some of these areas.
But for all of the reasons I outlined above, I don't think Canonical is in nearly as strong a position to retain its relevance as GitHub was, especially in the long term, in the face of one more thing that makes it less attractive to new desktop users for whom much of the appeal of running Linux in the first place is escape from Microsoft.
Why? There is nothing "Enterprisey" about .NET Core. It's a perfect fit for both fast-moving startups and big enterprises.
If "Canonical and Microsoft are committed to working together", probably the former.
[1] https://docs.microsoft.com/en-us/dotnet/core/tools/telemetry
- Alpine: https://gitlab.alpinelinux.org/alpine/aports/-/blob/master/c...
- Arch Linux: https://github.com/archlinux/svntogit-community/blob/package...
- Fedora: https://src.fedoraproject.org/rpms/dotnet3.1/blob/f36/f/cli-...
- RHEL: https://gitlab.com/redhat/centos-stream/rpms/dotnet6.0/-/blo...
- Ubuntu: http://archive.ubuntu.com/ubuntu/pool/universe/d/dotnet6/dot...
The homebrew build of .NET is the only non-Microsoft build of .NET that seems to keep it enabled (https://github.com/Homebrew/homebrew-core/blob/HEAD/Formula/...).
I am looking at the sources at https://packages.ubuntu.com/source/jammy-updates/dotnet6 and (as far as I can tell) they are applying a patch called 1500sdk-telemetry-optout.patch (originally from Fedora https://src.fedoraproject.org/rpms/dotnet3.1/blob/f36/f/cli-...) that is supposed to make telemetry opt-in.
There's a better fix landing upstream for .NET 7: https://github.com/dotnet/sdk/pull/25935
Pretty cool and interesting that a big linux vendor is on board with .net.
That being said, I've used .Net and C# for over a decade and it's frankly just a clunky language and toolset. It's gotten better with Core (now just .NET), but it's still just not there. It's not a secret that we've been doing more and more TypeScript where I work, but I recently did a little bit of Go, and Gorm made EntityFramework feel like something from the stone age.
So I'm not sure I'd really recommend it unless you already do a lot of .Net. It's not that it's bad, it's just that it's a dated way to build things in a world where the Java way of doing things makes less and less sense.
Once I disable the Entity Framework object cache globally (AsNoTracking), it becomes close to the ideal db toolkit for me.
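For context, the global version of that switch (rather than calling `AsNoTracking()` per query) looks roughly like this in EF Core; the context class and connection string here are hypothetical:

```csharp
using Microsoft.EntityFrameworkCore;

public class AppDbContext : DbContext
{
    protected override void OnConfiguring(DbContextOptionsBuilder options)
        => options
            // hypothetical provider/connection string
            .UseSqlite("Data Source=app.db")
            // make every query no-tracking by default: entities are not
            // registered in the change tracker, so reads are cheaper
            .UseQueryTrackingBehavior(QueryTrackingBehavior.NoTracking);
}
```

Individual queries can still opt back in with `.AsTracking()` when you actually need change tracking for updates.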
I see a lot of strings in Gorm docs. Those are probably not checked during compilation. This alone differentiates Gorm and EF quite a bit. https://gorm.io/docs/create.html
Not wanting to get in a debate on whether you should use an ORM, but I just don't see anything really special about Gorm that makes the other frameworks "ancient".
When I look at this page: https://gorm.io/docs/advanced_query.html I see the same exact crap that makes me wary of JPA Criteria API.
[0]: https://github.com/bflattened/bflat
I am not a vs/c#/.net dev but I tried it once on a windows machine and thought it integrated well and seemed to be a really nice DX. Probably the nicest DX for an "enterprise" stack.
Other than that, it is smooth and in many advanced scenarios Linux has better support than macOS, unfortunately (still decent, just has quirks).
Also, check out Native AOT in the upcoming .NET 7 because it is really nice for building small and lightweight CLI utilities in .NET and just shipping them to users like you would if they were written in C++ or Rust.
That seems untrue. On the Cloud side every .Net shop I know from startups to government/enterprise are now deploying to Linux or Linux containers.
On the developer side 75% of .Net devs are on Visual Studio on Windows but a big and fast growing segment are using JetBrains Rider on Mac/Ubuntu.
Not everyone who uses Azure is a "Microsoft shop", in terms of programming tech stack. Plenty of people use Azure as a cross-cloud redundancy play, or because they do business with Amazon competitors that refuse to have their data on AWS.
It's the reverse in AWS, they have a bit more Windows market share than Linux. And he also backed up your statement.
His take (which was just his speculation) was that the virtualization stacks ran those OSes more efficiently: Hyper-V with Linux VMs and KVM with Windows VMs. I have no idea if that's true or not.
If you plan on shipping server software, I expect it to run on Linux, particularly in a container.
.NET definitely has first-class support on both Linux and Windows.
https://learnxinyminutes.com/docs/csharp/
It looks more like Java than C++ to me.
On a different note, I will state this again: I think microsoft will acquire Canonical one day.
- It supports structs, and non-boxable structs called "ref structs". They let you avoid GC/heap allocation for small data structures with clear ownership semantics.
- It uses type reification for generics instead of type erasure. Combined with value-types, this can produce very optimized code. The downside is larger binary sizes.
- It allows unsafe programming with pointer arithmetic where performance is truly critical. This is where C# is closer to C/C++ than Java.
- Supports native compilation to a target architecture. This lets you avoid JIT overhead and lets you ship self-contained binaries (like Go).
- .NET supports source generators, which let you keep your code maintainable while generating optimized code from it.
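As a small illustration of the struct/ref-struct point: `Span<T>` is the best-known ref struct, and `stackalloc` lets you work on stack memory with no GC allocation at all (a minimal sketch; the method names are made up):

```csharp
using System;

static class SpanDemo
{
    // ReadOnlySpan<char> is a ref struct: it can view string or stack memory
    // without any heap allocation, and the compiler keeps it off the heap.
    public static int SumDigits(ReadOnlySpan<char> text)
    {
        int total = 0;
        foreach (char c in text)
            if (char.IsDigit(c))
                total += c - '0';
        return total;
    }

    public static int SumSquares()
    {
        Span<int> squares = stackalloc int[4];  // stack memory, no GC pressure
        for (int i = 0; i < squares.Length; i++)
            squares[i] = i * i;

        int sum = 0;
        foreach (int v in squares)
            sum += v;
        return sum;  // 0 + 1 + 4 + 9
    }
}
```

Strings convert implicitly to `ReadOnlySpan<char>`, so `SumDigits("a1b2c3")` works without copying the string.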
Performance aside, C# is a very pleasant language to use, and .NET is a delightful runtime. I'd say give it a try and see it for yourself.
The main target is enterprise software systems and web backends. Asp.Net is a very mature framework for Web APIs and MVC applications.
Entity Framework is a decent ORM solution supporting several popular DBs with tools for code generation and migrations, and LINQ is a complementary query language that is useful for in-memory collection operations as well as db queries.
And I think the largest value is in the NuGet software library ecosystem, things like MassTransit for service bus and queue communication and many many others.
It’s a well-supported ecosystem of first party and third party software with a lot of modern features, but definitely still in the realm of managed runtimes so not a competitor to C, Rust, etc.
*ASP.NET Core
Microsoft relies on "fire and motion"¹ to sell to developers, and typically reimplements instead of working with open source projects.
Recent example: only Visual Studio 2022 supports .NET 6, time to upgrade.
There's also https://github.com/dotnet/maui from Microsoft but it's not mature yet.
The times sure have changed
I don't have data; however, from what I see/read, there is a tendency for developers/companies using .NET environments to just go for Azure. It looks like it all comes as one big package, while it doesn't have to be like this, but hey, that's how it works, especially in enterprise companies.
How much does/did it cost to port .NET to Mac/Linux? Compared to what you gain just by having a few customers in the cloud, maybe crumbs: provide a good dev environment and people will clearly opt for your cloud services.
From the .NET Core 1.0 announcement made in June 2016: https://devblogs.microsoft.com/dotnet/announcing-net-core-1-...:
> Today we are at the Red Hat DevNation conference showing the release and our partnership with Red Hat. Watch the live stream via Channel 9 where Scott Hanselman will demonstrate .NET Core 1.0. .NET Core is now available on Red Hat Enterprise Linux and OpenShift via certified containers.
Maybe because it's not a popular language, but the F# experience for me has been bad - I tried it two years ago with F# 5, and recently with F# 6. The documentation in both cases ranged from immature/inconsistent to broken.
The usual features used to sell the language (e.g. type providers) I feel are oversold, and tbh they're not the main reason F# users like the language - in fact I think TPs need an overhaul and are a distraction. Its real strengths are inlined code for math (which is just getting an equivalent feature in the next C#), unions/records together, functions and type inference, and a code style that leans toward compile-time rather than runtime dispatch. More than features, I've found that on large-scale projects it's just easier to spot bad code in F#: fewer bugs have made it to production when .NET teams have tried it, and less of it makes it to code reviews.
I've found F# tooling, at least for VS Code, more stable than the C# equivalent. But that's not saying much - most language plugins for VS Code don't feel that stable to me, apart from JS (e.g. the Java one used to crash on me all the time). Visual Studio, Rider, etc. are options.
Documentation should be improved sure especially for people getting into it; but it is the smaller language and documentation goes out of date quickly. This obviously penalises the new starter without a mentor/senior dev to teach them in the job/elsewhere.
Early on one of the big selling points was reading all those type provider articles. Then I've tried it a couple of months ago and it was such a broken mess - not worth the time. From what I've read this isn't just my experience.
With record types, switch expressions, top-level statements, file-scoped namespaces, global usings, etc., C# removed a lot of cruft. Roslyn source generators actually work, unlike type providers. If they improved the REPL/scripting aspect of C#, I think there would be very few arguments left for F# other than catering to the FP crowd who like to |> write |> like |> this instead of this(like(write(to(like))))
Anybody knows if this affects the ease of installing on Debian 11 in any way?
MS provides packages for Debian. https://docs.microsoft.com/en-us/dotnet/core/install/linux-d...
[Service]
Type=simple
ExecStart=/usr/bin/dotnet /some/path/myapp.dll
WorkingDirectory=/some/path/
Restart=always
User=someone
Group=someone
[Install]
WantedBy=multi-user.target

For DigitalOcean I followed this post, which is probably way out of date now: https://www.hanselman.com/blog/publishing-an-aspnet-core-web...
For the k3s site the source is here https://github.com/EliotJones/LetsShip/blob/main/kubernetes/... though worth noting I have set up LetsEncrypt incorrectly but that's my lack of k3s understanding.
https://docs.servicestack.net/ormlite/litestream
If you don't want to use Docker, you can also easily deploy to Linux using rsync + supervisor:
Literally 'dotnet {AppName.dll}' and that's it.
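For reference, a minimal supervisor program entry for that might look like the fragment below (all paths, the app name, the user, and the port are made up):

```ini
[program:myapp]
command=/usr/bin/dotnet /var/www/myapp/MyApp.dll
directory=/var/www/myapp
user=www-data
autostart=true
autorestart=true
environment=ASPNETCORE_ENVIRONMENT="Production",ASPNETCORE_URLS="http://localhost:5000"
stdout_logfile=/var/log/myapp.out.log
stderr_logfile=/var/log/myapp.err.log
```

rsync the publish output into the directory, `supervisorctl restart myapp`, and you're done.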
Tbh I'm surprised they haven't been acquired by Microsoft yet. They're clearly aiming straight for it.
I probably would do the same if I were them, too.
Seems best avoided by any company which is not already a Microsoft shop.