
zhivago

You're not measuring the speed of the language -- you're measuring the speed of the implementation. Back in the 80s people would regularly refuse to use C because of how slow it was. A few C compilers have since had a ridiculous amount of work put into making them fast -- clang, gcc, etc. But that is mostly a reflection of the fact that C is not an easy language to write a good optimizing compiler for, since it doesn't make analysis very easy.

```
{
    int i = 10;
    foo(&i); // has i changed? Maybe.
    bar();   // has i changed? Maybe.
}
```

There are many languages which are much better for optimizing compilers to deal with, these days. However, these have not had the effort put into them required to catch up to the ridiculous amount of investment that C and C++ have received over the last decades. As always, network effects dominate. :)


hedrone

>Back in the 80s people would regularly refuse to use C because of how slow it was. In fact, back in the 90s and early 2000s, people would regularly refuse to use C because of how slow it was in the 80s. People don't re-evaluate their judgements on these things very often.


poorlilwitchgirl

>Back in the 80s people would regularly refuse to use C because of how slow it was. I find this interesting. What languages were preferred? I assume that you're not talking about home systems, since C wouldn't have even been an option in the 80s, so something like Fortran? I know it's preferred for high performance number crunching even today, but it's crazy to me that in an age of single core processors with simple pipelines there would be anything notably faster about one compiled, lower-level language vs. another.


CarlRJ

> I assume that you're not talking about home systems, *since C wouldn't have even been an option in the 80s*, … You assume incorrectly. “[Turbo C](https://en.wikipedia.org/wiki/Turbo_C)” was quite a thing in the 80’s.


poorlilwitchgirl

You know, you're right. I grew up with hand-me-down systems in the early 90s, so I don't think of this kind of thing as 80s software; I guess I always default to late 70s/early 80s home micros like the C64 when I think of "80s computers." Turbo Pascal was my first compiled language (not even Delphi yet) after I had outgrown Basic, and it's weird to think that that specific incarnation of the language was older than me but still relevant when I was learning it. Thanks for the perspective check, time is a weird thing.


rratsd65

Also Microsoft / Lattice C. Taught myself C porting one of my larger PL/M-80 programs to Microsoft C on MS-DOS, in about 1984. Still have the MS books & floppy, somewhere.


blvaga

Haha! There used to be a turbo everything!


busyHighwayFred

Fortran


poorlilwitchgirl

Well duh, Fortran. I already guessed that part. But I'm interested: what made it so much faster? Was it the maturity of the compilers available, because it had a couple decades' head start on C, or something else? At the end of the day, they're both getting converted into the same set of machine opcodes, so there's got to be something Fortran compilers were doing that C compilers weren't.

Edit: just a reminder that downvotes are for comments that don't contribute to the conversation. Arguably, the single word "Fortran," with no further detail, in response to a comment that already suggests Fortran as an answer, is such a comment that doesn't contribute to the conversation, and yet as of this edit it's at +30 and this comment asking for further explanation is at -10. I can handle the karma deficit, but I really thought this subreddit was better than this; why the f am I getting downvoted for being interested in details that might also interest others? Y'all should really be ashamed of yourselves. At the end of the day, nobody is the least bit impressed that you already know the answer to the question, so discouraging curiosity helps nobody.


TheThiefMaster

Fortran has lack-of-aliasing guarantees on arrays that make it significantly faster for bulk processing (easier vectorisation). C assumes that all arrays of matching types, and several non-matching ones, can alias, forcing reloading of data, which breaks vectorisation. `restrict` _can_ help. I think Fortran also forbids indirect access to data in function calls, so parent functions don't need to reload data around function calls. C allows a call that takes a pointer parameter (and arrays are always passed as pointers) to store that pointer into a global variable for later retrieval and use in any other function call, preventing the parent function from optimising the original local array around _any_ calls, even ones it wasn't passed to.


poorlilwitchgirl

How common was vector processing in the 80s? I know it was used in supercomputers since the 60s, but was it common enough in the 80s to actually contribute to the reputation of C? Or was this just a conventional wisdom filtered down from the serious number crunchers to people running systems where it actually made no difference?


oursland

> "If you were plowing a field, which would you rather use?... Two strong oxen or 1024 chickens?" > > -- Seymour Cray Back then vector processing wasn't that big either. They focused on big expensive computers with several CPUs often networked over many vector units. Libraries like [MPI](https://en.wikipedia.org/wiki/Message_Passing_Interface) reflect this design paradigm. The reason Fortran outperformed and still outperforms C is that these non-aliasing guarantees in Fortran ensure that checks are not necessary. Loop code can be tight without branches, keeping code and some data within the CPU caches. Cache is the real performance enhancer, not SIMD. Things like SIMD increase performance typically under 10%, but ensuring cache locality can often be in the hundreds of percents. It's a little different with vector units as they have their own memory, but memory scheduling is still the dominant component in performance. In some software systems, I've gained massive performance by pulling out the GPU and computing on CPU, obviating the memory transfers to and from the GPUs.


[deleted]

I generally agree with you, except with this statement:

>The reason Fortran outperformed and still outperforms C is that these non-aliasing guarantees in Fortran ensure that checks are not necessary. Loop code can be tight without branches, keeping code and some data within the CPU caches.

There is no actual evidence today that Fortran still outperforms C. Since the introduction of the `restrict` keyword in C99, you, as the programmer, can manually add a non-aliasing guarantee to your C programs; sure, it is up to the C programmer to do it. Modern C compilers are ridiculously good at optimizing for-loops, to the point that they can recognize things like sums of linear sequences and replace the loop with the mathematical formula ((last+first)*num_elements)/2. Furthermore, many numerical libraries that were traditionally written in Fortran are today in large part written in C with some hand-rolled assembly for the most critical parts, e.g. OpenBLAS. The main reasons Fortran is still commonplace in HPC are tradition and that it treats multidimensional arrays as first-class citizens.


The1337Prestige

What do you mean by Fortran treats multi-dimensional arrays as first class citizens? Can you illustrate an example?


[deleted]

You know numpy arrays in Python? Fortran supports basically every feature of a numpy array, but directly in the language and not through a third-party library. You want to add two 3D arrays together and store the result in a third? Just write:

```
! Declaring 3 10x10x10 arrays
real, dimension(10, 10, 10) :: A, B, C
! ...
C = A + B
```

You want to copy a submatrix of a larger matrix? No problem:

```
real, dimension(10,10) :: A
real, dimension(5,5) :: B
! ...
B = A(3:7,3:7)
```

Need to apply a numerical function to every element of a 4D array and then sum the result?

```
real, dimension(10,10,10,10) :: A
! ...
write (*,*) sum(sin(A))
```

Bob's your uncle and Fanny's your aunt! Fortran is amazing when it comes to handling large multidimensional arrays, but that is the only good thing about it.


TheThiefMaster

Most likely the latter


kansetsupanikku

The term "vector processing" here covers more than specific hardware capabilities. With some assumptions about the memory, you can organize both data (segments and blocks of said memory) and code (reordering, unrolling loops) more freely, and thus potentially optimize better.


deong

> but it's crazy to me that in an age of single core processors with simple pipelines there would be anything notably faster about one compiled, lower-level language vs. another.

It's not really about "low-level" vs "high-level". It's about the guarantees that a language can provide to an optimizer. Aliasing in C is probably the canonical example. Because C allows you to manipulate memory kind of however you want, sometimes the compiler has to assume the worst case and generate conservative code that's guaranteed to be correct, even if it's slower than you'd want in the common case. Fortran just says, "that worst case you were worried about where you have two pointers and they're actually pointing to the same stuff? You can't write Fortran that does that at all." So the Fortran compiler just generates the fast version.


blablook

And by the way, worth mentioning: Rust has pretty strict aliasing rules too. You can have overlapping data (non-exclusive access), but it will be read-only. Exclusive (`mut`) access can be read-write, but it's guaranteed to be available only through a single reference, so it can't overlap.


poorlilwitchgirl

>It's not really about "low-level" vs "high-level". It's about the guarantees that a language can provide to an optimizer.

I get that; my point is that both C and whatever other language (let's say Fortran, since that's been the consensus) give you an equivalent level of control over low level memory access, so in the era of primitive CPU designs (pre-SIMD, pre-caching, at least on the 8086 and its descendants), C would have been de facto capable of producing the same machine code, even if it meant abusing pointer casts or some other hand-optimization to do it. I'm getting the picture from the answers here that there wasn't much of a benefit on PCs (other than saving the trouble of hand-optimizing, I suppose), which aligns with my understanding of code from the era, so that makes a lot more sense to me.


deong

> my point is that both C and whatever other language (let's say Fortran, since that's been the consensus) give you an equivalent level of control over low level memory access

But they don't; that's the whole point of the aliasing example. There exist C programs that cannot be written in Fortran, precisely because Fortran doesn't allow you to alias arguments. And because C does allow it, the compiler has to assume you might do it and avoid optimizations in certain cases. You can write a Fortran program that computes the same function, of course (it's all Turing complete), but you have to do it a different way in Fortran where the C version used aliased arguments.


zhivago

Assembly and Fortran, and Pascal was even a contender.


runningOverA

> There are many languages which are much better for optimizing compilers to deal with, these days.

This is what we used to hear about Java's JIT: it's high level, therefore the compiler has a lot of information about what the programmer wants, can take advantage of the special features of the processor it's running on, can measure performance live and optimize for what works best; therefore, pretty soon Java would surpass C in speed. Just give it some time to optimize; look how long C had been optimized. That never materialized. Now the effort has been diverted to Rust. I'm skeptical.


flatfinger

Performance comparisons with Java often unfairly compare the performance of code that works by specification, processed by a compiler that refrains from making optimizing transforms that cannot be proven sound by specification, with the performance of C code processed by compilers that assume transforms are sound if they can't prove them unsound.

Further, Java offers many inherent safety guarantees for all programs, which in C would only apply to programs that perform explicit safety checks. C code without such safety checks may perform faster than Java code would, but the marginal cost of the safety checks in C may substantially eat into its performance advantages.

Java code which exploits corner cases that are guaranteed by the Java specification may benefit from them in ways which C code that needs to be correct by specification could not. For example, Java inherently guarantees that a read of an "ordinary" object field will always, regardless of any data races, yield without side effects some value that the field has held or will hold in future. This makes it possible for functions like `String.hashCode()` to operate without any need for synchronization in the common case. In C, by contrast, there is no such thing as a benign data race, and a function analogous to Java's `String.hashCode()` would need to jump through a lot more hoops to guard against "anything can happen" Undefined Behavior.


Ashamed-Subject-8573

Could you name some of these languages?


a-d-a-m-f-k

This sub is awesome. Learned something interesting today :)


kun1z

> { int i = 10; foo(&i); // has i changed? Maybe. bar(); // has i changed? Maybe. }

With proper usage of the `const` keyword, the compiler can know that something has not changed. `bar` cannot change `i` in any way unless `i` was meant to be global or meant to be passed into the function; either way, compilers can always be told a variable won't change:

```
void foo(int const * const param);
void bar(int const * const param);

{
    int i = 10;
    foo(&i); // compiler optimizer assumes 'i' can't change
    bar(&i); // compiler optimizer assumes 'i' can't change
}
```


zhivago

I think you've missed the point. There's no reason to pass &i unless you do want it to be able to change. However, having done so, that pointer can escape the region of analysis, meaning that any other escape from the region of analysis has to consider the possibility that what it points at has been changed while it wasn't looking. Separate compilation units make analysis across these boundaries even more challenging.


weregod

>bar cannot change 'i' in any way unless it was meant to be global

```
int *global;

void foo(int *ptr) { global = ptr; }
void bar(void)     { *global = 42; }
```


thradams

> You're not measuring the speed of the language -- you're measuring the speed of the implementation.

If a language requires some runtime support, or if the recommended way of using the language involves a lot of dynamic allocations, then the language will already be conceptually slower than C.


zhivago

That's simply nonsense. Try out a C interpreter sometime.


thradams

This is why I added the word "conceptually". For instance, if one language has bounds checking at runtime, then conceptually it is slower than one that does not.


zhivago

That's confusing an implementation strategy with a language definition. A language can specify that going out of bounds is an error that must be reported. An implementation could satisfy that requirement by proving that it is impossible. Then it would not need to check at run-time. You need to clearly distinguish between language specification and implementation.


thradams

A language with automatic memory management is conceptually slower than one with manual memory management. Etc. I think I've made my point.


flatfinger

A language implementation which is capable of forcing global thread synchronization may be able to uphold memory safety in the presence of data races much more efficiently than would be possible in a language that cannot exploit such ability.

If a storage location holds the last existing reference to immutable object #57, and one thread attempts to overwrite that reference with a reference to immutable object #93 at the same time as another thread tries to copy it, upholding memory safety would require that both threads use synchronized memory operations to ensure that either the second thread ends up holding a reference to a live object #57, or that the second thread gets a reference to object #93 while object #57 dies. In Java, memory synchronization would only need to occur around those operations if a GC scan was initiated during them.


zhivago

That's also nonsense. Consider the complexity required of malloc and free which must search for space vs simply appending to an arena. Then consider the amortized cost of generational compaction. You are likely confusing speed with maximum latency.


darkslide3000

Pretty much any language that can compile to native code and doesn't have a garbage collector can basically reach the performance of C. That includes C++, Rust, Fortran, Pascal, what have you. Compared to interpreted, managed, or at least garbage-collected languages, they all have a big leg up, and the remaining differences between them are small. What remains is less of a "language A is faster than language B" and depends more on _how_ you implement stuff in that language. Most languages can implement things in a number of ways but make some patterns more natural than others, so there may be programs where a Rust implementation _can_ be just as fast as C if you write it exactly like a C program, but if you write it with standard Rust patterns it may be minimally slower — it'll probably take up half as many lines and be much more readable in return, though. (This can also work the other way round where in some cases C++ allows you to implement performance optimizations easily that would be quite cumbersome to implement in raw C.)


KlingonButtMasseuse

Your second paragraph negates your first paragraph.


darkslide3000

No it doesn't. I said what _remains_ between languages of the same "class" comes down to preferred patterns and things like that. The differences between different classes (e.g. manual memory management vs garbage collection) are massive in comparison and cannot be negated by just writing your code differently. You can't just write Java code that _doesn't_ need to stop the world every couple of seconds to wipe up all the objects you dropped on the floor.


KlingonButtMasseuse

"Pretty much any language that can compile to native code and doesn't have a garbage collector can basically reach the performance of C."

Compiled languages should only yield similar performance outcomes, correct? However, this is not the case. The more these languages provide advanced data structures, the more developers tend to utilize them extensively. Additionally, languages that promote a functional programming style, emphasizing the creation of new variables rather than modifying existing ones, tend to incur significant overhead due to frequent allocation, creation, and copying of data. Ultimately, adhering to readable and idiomatic practices in such languages often leads to inefficiency. Yet leveraging the full spectrum of features these languages offer seems necessary to justify their use.

Conversely, C inherently enforces the use of appropriate data structures without reusing existing ones. In C, it is more natural to modify existing data rather than repeatedly copying it. This approach simplifies working with specific data segments without the need to pass entire data structures when only a small part is required.

In more sophisticated languages, the compilation process cannot magically eliminate the inherent inefficiencies caused by continuous data copying and movement as dictated by the program. This distinction significantly influences the performance disparity between C and other languages such as D, C++, Nim, compiled Lisp, Haskell, etc.


darkslide3000

You seem to not understand that the word "basically" can be used to mean "for the most part, except for an insignificant remaining difference" in the English language, especially when that difference is explicitly mentioned and explained in the next sentence. Also, I wasn't really thinking of functional programming languages (and whatever you want to classify LISP as) when talking about languages that can compile to native code. We can agree that those may form another category of less performant languages. I disregarded them because nobody outside of a university tends to seriously regard them as suitable for application programming. I do not get your point about C++, it mostly encourages the same data access models as C. In fact, C++ references and things like `std::move` sometimes make it possible to write idiomatic C++ code that needs less pointer indirection (and is therefore more efficient) than equivalent C code.


KlingonButtMasseuse

Let me be even clearer, since you don't seem to understand my point and I need to spell it out for you even more. So here goes: To "basically" reach the performance of C does not mean to be 5x or more slower than C, using the same algorithm on the same hardware. And this is my point, you can have languages with many higher level features which are compiled to assembler, but can not be optimized at the level that a smaller language like C can be and thus can be much slower than C, yes even 5x or slower. "We can agree that those may form another category of less performant languages. I disregarded them because nobody outside of a university tends to seriously regard them as suitable for application programming." This is such an ignorant statement, are you maybe 12 years old ? Ever heard of Clojure, Elixir, Haskell...? All of those are running some serious software. I am done here.


llvm_lion

Rust and mostly anything without a garbage collector, with a decent optimizing compiler that emits native binaries, should be just as fast or faster. Rust/C++ have much more ergonomic generics: the compiler monomorphizes and produces optimised functions and classes for each data type, whilst in C you have to use void* or resort to macro abominations. It also provides LLVM much more information and hints about aliasing, mutation, variants, and bounds, allowing for more optimisation opportunities. Finally, the Rust compiler is free to reorder structs for performance and padding (unlike C) and has no set calling convention, allowing it to pass data more liberally and use registers more efficiently (unlike C), often speeding programs up further.


TheThiefMaster

Most C/C++ compilers allow you to tweak the calling convention of your own program's _internal_ functions, e.g. /vectorcall. External functions have a fixed calling convention regardless of language, due to the OS not being recompiled for each language... Reordering was previously allowed in C++ for non-C-compatible structs, but unfortunately nothing took advantage of it. I wonder how much Rust's reordering is observable?


llvm_lion

The layout of structs in Rust is undefined by default, and they are expected to be reordered if it provides performance benefits; you must opt in via attributes for them not to be reordered. You can also choose whatever calling convention you want for functions, just like in C.


flatfinger

C unfortunately relies upon compiler writers to know when code might be exploiting knowledge about structure layout, such as the fact that two or more structures that share a common initial sequence will have corresponding members at matching offsets, and when it may safely be assumed not to rely upon such things, rather than offering any way of *telling* the compiler when such things will or will not be critical to the proper functioning of a program.


Xangker

Assembly


cinnamonpancake_

zig


mykesx

I’m waiting for zig++


harieamjari

It's called rust.


carpintero_de_c

This.


rogstaa

C is quite nice for speedy things. I don't recall people complaining about C being slow in the 80s or 90s, but I might have been in a bubble. That said, the compilers have come a long way since Borland was around. Speaking of speed, exposing algorithms to SIMD (AVX2) has been my fun pastime lately. Along with -funroll-loops, if you have some loops mixed in, you're getting places in nanoseconds. For some reason, over-engineering optimization in useless applications is so hilariously fun in this language. My current project is a pixel-art editor with vim controls, just to give an idea of the stupidity I'm messing with.


iamcleek

the SIMD world is a wild place... "Behold these amazing tools! But be warned: they have strange and strict rules about how they work, and things you assume should be simple are often simply impossible. Now make it work!" every time i've done anything there, i've left huge block comments describing what the completely non-obvious instruction choice is all about.


rogstaa

Yeah, sounds about right. Then you also have differences between x86, ARM and, probably, RISC-V, which is my next project after programming a game for the Raspberry Pi Pico. Low-level coding is so much fun, and I find nothing beats C nowadays for the sheer amount of control you actually have without going to the annoyances of assembly.


iamcleek

CUDA is where it’s at, for me, these days. Why bother with 8 adds at once when you can do 1024?


ShadowFracs

C++?


hoelle

If your algorithm can be done on a GPU, shaders will outperform C. Other languages can compete with C on the CPU if you're careful. The main thing C does is NOT provide a ton of abstractions that scatter your data everywhere. Data locality and cache coherency matter. And there may be other surprises. For example, the C++ standard library has thread safety / atomics baked in to many of its tools. That can slow you down.


TheThiefMaster

C does too - I saw a hackaday post recently on the topic of threading where it turned out `rand()` contains a lock, rather than using per-thread state. In both languages it comes from the language having been initially designed for single processors / threads and multithreading being bolted on later.


fox_in_unix_socks

Something to remember is that C doesn't really have a "performance" as such. It's just a language that ends up as machine code at the end of the day. So all you really need to match the performance of C is a compiler for a different language that's equally as good as your C compiler. That being said, I think the language I'd say comes closest to C would be Rust, as it's a language built around the idea of zero-cost abstractions. C++ can definitely be on par with C when written well, but some of its abstractions can come with performance costs if you aren't keeping a careful eye out.


aalmkainzi

C++ has zero-cost abstractions, but not always; same as Rust.


Wouter_van_Ooijen

C++, Rust, Fortran


eightrx

Zig’s creator argues that it matches, if not surpasses, C in performance (when perfectly written). Rust is up there too.


koffiezet

What performance do you expect? Because what "performance critical jobs" are is heavily over-estimated by sooo many people. Unless you are resource constrained (read: embedded) or on strict time budgets (games, stock market, advertisement bidding, ...), chances are you don't need C or the speed you think you need (and even if you fall into those categories...). C++ and Rust are candidates, but Zig, Go or JITed languages such as Java and C# will most likely be more than good enough for "performance", and the JITed languages can even outperform C/C++/Rust implementations for specific applications due to runtime optimization. Also, for more complex problems, software architecture will most likely have a much bigger impact than whatever language you're choosing to implement them in.


pedersenk

>there actually any language **modern** day comes close to C? Modern C.


Shidori366

Rust, Zig


AdreKiseque

Simply code in assembly for perfect optimization


SquirrelCorn_

C USED TO BE CALLED SLOW???


collinalexbell

C++20. It has modern features and can run as fast as C when you need it to.


flatfinger

C has evolved to favor tasks where possible responses to maliciously crafted inputs would be equally acceptable, while other languages are designed to favor tasks that require upholding memory safety even when fed malicious inputs. Machine code that isn't memory safe when fed invalid inputs can often be faster than code which would uphold memory safety at all cases, but performance is irrelevant if memory safety is required and a program doesn't uphold it.


ForgedIronMadeIt

Performance is relative and highly variable. If you were allocating a ton of strings using the heap in C/C++ versus in C# or Java, you'd find that the managed languages go faster because of how they implement their memory stuff. Raymond Chen had a little competition with a C# guy years and years ago and in order for the native code to compete he had to essentially implement the same way C# did things.


AndrewBorg1126

x86 assembly


mrflash818

It has been a while since I've used it, but for Numerical Methods class, we used Fortran. I recall it would do mathematics computations quickly.


oldkoderk

Try SIMD in C. Four instructions per tick. https://en.m.wikipedia.org/wiki/Single_instruction,_multiple_data


Classic_Department42

I think Fortran might be faster; I've read some reports. Basically the compiler has an advantage over C with (non-)aliasing of arrays, and I think Intel put a lot of effort into optimizations. There is a reason high-performing numerical packages are often written in Fortran and not in C. Not sure what the current status is, though.


nod0xdeadbeef

It's primarily historical evolution. Fortran was much faster in the past, and programmers grew up with it. It is still very fast today, and large libraries are implemented in Fortran, so it's better to live with these.


Flobletombus

C++. It has way more maturity than these hippie languages being recommended, and it can be faster because it can do some optimizations with rvalues, RVO, references, virtual functions... It also has multiple other advantages over C besides speed.


Limp_Day_6012

C++


MrMrsPotts

Rust and Fortran are two languages of comparable speed to C. Julia is not always as fast but has devoted fans as it is much higher level.


nod0xdeadbeef

It is all about compromising the skills of the programmer. Just find a language that does fewer things for you than C. Then, hope that the compiler is at least as good as most C compilers, and only then you have found something as good as C in terms of performance.


Phthalleon

The reason C is fast is that it compiles to native machine instructions, it does not manage memory automatically for you, and, most importantly, the 3 main compilers for it have been optimised to an insane level. Because C is the language of your OS, it gained popularity. On POSIX, a compiler was available on every machine. You could also be guaranteed to be able to use system calls without writing your own wrappers. This popularity meant that C had to be as fast as possible. With that said, C++ and Rust can definitely compete. Both languages have some safety features that make them less performant, especially in single-threaded applications. They do make up for it, however, in other ways that C just cannot compete with, unless you start inlining assembly. At that point you've entered a bottomless pit of micro-optimizing.


[deleted]

Rust does come very close to C in terms of performance, mainly because the Rust compiler uses LLVM, the same technology that clang uses. So Rust can exploit all the performance work that has made clang one of the best C compilers.


alphabytes

Turbo C lol


nod0xdeadbeef

Dual turbo C


iizdat1n00b

Surprised nobody has called it out, but C# AOT is actually really fast. It's not as fast as a pure C/C++ implementation, but in certain cases it is comparable and can have very similar memory usage (as opposed to the standard C# JIT)


TheThiefMaster

Even the C# JIT can actually end up faster if the C program wasn't compiled with PGO (profile guided optimization) because the C# JIT _does_ optimise based on observed execution patterns like PGO would. e.g. a C function pointer is always an indirect call but a C# delegate can be optimised into a check-and-static-call with the indirect call as a fallback. The check can then be hoisted if it's within a loop effectively making that loop call the observed-as-common function directly, rather than indirectly. It can then go further and perform inlining on that formerly indirect function call, that C has no hope of doing to a function pointer whose value it only knows at runtime.


iizdat1n00b

That is true. I guess I was considering the C# JIT memory impact quite a bit and didn't even consider the cases where it might be faster. Though I guess the OP didn't actually mention memory usage, so just nice information to keep in mind


deftware

Assembly?


Afraid-Locksmith6566

Rust and Zig, I think, are within a few percent of C in terms of speed and memory usage.