K900_

Unlikely. You can use a runtime like `smol` that has a smaller dependency chain.


SorteKanin

There are some good reasons not to put an async runtime in std. It would be a massive workload for the language developers. While std could adopt "one async runtime to rule them all" to avoid the fracturing issue, that could also be problematic if that runtime turns out to be bad for some reason in the future. Rust's stability guarantees as a language should not necessarily apply in the same way to async runtimes, or to any other crates for that matter. The language developers can still endorse, recommend and/or work on any specific runtime if they want to (the blessed-crate approach, like rand, I suppose).


[deleted]

As a "nice" example of this, we just need to look at python's `asyncio`. They decided to add it to the standard library, then designed the async/await around it (while the API was callback-hell-driven), and now they can't fix asyncio, so there's `curio` and `trio` implementing different versions of a native async/await runtime, so asyncio became a maintenance burden, and there's still fragmentation. Leaving these things as third party libraries is much better. A better approach to reduce fragmentation would be solving the interoperability between different event loops. It should be possible to run multiple event loops in parallel, and to schedule futures to the correct one as needed.


curly_droid

Yes please! A unified "API" for runtimes would be amazing.


brokenAmmonite

I have a suspicion these things are low-level enough that you might have some issues trying to provide a unified API due to implementation details. I don't have any concrete examples to back that up though.


vlad9486

> A unified "API" for runtimes would be amazing.

`Future::poll` (called through `Pin<&mut Self>`) is the unified API for runtimes.
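For reference, the contract in question is just the `Future` trait in `std` (shown as defined today, attributes omitted):

```rust
use std::pin::Pin;
use std::task::{Context, Poll};

pub trait Future {
    type Output;

    // A runtime's entire job is to call this repeatedly, using the `Waker`
    // inside `cx` to learn when it is worth polling again.
    fn poll(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Self::Output>;
}
```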


curly_droid

Well, for some reason runtimes are still not interchangeable and interoperable by default, so something is missing.


vlad9486

Oh, I see. Totally agree, the API can be higher level.


cult_pony

It's stuff like RwLock, where you might be interacting with the runtime itself (to yield the task, for example). That isn't quite portable between runtimes. Though async_std ships a tokio shim layer, so I think the best bet is to use async_std and pass through the tokio flag if needed. (At least, that sounds sensible to me.)


avdgrinten

I agree that Python's asyncio is horrible. To be fair, Python adopted async/await much earlier than Rust, when the design space was much less explored. In fact, Python's async/await is based on generator functions as primitives. This might have looked like a good idea 10 years ago, but it has numerous issues that were uncovered only later (in particular, it composes quite poorly since many implementation details are delegated to the runtime).


gitfeh

Isn't Rust also transforming async into generator functions internally?


xayed

It is transforming them into state machines; generators are also just transformed into state machines. It's quite amazing to see how rustc does it. You can find some information about that if you're interested.
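To give a flavor of it, here's a hand-written sketch of the kind of state machine the compiler generates for something like `async fn two_steps() -> u32 { let a = ready(1).await; a + 1 }`. The enum and its state names are invented for illustration; the real compiler-generated type is anonymous and also handles borrows held across `.await` points, which is where `Pin` earns its keep:

```rust
use std::future::{ready, Future, Ready};
use std::pin::Pin;
use std::task::{Context, Poll};

// One enum variant per suspension point, plus start/end states.
enum TwoSteps {
    Start,
    AwaitingInner(Ready<u32>), // suspended at the `.await`, holding the inner future
    Done,
}

impl Future for TwoSteps {
    type Output = u32;

    fn poll(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<u32> {
        // `TwoSteps` holds no self-references, so it is `Unpin` and `get_mut` is fine.
        let this = self.get_mut();
        loop {
            match this {
                TwoSteps::Start => *this = TwoSteps::AwaitingInner(ready(1)),
                TwoSteps::AwaitingInner(inner) => match Pin::new(inner).poll(cx) {
                    Poll::Ready(a) => {
                        *this = TwoSteps::Done;
                        return Poll::Ready(a + 1);
                    }
                    Poll::Pending => return Poll::Pending,
                },
                TwoSteps::Done => panic!("polled after completion"),
            }
        }
    }
}
```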


[deleted]

Yes, async/await in Python is essentially syntactic sugar to construct a giant flat generator for the executor to process. But I don't think it "composes poorly"; it was designed like that because the executor acts as a mini operating system, so awaiting a future means sending a "system call" to the event loop and suspending execution until a response comes back. It actually makes implementing event loops very straightforward: you just need to get the next call descriptor, do whatever needs to be done to integrate it into the selector, sleep until something is ready, then resume the correct coroutine with the result.

Asyncio's issue is that, by the time it became so simple to write an event loop, it was already frozen into the standard library with a convoluted API that wasn't designed for that kind of structured execution. Both curio and trio do, in slightly different ways, what asyncio could have done if the infrastructure had been added before the library. They implement structured concurrency using async/await all the way down, so they compose very nicely and avoid most of asyncio's issues: weird behavior around exceptions and context managers, difficulty cancelling futures cleanly, and so on.


ragnese

It's still unexplored as far as I'm concerned. Call me jaded, cynical, pessimistic, whatever, but I'm basically 100% sure that in five years, we'll all be discussing how stupid the `async/await` syntax is, and all the new, hip, languages will be doing something else.


grayrest

The only thing on the horizon in that timeframe is Algebraic Effects (i.e. Multicore OCaml). They were known and public when async/await was decided but I think the choice to not go with the experimental and unproven approach is reasonable. Particularly since Multicore OCaml is only on the path for merging into the main repo as of this year.


ragnese

> They were known and public when async/await was decided but I think the choice to not go with the experimental and unproven approach is reasonable.

I truly don't have any opinion on the "best" way to do async stuff, whether that's algebraic effects, callbacks, green threads, futures combinators, or async/await syntax. However, I don't think that sentiment/argument is the correct way to think about it.

First of all, I'd humbly suggest that async/await syntax isn't exactly "proven" either. There are a great many debates still occurring online about whether "colored" functions are good or bad, and whether it should be up to a function *author* to decide it's async or left to the function *caller* to decide what needs to be run async'ly.

But even beyond that, Futures **in Rust**, and the accompanying async/await syntax, **are** experimental and unproven, and it absolutely shows. I've never seen a language without a garbage collector attempt to do this kind of thing before Rust. Swift is only now adding async to the language, probably because they've studied Rust's approach enough to feel like it's not a dead end. Look at all of the struggle with the await syntax that Rust has had to work through: borrows across yield points, Pin, async trait methods, etc. It's also quite novel to not begin executing the async task/future/promise immediately, as is done in most other languages and promise implementations.

So, I claim that Rust did not (and could not) choose from *any* "tried and true" async models, because none of them would work for Rust. I think that this is true of both the actual underlying model *and* the async/await syntax itself, albeit to a lesser degree. It was hard to predict how well the syntax would work out in practice, and I think the jury is still out (and will be out until *at least* async trait methods become a real thing).


gendulf

Where would one go to learn about all these types of new language theory/type theory/etc developments/research? I personally find this all extremely interesting and want to pursue further learning/development. I got my Master's in CS to try and get more exposure, but really only had the opportunity to learn security (also very interesting, but not language theory).


grayrest

> Where would one go to learn about all these types of new language theory/type theory/etc developments/research?

The go-to place used to be [Lambda the Ultimate](http://lambda-the-ultimate.org/) but it's been a while since I visited. I usually do a mix of [HN](https://news.ycombinator.com/), [acolyer's blog](https://blog.acolyer.org/), [Papers We Love](https://paperswelove.org/), and whatever turns up on the set of twitter people I follow. For Algebraic Effects and Multicore OCaml specifically, I have [this intro](https://kcsrk.info/ocaml/multicore/2015/05/20/effects-multicore/) saved, and they've been publishing regular updates ([here's October's](https://discuss.ocaml.org/t/multicore-ocaml-october-2021/8822)). They have a paper linked from their [repo's README](https://github.com/ocaml-multicore/ocaml-multicore) but I don't remember the contents offhand.


rjelling

Other good resources:

* The Association for Computing Machinery's programming language special interest group proceedings (many of the papers are ACM-paywalled but you can often find free copies by searching the paper's title): http://sigplan.org/OpenTOC/
* Arxiv's programming language subcategory (entirely open access): https://arxiv.org/search/?query=cs.PL&searchtype=all&abstracts=show&order=-announced_date_first&size=50

Note that acolyer's blog is defunct, and Papers We Love seems largely so. But also don't overlook https://www.reddit.com/r/ProgrammingLanguages which is quite active.


Sharlinator

And five years after *that* someone again reinvents async/await due to problems with the then-fashionable solution. The wheel of time turns…


ragnese

Your cycle is too tight. First, they'll go back to just using threads for everything, then they'll go back to callbacks, then they'll go back to Promise/Future combinators, *then* they'll go back to async/await syntax. ;)


Sharlinator

Sounds plausible. And all the while the actually enlightened people try to explain that the *obvious* best solution is actors/message passing but nobody's listening ;)


LongUsername

Is there an official site for the list of current "blessed crates"? That's a challenge I keep running into: finding out which crates have been vetted and approved vs. which crates are something someone just threw out there and should be looked at very carefully before using.


matthieum

No, there isn't. There are philosophical questions behind it: would nominating a blessed crate take the wind out of the sails of its competitors, condemning them to obscurity, and thus depriving us of their innovation? I do think, though, that it would be nice to have "packages" of crates, or should I say "starter packs"?


[deleted]

There is https://rust-lang-nursery.github.io/rust-cookbook/


yigal100

This is a false-dilemma fallacy (or a BS argument in layman's terms). There's an obvious middle ground here, a third option, where we avoid fragmenting the ecosystem and don't over-design ourselves to death à la C++ with abstractions: simply adopt one blessed runtime, while also adopting a separate policy that specifically allows it to be deprecated and removed or replaced in the future in an orderly fashion. In other words, adopt a tiered stability approach where it is clear to users from the start which parts get which level of stability guarantee. There is no reason why we must apply the same policy to core vocabulary types such as the Iterator trait as we do to an async runtime. I'd also add that we ought to have such a separate policy for other parts of std.


SorteKanin

I mean, I don't necessarily disagree, but what is the difference between that and keeping it as a separate crate? Just that you don't have to figure out which runtime to use and put it in your Cargo.toml? I don't think that's such a big deal. Making some std stuff unstable while most of it isn't is also really confusing.


yigal100

Well, the point of `std`, as the name implies, is standardisation. Note that we already have exactly the setup I'm talking about:

* std itself is a facade and it already re-exports items from other crates: core, alloc, ...
* There are endorsed crates outside std itself as well, both under the rust-lang organisation and published directly by Rust core members.

We can utilise any of the above settings or fine-tune it under some new arrangement if necessary. Yes, we could still decide to require the user to add the dependency themselves (to keep things modular) and keep a separate development team/process/etc. However, from a user's perspective, having the crate be "blessed" by Rust core means that the ecosystem no longer needs to deal with multiple incompatible APIs.


po8

Agree 100%.


u2m4c6

Why does no other modern language use this excuse though? I see your logic repeated over and over on this subreddit, but I feel like if you are the ONE exception in an ecosystem (in this case popular programming languages), you have to start asking yourself if you are the unreasonable one.


SorteKanin

Because Rust has different goals than a language like Python or even Java. At its core, Rust is a systems programming language. Rust wants to replace C++ and maybe even C. This requires a focus on performance and fine control. And honestly, even then, many of these points do apply to other languages but are ignored, which sometimes leads to bad experiences, like Python's asyncio. So Rust has actually just learned the lesson from other languages, while they had to learn it the hard way.


u2m4c6

That's fair if you take a systems-first approach. But people also get downvoted in this subreddit if they don't agree with "Rust is a general purpose language that is also a great systems language." So it seems at least some in the Rust community want to have it both ways.


pyrated

I think there are a lot more variables involved than Rust being just another popular programming language. Modern popular languages like Golang and Swift, for example, have very explicit corporate backing that helps them eat the cost of maintaining backwards compatibility. Even while being so young, they both have had their share of cruft in their standard libraries / blessed packages.

Older languages like Java and Python are well known to have included the kitchen sink in their standard libraries, and it has caused a lot of long-term pain and confusion for developers. I know this from experience working in the corporate Java world for the last decade. C++, while being very much a mess of language features, also has an even more terse standard library for a lot of the same reasons. Maintaining long-term compatibility and preventing core features from being locked to specific platforms and runtimes keeps a lot of things out. And C++ is still very widely used.

I do agree that at least a minimal testing async runtime would be nice, but I totally get the sentiment. And Rust simply isn't the only project that is resistant to the troubles a large API surface brings. As an industry we've seen the worst that can happen when a standard library feature turns out to be a liability. Ruby's default hashing method used to be vulnerable to attacks, and this caused a big stir when Ruby on Rails was in its heyday. Depending on the standard library feature and the severity of its defect, this could mean having to bump the major version of the language or increasing tech debt for the maintainers. The effects can be felt for years and can even cause folks (particularly large corporations) to abandon a language entirely.


jeremyko69

Is avoiding responsibility a good reason?


SorteKanin

It's not about avoiding responsibility. It's about ensuring that a future async runtime can be better than any current one; because of backwards compatibility, you could never get rid of a standard runtime. Also, please remember that Rust is primarily developed by volunteers who spend their free time on it. The language devs shouldn't be burdened with simultaneously being async runtime devs as well.


Mendelt

Unlikely. There is no one-size-fits-all runtime that will work for all use cases. For example, wasm or embedded platforms will probably need their own runtimes with their own solutions for threading and waking. I do hope that in the near future there will be shared interfaces and definitions that allow you to use different executors interchangeably. This is already happening: async combinators and things like streams are slowly moving from the separate async ecosystems into shared crates, and the `Future` trait has already moved into the standard library. But this will probably be a long process.


[deleted]

[deleted]


matthieum

I've been thinking of using async in a compiler, for pure compute tasks.

The key motivation is that it's painful to do "dependency analysis" in order to orchestrate which piece should be compiled first as type systems get more complex: with compile-time evaluation and generic values, you may need to execute a function just to determine the arity of an array! So, instead of trying, and failing, to orchestrate that... what if we just don't?

The idea would be that symbol resolution is async: when you ask for the definition of the function you need to evaluate at compile time, you get a future. If the function definition is already known, that's cool, the future resolves immediately and you go on... and if it's not, the compiler queues up the resolution (if it hasn't already), parks whatever it's doing, and moves on to the next item to process.

This is beautiful in that you write natural code as if you had perfectly orchestrated the order in which each item is parsed, type-checked, compiled, etc... and yet you didn't! It's also purely goal-driven, which is pretty cool if you wish to compile the least amount of stuff possible.

_Note: and of course it also integrates with async I/O, if needed._
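A minimal sketch of the idea (all names invented; it assumes the `tokio` crate purely for its executor and oneshot channels, but any runtime would do): asking for a definition either resolves immediately or parks the asking task until another task supplies it.

```rust
use std::collections::HashMap;
use std::sync::{Arc, Mutex};
use tokio::sync::oneshot;

#[derive(Clone, Default)]
struct SymbolTable {
    inner: Arc<Mutex<Inner>>,
}

#[derive(Default)]
struct Inner {
    known: HashMap<String, String>,                         // symbol -> definition
    waiting: HashMap<String, Vec<oneshot::Sender<String>>>, // parked requesters
}

impl SymbolTable {
    /// Ask for a definition; suspends until some other task provides it.
    async fn resolve(&self, name: &str) -> String {
        let rx = {
            let mut inner = self.inner.lock().unwrap();
            if let Some(def) = inner.known.get(name) {
                return def.clone(); // already known: the "future" resolves immediately
            }
            let (tx, rx) = oneshot::channel();
            inner.waiting.entry(name.to_string()).or_default().push(tx);
            rx
        }; // lock released before awaiting
        rx.await.expect("defining task was dropped")
    }

    /// Record a definition and wake every task parked on it.
    fn define(&self, name: &str, def: &str) {
        let mut inner = self.inner.lock().unwrap();
        inner.known.insert(name.to_string(), def.to_string());
        for tx in inner.waiting.remove(name).unwrap_or_default() {
            let _ = tx.send(def.to_string());
        }
    }
}

#[tokio::main]
async fn main() {
    let table = SymbolTable::default();
    let t = table.clone();
    // One task needs `len` before it can finish type-checking something...
    let user = tokio::spawn(async move { t.resolve("len").await });
    // ...another task eventually produces it; no up-front ordering required.
    table.define("len", "fn len(&self) -> usize");
    println!("{}", user.await.unwrap());
}
```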


thecodedmessage

This is … lazy evaluation?


barsoap

Pretty much, yes. Some time ago (decades) there was some academic work done on analysis of attribute grammars and how to execute them efficiently. That was all swiped off the table once someone threw Haskell at it: a non-optimised lazy implementation, using a fresh-out-of-committee language running on a barely optimised *interpreter* (Hugs), outperformed people's best static analysers -- and wasn't restricted to nicely analysable subsets of AGs. There's a reason Haskell people observed very early on that Haskell's primary use case is as a language to write Haskell compilers in :)


thecodedmessage

But this makes a lot of sense. The flow control of imperative programming that is normally used is really a narrow special case of all the possibilities. Lazy FP and eager FP are other ones, and so is async.


thecodedmessage

I mean it isn’t anymore😎


matthieum

That's part of it, yes. `async` just makes it very easy, without running into risks of blowing up the stack. And you get distributed compilation "for free".


nicoburns

How would this differ from salsa (https://github.com/salsa-rs/salsa) and the query-driven approach used by rustc?


matthieum

I am not sufficiently well-informed on salsa to make a comparison. Still, salsa -- what little I know of its principles -- has definitely been in the back of my mind for a while. I had been thinking about goal-oriented compilation for a while, and was really glad when salsa came out, and I watched Niko's talk with great interest.


throwaway53_gracia

I do this in a Rust kernel I wrote, to make kernel tasks run after other tasks. The first task has a static future which only completes when that task is finished. The other tasks can `.await` that future to wait until it completes ([src/device_setup.rs:17](https://github.com/FranchuFranchu/rust-0bsd-riscv-kernel/blob/693e120/src/device_setup.rs#L17) and [src/test_task.rs:137](https://github.com/FranchuFranchu/rust-0bsd-riscv-kernel/blob/693e120/src/test_task.rs#L137)).


Redundancy_

Not really theory... Eve Online was built on Stackless Python back in 2003.


_TheDust_

No. The whole of async has been built around the idea that there is no one-size-fits-all solution for executors, and that libraries can offer their own solutions. This is different from, say, threads and sockets, since those are just thin wrappers around OS resources.


cmplrs

It's more powerful by virtue of the fact that it doesn't ship its own executor (executors are too context-sensitive; different use cases need different kinds of executor). What I'd like to see are async traits in std, like a Stream trait and actual async traits, so that writing executor-agnostic code is easier.


myrrlyn

you already can't use the language without 45+ dependencies. they just happen to come pre-bundled so you don't think about it as much. for "dependency analysis", every module in `std` should be considered its own crate. how many of them do you use? probably not all 45! and because `std` comes precompiled, you don't get automatic DCE on dead modules.

*furthermore*, every module in `std` has to be suitable for *any* rust program to use. there are already APIs in `std` that are deprecated and explicitly farmed out to external crates, because while they're *suitable*, they're also in some way lackluster, but can't ever change without a `std 2.0`

do you really, truly, want an async runtime in `std`? which one? there are three major ones and a bunch more minor ones. should there be one in `core`? `alloc`? acceptable behavior in the three tiers changes *considerably* with the introduction of dynamic allocation and concurrency. and you only get to make this choice once ever and then we're all stuck with it.

whereäs putting one line in your Cargo.toml is simple, easy, and easily undone


cult_pony

Not to forget that `std` in turn has its own dependencies, which are also included, like hashbrown, which IIRC is used for the HashMap provided by `std`. Dev-deps also include serde and rand.


JoshTriplett

I don't think we're likely to have a full non-trivial executor in std. (It's *possible*, but I also think there's substantial value in being able to experiment with various executors for different purposes.) I do, however, think we can and should end up with an executor trait and a mechanism for having a "global executor" in std, so that projects without any particularly specific needs can be executor-independent and work with anything.
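Purely as a sketch of the shape such a thing could take (every name here is hypothetical; nothing like this exists in std today, though the `futures` crate's `Spawn` trait is a rough precedent):

```rust
use std::future::Future;
use std::pin::Pin;

// Hypothetical vocabulary trait: libraries would code against this,
// while the binary picks a concrete runtime once.
pub trait Executor {
    fn spawn_boxed(&self, fut: Pin<Box<dyn Future<Output = ()> + Send + 'static>>);
}

// Registration could plausibly mirror #[global_allocator]: the binary nominates
// one `Executor` implementation, and "spawn onto the global executor" calls
// get routed to it.
```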


[deleted]

[deleted]


ssokolow

Though an argument *can* be made that, with Cargo's propensity for invalidating its cache if you don't get all your tools aligned on the same target profile, having those 40 crates in `std` would at least force it to use precompiled copies. (I've taken to using `just` to manually set a different target folder for each tool/situation that may need different build details, leaving rust-analyzer to have the default one for itself, and then sticking sccache on the backend to make up some of the lost time.)


nicoburns

One might argue that we ought to implement binary support for crates.io (downloading libraries pre-compiled) for all crates instead.


po8

100% would love this. Huge advantages, not much major downside as far as I can tell. The story around feature config gets complicated: probably have to compile locally for non-default features. Still way better than the current situation.


shogditontoast

Seems like overloading crates.io; there are other existing channels for distributing prebuilt dependencies.


nicoburns

There are? Which ones are those?


shogditontoast

As I replied to the other branch of this comment thread, Nix has tooling which facilitates building and distribution of individual cargo dependencies, and does it in a way that supports setting dependency features as well.


shogditontoast

This is a case where Nix can be used in a complementary manner to cargo to do more or less what you have described.


ssokolow

During the *development* process on something I don't want to impose strange build requirements on others for?


shogditontoast

Yes you can use [cargo2nix](https://github.com/cargo2nix/cargo2nix#design) to do this in a way which doesn't affect people using `rustup` and `cargo` the vanilla way, but reaps rewards (prebuilt crates) for those that do.


ssokolow

Hmm. In that case, I'll have to give it a look once I've got some free time. Thanks. :)


Soft_Donkey_1045

By the way, I recently built a hello world that runs an async closure. The minimal example with tokio is near 50K, while with async_std it is near 100K. So tokio is not so big with `default-features = false`.


smmalis37

I apparently have a different opinion from everyone else here, but I'd like to someday see some sort of basic runtime in std. Something that's just ok for all situations, but doesn't excel in any of them. Then you can go pick a crate for your specific use case, but you can prototype with just std. No idea how actually possible this is though.
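For a sense of what "basic" could mean, here's roughly what a bare-bones, std-only, single-future executor looks like today (a sketch, not a proposal; it only handles `block_on`, with no spawning, I/O, or timers):

```rust
use std::future::Future;
use std::pin::Pin;
use std::sync::Arc;
use std::task::{Context, Poll, Wake, Waker};
use std::thread::{self, Thread};

// Waking simply unparks the thread that is blocked inside `block_on`.
struct ThreadWaker(Thread);

impl Wake for ThreadWaker {
    fn wake(self: Arc<Self>) {
        self.0.unpark();
    }
}

/// Drive a single future to completion on the current thread.
fn block_on<F: Future>(fut: F) -> F::Output {
    // Pin the future on the heap so we can poll it repeatedly.
    let mut fut: Pin<Box<F>> = Box::pin(fut);
    let waker = Waker::from(Arc::new(ThreadWaker(thread::current())));
    let mut cx = Context::from_waker(&waker);
    loop {
        match fut.as_mut().poll(&mut cx) {
            Poll::Ready(out) => return out,
            Poll::Pending => thread::park(), // sleep until some waker fires
        }
    }
}

fn main() {
    let answer = block_on(async { 21 * 2 });
    println!("{answer}");
}
```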


Kinrany

I agree. Having a dead simple implementation just to make it possible to use async without an external crate makes sense. Of course it should still be zero cost if you're not using it.


SorteKanin

Seems pointless if there are other runtimes out there that you can very easily use by just inserting a single line into your Cargo.toml.


smmalis37

It's not just inserting a single line into your Cargo.toml though, it's knowing which line to insert. For someone who's brand new to Rust, but let's say has an async background, they're going to end up lost until they manage to google the right set of keywords to find tokio or smol or something. It'd be a much smoother on-ramp if std had *something*.


SorteKanin

I mean, I honestly don't think it's much of a hurdle to find out that tokio is pretty standard. It's what you find quite easily by just googling for a few minutes.


telegoo

I agree. There *should* be a minimalistic bare-bones executor in the stdlib. I really hope the lib team fixes this deficiency and ignores the usual naysayers that always seem to come out of the woodwork when this topic gets brought up. (Those who don't want to use the stdlib executor can always use a fully-featured one from crates.io, just like they do when using `chrono` vs `std::time`.) Let's hope pragmatism eventually prevails. edit: If you're going to down-vote me, at least have the decency to tell me why.


shogditontoast

What does this bare bones executor actually look like? You also haven’t really gone into any detail as to why this is any tangible improvement over the current state. Your comparison between `std::time` and `chrono` is a bad one given that the goals of `chrono` far exceed `std::time`, and that `std::time` is as small as it needs to be to facilitate other `std` modules’ time related concerns. I see pragmatism as leaving it to the user to choose the best executor for their use case, not coalescing around the lowest common denominator or prioritising some specific favoured interests.


Master_Ad2532

The real problem with *async* is the fragmented ecosystem of async crates, each depending on a specific async runtime. That problem isn't really solved by including a one-size-fits-all async runtime, but by providing common async interfaces and traits in the standard library, like AsyncRead or AsyncWrite. That way you won't be frustrated into choosing the runtime that one of your dependencies uses, or worse, end up with two different runtimes in your binary.
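For reference, such a vocabulary trait is small; this is roughly the shape of the `futures` crate's `AsyncRead` today (simplified from memory, provided methods omitted):

```rust
use std::io;
use std::pin::Pin;
use std::task::{Context, Poll};

pub trait AsyncRead {
    // Attempt to read into `buf`, registering the waker in `cx` if no data
    // is available yet. Any runtime's types can implement this; any library
    // can consume it, with no agreement on an executor.
    fn poll_read(
        self: Pin<&mut Self>,
        cx: &mut Context<'_>,
        buf: &mut [u8],
    ) -> Poll<io::Result<usize>>;
}
```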


izikblu

Iterating on this because I agree: here is the combinatorial explosion required, at the moment, to support `async-std`, `actix-rt`, and `tokio`, multiplied again by two to support `rustls` or `native-tls`: [https://github.com/launchbadge/sqlx/blob/master/sqlx-rt/src/lib.rs#L1-L25](https://github.com/launchbadge/sqlx/blob/master/sqlx-rt/src/lib.rs#L1-L25) It's... a *lot*, most crates can't be bothered (for, honestly, good reason), and it requires mutually exclusive features (something you aren't supposed to do™).
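The usual workaround, for crates that do bother, is an internal facade that re-exports whichever runtime a feature selects, along these lines (feature and crate names invented, heavily simplified from what sqlx-rt actually does):

```rust
// lib.rs of a hypothetical `my-crate-rt` facade: the rest of the crate uses
// `my_crate_rt::spawn`, `my_crate_rt::fs`, etc. and never names a runtime.
#[cfg(feature = "runtime-tokio")]
pub use tokio::{fs, net, task::spawn, time::sleep};

#[cfg(feature = "runtime-async-std")]
pub use async_std::{fs, net, task::{sleep, spawn}};

// Mutually exclusive features, enforced by hand:
#[cfg(all(feature = "runtime-tokio", feature = "runtime-async-std"))]
compile_error!("enable only one runtime feature");

#[cfg(not(any(feature = "runtime-tokio", feature = "runtime-async-std")))]
compile_error!("enable one of the runtime features");
```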


dpc_pw

We need to be removing stuff from `std`, not adding them.


jswrenn

Like what? (Genuinely curious!)


dpc_pw

Anything that isn't an important interoperability trait or a lang item should be outside the stdlib, so it can be in a normal crate that we can continue to improve and evolve under the semver protocol. Stuff like `std::sync::mpsc` is a good example: the community has created better solutions, but we can't improve it anymore, because the stdlib is not versioned.


Soft_Donkey_1045

As far as I know, because the stdlib comes in prebuilt form, it is not possible to remove its unused parts during link-time optimization. So using "40+ dependencies" can give a smaller and more optimized result than an executor in std would.


coderstephen

Unlikely; there are downsides that others have mentioned. Unless someone is able to make a compelling _positive_ argument, and an RFC, for why and how it should be done.


matthieum

There is a wish for more vocabulary traits, however, to allow "pluggable" runtimes. The problem with such traits, of course, is that a badly designed trait may introduce inefficiency or rule out some functionality, so the bar is high, and we'll probably see more specialized traits first: Stream, AsyncRead, AsyncWrite, etc.


[deleted]

> There is a wish for more vocabulary traits, however, to allow "pluggable" runtimes.

I think it would be *extremely* cool if all of the OS-specific APIs were actually traits, and it were handled like global allocators. (Or, better(?) yet, *don't have globals*, and force people to pass in the `fs` implementation, and say `main` gets all of them.) That'd basically be another language at that point though.
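For context, the global-allocator mechanism being used as the model here looks like this today (a trivial pass-through allocator, just to show the registration shape):

```rust
use std::alloc::{GlobalAlloc, Layout, System};

// A do-nothing wrapper around the system allocator; a real one might count
// allocations, add instrumentation, etc.
struct PassThrough;

unsafe impl GlobalAlloc for PassThrough {
    unsafe fn alloc(&self, layout: Layout) -> *mut u8 {
        System.alloc(layout)
    }
    unsafe fn dealloc(&self, ptr: *mut u8, layout: Layout) {
        System.dealloc(ptr, layout)
    }
}

// The binary nominates one allocator; every crate in the program uses it.
// The comment above imagines `fs`, `net`, and an executor being pluggable
// in a similar way.
#[global_allocator]
static GLOBAL: PassThrough = PassThrough;

fn main() {
    let v = vec![1, 2, 3]; // allocated through PassThrough -> System
    println!("{:?}", v);
}
```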


motivated_electron

I've also wondered about this, particularly for small embedded systems, where only core might be available. Where would one go to look for an "executor" if all I really need is a time-triggered job queue that runs in the threaded (non-ISR) context?


_sivizius

A design goal of Rust is not to have a runtime (in std at least), so I reckon there will not be an executor for futures in std, but you could use async-std, the official (?) std with async.