The Great Rational Explosion

A Dream too good to be true

A few years back, I spent a while doing mostly computational geometry. In that field, floating point errors are often of great concern, and some algorithms will simply crash or fail when they are not taken into account. Back then, the idea of doing all the required math using rationals seemed very alluring.
For the uninitiated: a good rational type based on two integers, a numerator and a denominator, lets you perform the basic math operations of addition, subtraction, multiplication and division without any loss of precision. All the math, exact: no rounding, no fuzzy comparisons, no imperfection.
Alas, I didn’t have a good rational type available at the time, so the thought remained in the realm of ideas.

A Dream come true?

Fast forward a couple of years to just two months ago. We were starting a new project and set ourselves the requirement of not introducing floating point errors. Naturally, I immediately thought of using rationals. That project is written in Java and already uses jscience, which happens to have a nice Rational type. I expected the whole thing to be a bit slower than math using built-in types. But not like this.
A part that was averaging about 2000 “count rate” rationals turned out to be extremely slow: it took about 13 seconds, which we thought was way too much. Curiously, the problem never appeared when the count rate was zero. Knowing a little about the internal workings of the Rational type, I quickly suspected the summation to be the culprit. But the code was doing a few other things too, so naturally my colleagues demanded proof that this was indeed the problem. Hence I wrote a small demo application to benchmark it.

The code that I measured was this:

Rational sum = Rational.ZERO;
for (final Rational each : list) {
    sum = sum.plus(each);
}
return sum;

Of course I needed some test data, that I generated like this:

final List<Rational> list = new ArrayList<>();
for (int i=0; i<2000; ++i) {
    list.add(Rational.valueOf(i, 100));
}
return list;

This took about 10ms. Bad, but not 13s catastrophic.

From using rational numbers in school, we remember that summing fractions with equal denominators is easy: you keep the denominator and add the two numerators. But what if the denominators differ? Then we need to find a common multiple of the two denominators before we can add. Usually we want the smallest such number, the lowest common multiple (lcm), so that the numbers don’t just explode, i.e. get larger and larger with each addition. To find it, you multiply the two denominators and divide by their greatest common divisor (gcd). Whenever I paused the debugger during the slow run, I’d see the thread in a function called gcd. The standard way to compute the gcd is the Euclidean algorithm. I’m not sure whether jscience uses it, but I suspect it does. Either way, it successively reduces the problem, via a division, to a smaller instance.
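To see where the gcd comes in, here is a minimal sketch of what a rational addition has to do. This is not the jscience implementation, which works on arbitrary-precision integers; it is just an illustration, written in C++ with 64-bit integers for brevity:

#include <cstdint>
#include <numeric> // std::gcd (C++17)

struct Rational64
{
  std::int64_t numerator;
  std::int64_t denominator;
};

Rational64 plus(const Rational64& lhs, const Rational64& rhs)
{
  // common denominator = lcm(lhs.denominator, rhs.denominator)
  //                    = lhs.denominator * rhs.denominator / gcd(...)
  const std::int64_t divisor = std::gcd(lhs.denominator, rhs.denominator);
  const std::int64_t common = lhs.denominator / divisor * rhs.denominator;
  // bring both numerators to the common denominator and add them
  const std::int64_t numerator = lhs.numerator * (common / lhs.denominator)
                               + rhs.numerator * (common / rhs.denominator);
  // reduce the result, which needs yet another gcd
  const std::int64_t reduce = std::gcd(numerator, common);
  return {numerator / reduce, common / reduce};
}

With arbitrary-precision integers, every one of these operations gets slower as the numbers grow, and the final reduction only helps if the gcd is reasonably large.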

What does this all mean?

This means that much of the complexity involved only kicks in when there is variation in the denominator. Looking at my actual data, I saw that this was the case for our problem. The numbers were actually close to one, but with the numerator and the denominator each close to about 4 million. This happened because the counts that we based this data on were “normalized” by a time value that was close, but not equal, to one. So let’s try another input sequence:

final Random randomGenerator = new Random();
final List<Rational> list = new ArrayList<>();
for (int i=0; i<2000; ++i) {
    list.add(Rational.valueOf(4000000, 4000000 + randomGenerator.nextInt(2000)));
}
return list;

That already takes 10 seconds. Wow. Here’s the rational number it produced:

109254120903878264433564933155709706920921877511604482317233070061656194768945390769550471536737742913120831312884059376595698684816890584381771311645902636531911922227481182701074501036461295189752433300106715673740302244852575277513263612449020488684088971252061165884694967767487968988211503860495485627556088553735119955331331841262603667183127013834611465309151661402296006945186248688614940332387107858489624702540601475751689026995427759330728249865783657980707865480274876949899761419799826488981269879580811975529149650707182137447737558699690603351122095384315941973308959615957598021831168458755587631569761488770358729558870601714898052898726020372402004562590549998327443416447305044140714993689168374450360740380381735073510449197906261902616039298556766062923970186690583127420957145137463217791601116949080272991046388733748874460307802997025713507022553349224136067382937556883456245998369215686584887731481039583765155986633017401835405907723279632478697808837546693688125492022071095068696181379368359483734837894825393623514379144270568002520767009235286527462317740968149844458898993122972246411437788188987855785778036141531636900777652434566723951855494457883453115889336247948158478673760815616990241489311896450663798382493450715691385820324853933764178499618024177521535990790988116746793204523695069138890631631964120256288800499391119877499804050891095065138982056939122391503578183839756195926893190252279776091043395641041113655598560233473269079673786146026909525060490698080177732708608850252794019437117786776510959177275185480677485195793917097947431386759211164614042655913350917596863890021125804457157137688659423266467716244615183715087183463012867752792659407398207809224116181156659152060281807617587011982835754025989633565324793528106045783928447548560570893498115694366558140122376376155444176761668902475267427651459090883543495934318295080737355086627661713463658549201168947385535937158056983268018406474720045710222010124553688831906005875020309474017497339018814250193595163409938493149975229318360685742832131816776676157703924541578998947899637883147797073930826023210253047303552045126877106956570165875622582899687093425073037603591073148054791503377902443851896113788050942826501205531385753805681502145109727342418031769089176976629147141880308799947348537727973224202414209117358749039261415984169926908599299436318260947234563173125892651043348709075793916961785563542994283663948192800114102878911135911766127950092268264124712387833342391489610824425658042924735010124013789407180845898594433509052602823429903503629819016370626793819128614297565443967015740991992223999377528261063127082117917735621699407456868378533425471828134380868565659808155436267402779136783651428301175758479664041490388924761118353465669331601193859927916775873590632772029902206290043096708658677742062528302008972073689664397301360120247287177012047931824805136202755496650942002025655927420307721027047517368508976653532975364947390593255826612123153553067874277526706133249511210978336837953115143929223472683740974512681962573080056299033728714718095910878497165331324403014321558677809385353279256453408326373727021717771238163974483997037801053969412266554240251974723840992180814689168642564722388082370051211321643633858776922342306780111843519218144535600338794917353514029972668825443041069970659873761033623954377374752171815513365699750317216147904999458722092617699511172233441868399699228933943192874623840288598220579553891249514672034325717378652017803444236424671872086368811355736368150838916261385643
37634176587161231028307776960866522346008589607259041199676560090157817882260300414906572885890188984036234226505815367029839231023461597364977306399898603903392434756572392816540125771578189640871020070756539777101197151773304409519870643142190955018579914630314940373832858007535828153361236115553577694543503842444481599944319287815162136101362705211937257383677282014480487759786222801447548899760241829116959865698836386442016721709983097509675552750221989521551169512674725876581185837611167980363615880958917421251873901289922888492447507837290336628975165062036681599909052030653421736716061426079882106810502703095803882805916960831442634085856041781093664688754713907512226706324967656091109936101526173370212867073380662492009726657437921033740063367290862521594119329592938626114166263957511012256023777676569002181500977475083845756500926631153405264250959378833667206532373995888322137324027620266863005721216133252342921697663864807284554205674829658250755046340838031118227643145562001361542532622713886266813492926885236832665609571019479812713355021295737820773552735161701716018010606040731647943600206193923458996150345093898644748170519757957470535978378479854546255651200511536560142431948781377187548218601919108870420102025378751015728281345799655926856602543107729659372984539588835599345223921737022220676709028150797109091782506736145801340069563865839397272145141831011878720095142353543406658905222847479419799336972983678887227301846161770296173667855239714987183774931791306562351516152727523242208973614372214119191610954209164193665209038399568256186789865817560942946289864468805486432823029457193832616134050058472575/546407136768902114592037678956409748407505703692915432536129281112152585259810132566397589789316303460670363723262546505685113376314819253158610596310142815564756749054564742750223315965231089800112589267582909318757753336689551270159073914317431649847507679117106548013654672500872064375294823744301924216166907766360914443430612822150341453139409682391521597962395333793049360518660157176949514489463698699757588013111723720851085781861321959339308029898641427794401218624430193029433321381580531675467894059717769610835578207285339253382210510100762125206715954939114894825122555174538513458606533499455833633177229854245406587462324728367236360943536005129442816046467341314801178329735830318238973182262955037661854447519801086932510582567533115433231182906432024077219087319744542280612872402972336464237646908809053586774603170841505408604236290083583007143906672924880308086497959143180773544469405998235567433145251043622269130247369350252215173154611975821604303991879512287447477911784125052416833959704823134044139493196342919510846836471120667938812954377758711573431281200499995180528855251634575460972433654128312692563405461562160738740797744829965830519124594984478579518819232958788182788570867214285002233611018897960692398624727082454598899271299336408890798114032603617106608399589965548202587878240121318558908553328189809098504013607938897081280490074749660554216832853957189950843494360248557022590590764412028231402746219503583042640666432722805885652180055695228179194349373427648888084304661236478497532864438856912372534724707153206562524006261847648317742938591262613180687042172593201617324991294291499285373575641532367421727906867914552874017776210344774163828075351880576982117720280161830294690476188998110123914111342433026255658592628175778804992833911970027913727814358615013453697054932784743382770864263028282130557628523851096559931156001827008565397515432916095
1814845203648945184348538132528313719965960517692615234724933839415863834082163365616979319651204331888160659534780756175312928937580725581177783203569550625979825708835192412739747467314045215372710237518236109496682610777837094944201522879675742349874447002866525687741559327659019423524782141050034563799707395461801978917300210616390761873065433809737256362778865972956861423012424858347791919074000867019988914246185988825680768363450694954357708832975385526022445613099402197649574980990392753660371969220370462447697883319180076033708570759874351259644698980354866068784429511482751711329624930863629304545443308117342612677534958727764219861540475648492451312505519760969766401341016292089564143516344584197883268917604454917829927706134205489288603918892893943815239131020894621190795863599245502858307692284626604530452680973144088318876770254624083735638792267234407991769552946582638360068875509975769002847311018204789400549401560376453753063628076240879327047662612561558831502017122298425734701863181656884664851026481979959368508273072349808416302081774935491159815938813047108294310775621375475681754627108394755080076400344634873409823122370888409229666662703837096634538759103222727447751640114800586463155863216558367921497862783396136480309101772689723377535772396361300263399246855706948606216922101060308941294672554277063185792769730719177469014803853068307966008447841184403910224016078072279882935920347299431580783664128991812654780457701135185591782941754395263461459180940761928488860853515685525983743168631933069310771834569879795160576216776499263300025539821575767674288253419696114910586812284230709422230558006531335540918025940397918186306383352682903150594714754889631368556569576312855831657674738769087018659290558967429709446878600756119596020561987088170551193295467295484037656684414413021143993266548616775075905905925334335921152629477697210074629622902141063078657746149232034056642352265941452947845339051610301151130368316362022636711499114826671695540416257477345744126416363795225938900727969092692270788643742709421442552812735921534532156766228688341300473140115855278302077136595142902548557943030527129378779996403263450119984758775332533036452077654603781557033976641416371186050927154435643492983848051087282942559503555223474754690693742997370592039106889187781827888290578576160100274561900208861136548215743616634615083974725330617020874519606610812580339985836584711397102753448160446984238925180319975660479580098047098930179253806342651446549086560273369490291496196263969077175524002077318170194252067818804991902015489300563029607689935067689448990111925784018672159126316019672778424331051001652240822592173809393561688768265941132515679297537471384753992855568256114701369550476003237008840508060572457453896791033992685192069703595482787771374051189250065726225137532026227150630791972777126392521002221370419505711150022179216266239656298115575018443700688208844387102582445812545858014862427316785193110884751319467750425511273629581416588296942433219322216784262821066075700266485645060935996743388485096744598169509920624971167075214788838073469621687619355816573640360338756385397428237445511332615921133308459043700086925442114337299109227541364012352140312577797531234832237279430947774637533319353713938562360646441591033255036

I kid you not, that’s over 10000 digits! In the editor I’m writing this in, that’s roughly 3 pages. No wonder it took that long. Let’s use even more variation in the numbers:

final Random randomGenerator = new Random();
final List<Rational> list = new ArrayList<>();
for (int i=0; i<2000; ++i) {
    list.add(Rational.valueOf(4000000 + randomGenerator.nextInt(5000),
            4000000 + randomGenerator.nextInt(20000)));
}
return list;

That already takes 16 seconds, and the result has about 14000 digits. Oh boy. The maximum number of values I expected to average was about 4000, so let’s scale that up:

final Random randomGenerator = new Random();
final List<Rational> list = new ArrayList<>();
for (int i=0; i<4000; ++i) {
    list.add(Rational.valueOf(4000000 + randomGenerator.nextInt(5000),
            4000000 + randomGenerator.nextInt(20000)));
}
return list;

That took 77 seconds! More than 4 times as long as for half the amount of data. The resulting number has over 26000 digits. Obviously, this scales much worse than linearly.

An Explanation

By now it was pretty clear what was happening: the ever so slightly not-1 values were causing an “explosion” in the denominator after all. When two denominators are coprime, i.e. their greatest common divisor is 1, the lengths of the denominators simply add up. For example, 4000001 and 4000003 are coprime, so adding 1/4000001 and 1/4000003 already yields a denominator with 14 digits. The effect also happens when the gcd is very small, such as 2 or 3, which is quite likely for huge numbers drawn from a sufficiently large range. So when things go bad for your input data, the length of the denominator keeps growing roughly linearly with the number of input values, making each successive addition slower and slower. Since each addition has to run the gcd on ever longer numbers, the whole summation ends up scaling quadratically. Your rationals just exploded.

Conclusion

After this, it became apparent that using rationals was not such a great idea after all. You should be very careful when doing long series of additions with them. Ironically, we were throwing away all that precision anyway before presenting the number to a user. There is no way for anyone to grok a 26000 digit number, especially if the result is basically 4000.xx. I learned my lesson and buried the dream of perfect arithmetic. I’m now using fixed point arithmetic instead.
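For the curious, here is a minimal sketch of the kind of fixed point arithmetic I mean, again in C++ for brevity. The three decimal places are an assumption for illustration, not what our project actually uses:

#include <cstdint>

// Fixed point value with three decimal places, stored as thousandths.
struct Fixed3
{
  std::int64_t thousandths;
};

// Additions stay exact and O(1), no matter how many values you sum.
Fixed3 plus(Fixed3 lhs, Fixed3 rhs)
{
  return {lhs.thousandths + rhs.thousandths};
}

Fixed3 fromDouble(double value)
{
  // round to the nearest thousandth
  return {static_cast<std::int64_t>(value * 1000.0 + (value < 0 ? -0.5 : 0.5))};
}

The precision is capped, but so is the size of the numbers, which is exactly the trade-off we wanted.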

Integration Tests with CherryPy and requests

CherryPy is a great way to write simple HTTP backends, but there is a part of it that I do not like very much. While there is a documented way of setting up integration tests, it did not work well for me for a couple of reasons. Mostly, I found it hard to integrate with the rest of the test suite, which was using unittest and not py.test. Failing tests would apparently “hang” when launched from the PyCharm test explorer. It turned out the tests were getting stuck in interactive mode on failing assertions, a behaviour that can be turned off via an environment variable. Also, the way requests are made in that approach looked kind of cumbersome. So I figured out how to do the tests with the fantastic requests library instead, which also allowed me to keep using unittest and have the tests run beautifully from within my test explorer.

The key is to start the CherryPy server for the tests in the background and gracefully shut it down once a test is finished. This can be done quite beautifully with the contextmanager decorator:

from contextlib import contextmanager

@contextmanager
def run_server():
    cherrypy.engine.start()
    cherrypy.engine.wait(cherrypy.engine.states.STARTED)
    yield
    cherrypy.engine.exit()
    cherrypy.engine.block()

This allows us to conveniently wrap the code that makes requests to the server. The first part initiates the CherryPy start-up and then waits until that has completed. The yield is where the requests will happen later. After that, we initiate a shut-down and block until it has completed.

Similar to the “official way”, let’s suppose we want to test a simple “echo” application that simply feeds a request back to the user:

class Echo(object):
    @cherrypy.expose
    def echo(self, message):
        return message

Now we can write a test with whatever framework we want to use:

class TestEcho(unittest.TestCase):
    def test_echo(self):
        cherrypy.tree.mount(Echo())
        with run_server():
            url = "http://127.0.0.1:8080/echo"
            params = {'message': 'secret'}
            r = requests.get(url, params=params)
            self.assertEqual(r.status_code, 200)
            self.assertEqual(r.text, "secret")

Now that feels a lot nicer than the official test API!

Let’s talk about C++

It’s almost time for the holidays again. A time to reminisce. A time for family. A time for community.

We software developers can seem like an odd folk. We spend endless hours tinkering with our machines and gadgets. To outsiders, it appears to be a lonely profession. And it can be. Sometimes we have to get in The Zone to solve our tasks and problems. Other times we need to have sword fights. But sometimes we just have to meet other developers.

I’m not talking about your 10 o’clock daily standup or agile flavor-of-the-month meeting with other departments. Those are great. But sometimes it just has to be us programmers, as tech people.

Let’s talk about cool and tricky algorithms. Let’s talk about the latest and greatest language features that make all code so much cooler. Let’s talk about which editor is the greatest. All the technical details.
It’s not necessarily the most important and essential aspect of our craft, no. But it’s kind of like the seasoning to a well-cooked meal. It’s flavor and character. It’s fun.

I’m the C++ guy. It’s not the only one of my specialties, but it’s kind of what I’ve gotten a bit of a reputation for. And I like to talk about it. So far, that was either limited to colleagues and friends or happened “out there” on IRC, stackoverflow or other online communities. But I want to extend that and be a more active member of the local community.
David Farago had the great idea to create a platform for this in Karlsruhe: The C++ User Group Karlsruhe. He asked me to kindly extend an invitation. The kick-off is next month, right at the start of the new year, on the 11th of January, with one meeting scheduled every month. I think this is a perfect time to do this. C++ is in a great place right now. The language is evolving in a very positive way and the ecosystem is looking better and better.
So if you’re in any way interested in meeting other local C++ people, please join us. I’m very much looking forward to meeting you guys!

Why I’m not using C++ unnamed namespaces anymore

Well okay, actually I’m still using them, but I thought the absolute statement would make for a better headline. I do not use them nearly as much as I used to, though. Almost exactly a year ago, I even described them as an integral part of my unit design. Nowadays, most units I write do not have an unnamed namespace at all.

What’s so great about unnamed namespaces?

Back when I still used them, my code would usually evolve gradually through a few different “stages of visibility”. The first of these stages was the unnamed-namespace. Later stages would either be a free-function or a private/public member-function.

Let’s say I identify a bit of code that I could reuse. I refactor it into a separate function. Since that bit of code is only used in that compile unit, it makes sense to put this function into an unnamed namespace that is only visible in the implementation of that unit.

Okay great, now we have reusability within this one compile unit, and we didn’t even have to recompile any of the unit’s clients. Also, we can just “hack away” on this code. It’s very local and exists solely to serve our implementation needs. We can cobble it together without worrying that anyone else might ever have to use it.
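Here is a minimal sketch of what that first stage typically looks like. The names are invented for illustration:

// settings.cpp
#include <cctype>
#include <string>

namespace
{
  // Helper that is only visible within this compile unit
  bool is_valid_identifier(std::string const& name)
  {
    return !name.empty() &&
      (std::isalpha(static_cast<unsigned char>(name.front())) || name.front() == '_');
  }
}

bool register_setting(std::string const& name)
{
  // reused by this and other functions in this file, but invisible to clients
  return is_valid_identifier(name);
}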

This all feels pretty great at first. You are writing smaller functions and classes after all.

Whole class hierarchies are defined this way. Invisible to all but yourself. Protected and sheltered from the ugly world of external clients.

What’s so bad about unnamed namespaces?

However, there are two sides to this coin. Over time, one of two things usually happens:

1. The code is never needed again outside of the unit. Forgotten by all but the compiler, it exists happily in its seclusion.
2. The code is needed elsewhere.

Guess which one happens more often. The code is needed elsewhere. After all, reusability is usually the reason we refactored it into a function in the first place. When this is the case, one of these scenarios usually happens:

1. People forgot about it, and solve the problem again.
2. People never learned about it, and solve the problem again.
3. People know about it, and copy-and-paste the code to solve their problem.
4. People know about it and make the function more widely available to call it directly.

Except for the last, that’s a pretty grim outlook. The first two cases are usually the result of bad discoverability. If you haven’t worked with that code extensively, it is pretty certain that you do not even know that it exists.

The third is often a consequence of the fact that this function was not initially written for reuse. This can mean that it cannot be called from the outside simply because it cannot be accessed, but often there is also some small dependency on the exact place where it is defined. People came to this function because they want to solve another problem, not to figure out how to make this function visible to them. Call it laziness or pragmatism, but they now have a case for just copying it. It happens, and it shouldn’t be incentivised.

A Bug? In my code?

Now imagine you don’t care much about such noble long term code quality concerns as code duplication. After all, deduplication just increases coupling, right?

But you do care about satisfied customers, possibly because your job depends on it. One of your customers provides you with a crash dump and the stacktrace clearly points to your hidden and protected function. Since you’re a good developer, you decide to reproduce the crash in a unit test.

Only that does not work. The function is not accessible to your test. You first need to refactor the code to actually make it testable. That’s a terrible situation to be in.

What to do instead.

There are really only two choices: either make it a public function of your unit immediately, or move it to another unit.

For functional units, it’s usually not a problem to just make them public, at least as long as the function does not access any global data.
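A minimal sketch of that promotion, continuing the invented example from above: the helper moves out of the unnamed namespace, gets declared in the unit’s header, and becomes part of its public interface.

// settings.h
#pragma once
#include <string>

// now discoverable, reusable and testable from outside the unit
bool is_valid_identifier(std::string const& name);

// settings.cpp
#include "settings.h"
#include <cctype>

bool is_valid_identifier(std::string const& name)
{
  return !name.empty() &&
    (std::isalpha(static_cast<unsigned char>(name.front())) || name.front() == '_');
}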

For class units, there is a decision to make, but it is simple: will using it preserve all class invariants? If so, you can move it or make it a public function. But if not, you absolutely should move it to another unit. Often, this actually helps with deciding what to create a new class for!

Note that private and protected functions suffer many of the same drawbacks as functions in unnamed-namespaces. Sometimes, either of these options is a valid shortcut. But if you can, please, avoid them.

Remote development with PyCharm

PyCharm is a fantastic tool for python development. One cool feature that I quite like is its support for remote development. We have quite a few projects that need to interact with special hardware, and that hardware is often not attached to the computer we’re developing on.
In order to test your programs, you still need to run them on that computer though, and doing this without tool support can be especially painful. You need to use a tool like scp or rsync to transmit your code to the target machine and then execute it using ssh. This all results in painfully long and error-prone iterations.
Fortunately, PyCharm has tool support for this in its professional edition. After some setup, it allows you to develop just as you would on a local machine. Here’s a small guide on how to set it up with an Ubuntu vagrant virtual machine, connecting over ssh. It works just as nicely on remote computers.

1. Create a new deployment configuration

In Tools->Deployment->Configurations, click the small + in the top-left corner. Pick a name and choose the SFTP type.
[Image: add_server]

In the “Connection” tab of the newly created configuration, make sure to uncheck “Visible only for this project”. Then, set up your host and login information. The root path is usually a central location you have access to, like your home folder. You can use the “Autodetect” button to set this up.

[Image: connection]
For my VM, the settings look like this.

On the “Mappings” Tab, set the deployment path for your project. This would be the specific folder of your project within the root you set on the previous page. Clicking the associated “…” button here helps, and even lets you create the target folder on the remote machine if it does not exist yet.

2. Activate the upload

Now check “Tools->Deployment->Automatic Upload”. This will do an upload when you change a file, so you still need to do the initial upload manually via “Tools->Deployment->Upload to “.

3. Create a project interpreter

Now the files are synced up, but the runtime environment is not on the remote machine. Go to the “Project Interpreter” page in File->Settings and click the little gear in the top-right corner. Select “Add Remote”.

[Image: remote_interpreter]
It should have the Deployment configuration you just created already selected. Once you click OK, you’re good to go! You can run and debug your code just like on a local machine.

Have fun developing python applications remotely.

C++ Coroutines on Windows with the Fiber API

Last week, I had the chance to try out coroutines as a way to cooperatively interleave long tasks with event-processing. Unlike threads, where interaction between threads can happen at any time, coroutines need to yield control explicitly, which arguably makes synchronisation a little simpler. Especially in (legacy) systems that are not designed for concurrency. Of course, since coroutines do not run at the same time, you do not get the perks of concurrency either.

If you don’t know coroutines, think of them as functions that can be paused and resumed.

Unlike many other languages, C++ does not have built-in support for coroutines just yet. There are, however, several alternatives. On Windows, you can use the Fiber API to implement coroutines easily.

Here’s some example code of how that works:

auto coroutine=make_shared<FiberCoroutine>();
coroutine->setup([](Coroutine::Yield yield)
{
  for (int i=0; i<3; ++i)
  {
    cout << "Coroutine " 
         << i << std::endl;
    yield();
  }
});

int stepCount = 0;
while (coroutine->step())
{
  cout << "Main " 
       << stepCount++ << std::endl;
}

Somewhat surprisingly, at least if you have never seen coroutines before, this will produce the two outputs alternately:

Coroutine 0
Main 0
Coroutine 1
Main 1
Coroutine 2
Main 2

Interface

Since fibers are not the only way to implement coroutines, and since we want to keep our client code nicely insulated from the Windows API, there’s a pure-virtual base class as an interface:

class Coroutine
{
public:
  using Yield = std::function<void()>;
  using Run = std::function<void(Yield)>;

  virtual ~Coroutine() = default;
  virtual void setup(Run f) = 0;
  virtual bool step() = 0;
};

Typically, creating a Coroutine object allocates all the resources it needs, while setup “primes” it with an inner “Run” function that can use an implementation-specific “Yield” function to pass control back to the caller, i.e. whoever calls step.

Implementation

The implementation using fibers is fairly straightforward:

class FiberCoroutine
  : public Coroutine
{
public:
  FiberCoroutine()
  : mCurrent(nullptr), mRunning(false)
  {
  }

  ~FiberCoroutine()
  {
    if (mCurrent)
      DeleteFiber(mCurrent);
  }

  void setup(Run f) override
  {
    if (!mMain)
    {
      mMain = ConvertThreadToFiber(NULL);
    }
    mRunning = true;
    mFunction = std::move(f);

    if (!mCurrent)
    {
      mCurrent = CreateFiber(0,
        &FiberCoroutine::proc, this);
    }
  }

  bool step() override
  {
    SwitchToFiber(mCurrent);
    return mRunning;
  }

  void yield()
  {
    SwitchToFiber(mMain);
  }

private:
  void run()
  {
    while (true)
    {
      mFunction([this]
                { yield(); });
      mRunning = false;
      yield();
    }
  }

  static VOID WINAPI proc(LPVOID data)
  {
    reinterpret_cast<FiberCoroutine*>(data)->run();
  }

  static LPVOID mMain;
  LPVOID mCurrent;
  bool mRunning;
  Run mFunction;
};

LPVOID FiberCoroutine::mMain = nullptr;

The idea here is that the caller and the callee are both fibers: lightweight threads without concurrency. Running the coroutine switches to the callee’s fiber, i.e. the one executing the run function. Yielding switches back to the caller. Note that it is currently assumed that all callers are on the same thread, since each thread that participates in the switching needs to be converted to a fiber initially, even the caller. The current version only keeps the fiber for the initial thread in a single static variable. However, it should be possible to support multiple calling threads by replacing that single static fiber pointer with a map from each thread to its associated fiber.
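Here is a minimal sketch of that idea. Instead of an explicit map I am using a thread_local pointer, which is my own variation and not something the code above does:

#include <windows.h>

// Each calling thread gets its own "main" fiber, converted lazily on first use.
static thread_local LPVOID tMainFiber = nullptr;

LPVOID mainFiberForThisThread()
{
  if (!tMainFiber)
    tMainFiber = ConvertThreadToFiber(NULL);
  return tMainFiber;
}

setup() would then use mainFiberForThisThread() instead of the static mMain, and yield() would switch back to it, since the coroutine’s fiber always runs on whichever thread last called step().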

Note that you cannot return from the fiberproc – that will just terminate the whole thread! Instead, just yield back to the caller and either re-use or destroy the fiber.

Assessment

Fiber-based coroutines are a nice and efficient way to model non-linear control-flow explicitly, but they do not come without downsides. For example, while this example worked flawlessly when compiled with Visual Studio, the Cygwin build just terminates without even an error. If you’re used to working with the Visual Studio debugger, it may surprise you that the caller gets hidden completely while you’re in the run function. The run function’s stack completely replaces the caller’s stack until you call yield(). This means that you cannot find out who called step(). On the other hand, if you’re actually doing a lot of processing in the run function, this is quite nice for profiling, as the “processing” call tree seemingly gets its own root in the call-tree.

I just wish the Visual Studio debugger had a way to view the states of the different fibers like it has for threads.

Alternatives

  • On Linux, you can use the ucontext API.
  • Visual Studio 2015 also has another, newer, implementation.
  • Coroutines can be implemented using threads and condition-variables.
  • There’s also Boost.Coroutine, if you need an independent implementation of the concept. From what I gather, they only use Fibers optionally, and otherwise do the required “trickery” themselves. Maybe this even keeps the caller-stack visible – it is certainly worth exploring.

Generating a spherified cube in C++

In my last post, I showed how to generate an icosphere, a subdivided icosahedron, without any fancy data-structures like the half-edge data-structure. Someone in the reddit discussion on my post mentioned that a spherified cube is also nice, especially since it naturally lends itself to a relatively nice UV-map.

The old algorithm

The exact same algorithm from my last post can easily be adapted to generate a spherified cube, just by starting on different data.

[Image: cube]
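For reference, here is a sketch of what that starting data could look like. It assumes the same kind of vertex and triangle list aggregates as in the icosphere post (adjust the types to whatever your ColorVertexList actually stores), and the winding is chosen counter-clockwise when viewed from outside, so double-check it against your own conventions:

namespace cube
{
  static const VertexList vertices =
  {
    {-1.f, -1.f, -1.f}, { 1.f, -1.f, -1.f}, { 1.f,  1.f, -1.f}, {-1.f,  1.f, -1.f},
    {-1.f, -1.f,  1.f}, { 1.f, -1.f,  1.f}, { 1.f,  1.f,  1.f}, {-1.f,  1.f,  1.f},
  };

  static const TriangleList triangles =
  {
    {0, 3, 2}, {0, 2, 1}, // -z face
    {4, 5, 6}, {4, 6, 7}, // +z face
    {0, 1, 5}, {0, 5, 4}, // -y face
    {3, 7, 6}, {3, 6, 2}, // +y face
    {0, 4, 7}, {0, 7, 3}, // -x face
    {1, 2, 6}, {1, 6, 5}, // +x face
  };
}

Depending on how your subdivision code handles it, you may also want to normalize these corner positions onto the unit sphere, just like the subdivided vertices.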

After 3 steps of subdivision with the old algorithm, that cube will be transformed into this:

[Image: split4]

Slightly adapted

If you look closely, you will see that the triangles in this mesh are a bit uneven. The vertical lines on the yellow side seem to curve around a bit. This is because, unlike in the icosahedron, the triangles in the initial box mesh are far from equilateral. The four-way split does not work very well with this.

One way to improve the situation is to use an adaptive two-way split instead:
[Image: split2]

Instead of splitting all three edges, we’ll only split one. The adaptive part here is that the edge we’ll split is always the longest that appears in the triangle, therefore avoiding very long edges.

Here’s the code for that. The only tricky part is the modulo-counting to get the indices right. The vertex_for_edge function does the same thing as last time: providing a vertex for subdivision while keeping the mesh connected in its index structure.

TriangleList
subdivide_2(ColorVertexList& vertices,
  TriangleList triangles)
{
  Lookup lookup;
  TriangleList result;

  for (auto&& each:triangles)
  {
    auto edge=longest_edge(vertices, each);
    Index mid=vertex_for_edge(lookup, vertices,
      each.vertex[edge], each.vertex[(edge+1)%3]);

    result.push_back({each.vertex[edge],
      mid, each.vertex[(edge+2)%3]});

    result.push_back({each.vertex[(edge+2)%3],
      mid, each.vertex[(edge+1)%3]});
  }

  return result;
}
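
The longest_edge helper used above is not part of the listing. A minimal sketch of it could look like this, assuming each vertex stores its position in a member called position with x, y and z components (these names are assumptions, adjust them to your actual vertex type):

int
longest_edge(ColorVertexList& vertices, Triangle const& triangle)
{
  float best=0.f;
  int result=0;
  for (int i=0; i<3; ++i)
  {
    auto const& from=vertices[triangle.vertex[i]].position;
    auto const& to=vertices[triangle.vertex[(i+1)%3]].position;
    // comparing squared lengths is enough to find the longest edge
    float const dx=to.x-from.x;
    float const dy=to.y-from.y;
    float const dz=to.z-from.z;
    float const length2=dx*dx+dy*dy+dz*dz;
    if (length2>best)
    {
      best=length2;
      result=i;
    }
  }
  return result;
}

It returns the index of the edge’s first vertex, which is exactly what the modulo-counting in subdivide_2 expects.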
    

Now the result looks a lot more even:
[Image: split2_sphere]

Note that this algorithm only doubles the triangle count per iteration, so you might want to execute it twice as often as the four-way split.

Alternatives

Instead of using this generic triangle-based subdivision, it is also possible to generate the six sides as subdivided patches, as suggested in this article. This approach works naturally if you want to have seams between your six sides. However, it is more specialized towards this particular geometry and will require extra “stitching” if you don’t want seams.

Code

The code for both the icosphere and the spherified cube is now on github: github.com/softwareschneiderei/meshing-samples.