Bump. I pushed a batch of commits. They slightly improve performance and drastically reduce the binary size. I'm getting a ~48KiB binary on my system (~44KiB of which is actual machine code).
Narann wrote:I just wanted to bounce on the "it could be great to support multiple compilers" idea.
Maybe it's not time for that yet, but when the time comes to ship binary bundles to end users, I suggest you choose one compiler to support for builds (bugfixes, performance, etc.) and, if you want (debugging is a valid point), add compiling/project support for other compilers on the dev side.
Hmm, I agree that it would be confusing to have too many binaries. Most likely I'll do as krom said and use GCC for all "official" builds (but provide Clang, MSVC, etc. builds on the side).
Narann wrote:2) Keep the code consistent and avoid a lot of #pragma or "compiler abstraction layer" headers* that vary by compiler.
A single header contains just about all the compiler-specific wizardry that I hope to ever put in the project. There are two exceptions (though the first one is a big one!):
- RSP, RDP, VR4300 CP1, etc. backends -- these will use intrinsics (e.g., SSE) and inline assembly.
- Compiler-specific machinery for doing 128-bit multiplies and divides.
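To give an idea of the second point, here's a minimal sketch of what compiler-specific 128-bit multiply machinery can look like. This assumes only that the compiler is GCC/Clang (which provide `__int128`) or MSVC on x64 (which provides `_umul128`); the function name is illustrative, not taken from the actual source.

```c
#include <stdint.h>

#if defined(_MSC_VER)
#include <intrin.h>

/* MSVC x64: no __int128, but an intrinsic produces the full product. */
static inline uint64_t mul_64x64_128(uint64_t a, uint64_t b, uint64_t *hi) {
  return _umul128(a, b, hi);
}
#elif defined(__GNUC__)

/* GCC/Clang: let the compiler lower a 128-bit multiply itself. */
static inline uint64_t mul_64x64_128(uint64_t a, uint64_t b, uint64_t *hi) {
  unsigned __int128 p = (unsigned __int128) a * b;
  *hi = (uint64_t) (p >> 64);
  return (uint64_t) p;
}
#endif
```

Both paths compile down to a single widening multiply on x86-64; the abstraction cost is zero once inlined.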
Narann wrote:A "compiler abstraction layer" header is an inefficient practice performance-wise (IMHO) because it assumes every compiler works the same way and only some keywords change.
I'm not one to often say this, but sometimes you have to sacrifice performance for design. If this project is going to support a wide variety of operating systems and compilers, it's just too much book-keeping to go without abstraction. Without some kind of compiler/architecture abstraction layer, it'd be a nightmare to maintain accuracy across all platforms, even with the world's best CI tools.
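For the record, such a header mostly just maps a handful of attributes to macros; it doesn't pretend the compilers work identically. A hypothetical sketch (macro names and the guard are invented for illustration):

```c
#ifndef COMPILER_ABSTRACTION_H
#define COMPILER_ABSTRACTION_H

#if defined(__GNUC__)
/* GCC/Clang: attribute syntax. */
#define sim_align(decl, value) decl __attribute__((aligned(value)))
#define sim_likely(expr) __builtin_expect(!!(expr), 1)
#define sim_cold __attribute__((cold))
#elif defined(_MSC_VER)
/* MSVC: __declspec syntax; no branch hints, so degrade gracefully. */
#define sim_align(decl, value) __declspec(align(value)) decl
#define sim_likely(expr) (expr)
#define sim_cold
#endif

#endif /* COMPILER_ABSTRACTION_H */
```

Note that every macro expands to either a zero-cost hint or nothing at all, so the "abstraction tax" Narann worries about doesn't really materialize at runtime.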
Narann wrote:Maybe you have already thought about this, but the choice of the "main compiler" (the one you will focus performance-sensitive work on) is not trivial. To be honest, I would also push you to choose one version of GCC to focus on, and maybe update that version every year.
I prefer writing portable code as opposed to compiler-specific (and, even worse IMO, version-specific) hacks. Although you can squeeze out a couple extra VI/s in some cases, those same "improvements" can later become regressions due to various simple code additions and bug fixes, even when compiled with the same compiler!
Ironically, CEN64 performance has tended to drift upwards with every release of GCC. I started development on 4.6 or 4.7, and have seen improvements through 4.8 and 4.9.
I've been thinking of ways to do automated regression testing, and I have a few ideas.
Perhaps a distributed computing system with multiple "game player agents" running, each generating auto-generated compatibility database reports, would be the best way of maintaining an up-to-date compatibility listing? It's certainly not above my pay-grade.
All that would have to be worked out is some sort of metric to determine whether a given ROM gets to a certain point, etc.
Typically, how I've done it with my bigger-class architectural simulators is to find a hole in the opcode space and implement an instruction that is undefined on the architecture (or toss something in the memory-mapped I/O space, or whatever else -- purists are probably clenching their fists right now
). When the simulation encounters this instruction, it stops and control flows back to the host, who then inspects the state of the simulation for any regressions in accuracy.
This apparatus could be extended with a timer that tracks the amount of time spent on each unit test and seeks out any performance regressions as well.
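A minimal sketch of that trap-opcode idea, in C. The opcode value, struct layout, and function names are all invented for illustration; a real interpreter loop would decode and execute each word where the comment indicates.

```c
#include <stddef.h>
#include <stdint.h>

/* An encoding assumed to be a hole in the target's opcode space. */
#define TRAP_OPCODE 0xDEADBEEFu

struct sim_state {
  uint64_t pc;        /* byte address of the next instruction */
  uint64_t regs[32];
};

/* Runs until the trap instruction is hit (returns 1) or instruction
 * memory is exhausted (returns 0). On return, the host inspects the
 * state for accuracy regressions. */
static int run_until_trap(struct sim_state *s,
                          const uint32_t *imem, size_t n_words) {
  while (s->pc / 4 < n_words) {
    uint32_t iw = imem[s->pc / 4];

    if (iw == TRAP_OPCODE)
      return 1;  /* stop; hand control back to the host */

    /* ... decode and execute iw, updating s ... */
    s->pc += 4;
  }
  return 0;
}
```

The timer extension mentioned above would just be a pair of clock reads around `run_until_trap`, with the elapsed time per unit test compared against a stored baseline to flag performance regressions alongside accuracy ones.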