"This series kill the old i386 and x86_64 directories. The relevant files are moved and adapted and Kconfig.debug was consolidated (thanks to Randy)," Sam Ravnborg said, describing a set of 6 patches to finish the migration of physical files into the new x86 architecture directory. He described it as "a nice patch series that deletes more lines than it adds," going on to explain:
"I had to modify both the top-level Makefile and the kconfig Makefile to accomplish this. It was done in such a way that it is trivial for other archs to use the same mechanism should they have the need.
"To solve the defconfig issue (i386 and x86_64 cannot share one) the arch/x86/configs/ directory were introduced. This has been used by other archs for some time now but x86 had not had the need until now."
"Can we please finish up this merge a little more before we freeze 2.6.24? The way we currently have leftovers of arch/i386/ and arch/x86_64/ is quite a nightmare and not how the other architectures were merged," Christoph Hellwig asked, leading to an insightful reply by Ingo Molnar. Ingo began by noting, "to answer that question one should first be aware of the fundamental code quality problems that the unified x86 architecture has inherited from the split i386 and x86_64 architectures." He then utilized the
checkpatch script to generate a table of "coding style errors per one thousand lines of source code". In his table,
arch/i386/ rated 77.3 errors per thousand lines of source, with
arch/x86_64/ rating 96.0. The new unified
arch/x86/ rated a lower but still very high 74.1. He summarized, "it is plainly obvious that the x86_64 and i386 architectures were in a dreadful state of code quality before the unification. Their code quality was almost an order of magnitude worse than that of the core kernel (!) - and their code quality was significantly worse than that of a couple of other, comparable architectures." Ingo continued:
"So to answer your question: full unification is no easy task and it is not automatic at all. The x86_64 tree has diverged from the i386 tree in the past 5 years due to their illogical, forced separation and a resulting bitrot. The two architectures have grown different sets of cleanliness problems and different sets of functions with arbitrary differences that often cover the same functionality. It's all compounded by the fact that the 64-bit code is in worse shape than the 32-bit - so it's not like we could just pick the 64-bit code and use that as the unified code. The 32-bit code is also used about 8-10 times more frequently than the 64-bit code. So there is no easy 'just unify it' path."
Sam Ravnborg took a look at the x86 unification patches and commented, "from the mails and discussions I expected it to be obvious what was i386 only, what was shared and what was x86_64 only." He listed 16 files in
x86/pci and noted, "in the filename there is NOTHING for most of the non-shared code that tells that this file is used by only i386 or x86_64." Andi Kleen concurred, "exactly my point from KS. The big mash-up will not really make much difference in terms of Makefile clarity or whatever Thomas' point was. Apparently he wanted to eliminate a few lines of code from the Makefile and merge the header files completely?"
Linus Torvalds disagreed saying, "the problem right now is the *reverse* - even though they are in different subdirectories (and thus *look* like they are all separate), they aren't. So the merged end result is much better: at a first approximation everything is shared (largely true), and the ones that aren't shared can be made more obvious." He added, "at least things like "grep" will work sanely, and people will be *aware* that 'Oh, this touches a file that may be used by the other word-size'." Linus continued:
"Right now, we have people changing 'i386-only' files that turn out to be used by x86-64 too - through very subtle Makefile things that the person who only looks into the i386 Makefile will never even *see*.
"THAT is the problem (well, at least part of it)."
Thomas Gleixner described an effort to create a unified x86 architecture tree, "the core idea behind our project is simple to describe: we introduce a new arch/x86/ and include/asm-x86/ file hierarchy that includes all the existing 32-bit and 64-bit x86 code and allows the building of either a 32-bit (i386) kernel or a 64-bit (x86_64) kernel." Andi Kleen expressed some concern, "I think it's a bad idea because it means we can never get rid of any old junk. IMNSHO arch/x86_64 is significantly cleaner and simpler in many ways than arch/i386 and I would like to preserve that. Also in general arch/x86_64 is much easier to hack than arch/i386 because it's easier to regression test and in general has to care about much less junk. And I don't know of any way to ever fix that for i386 besides splitting the old stuff off completely." Additional concerns about legacy issues were countered by Linus Torvalds, "there really isn't that much legacy crud. There are things like random quirks, but every time I hear the (theoretical) argument about how much time and effort we save by having it duplicated somewhere else, I think about all the time we definitely waste by fixing the same bug twice (and worry about the cases where we don't)." Among the justifications for a unified architecture, Thomas noted:
"We believe that the whole x86 CPU family is very much related and should be supported in a single architecture tree. All 64-bit CPUs implement the ability to execute pure 32-bit kernels, and will probably do so for the next couple of decades. So it's not like it will ever be possible to get rid of our legacies: for example even the latest 64-bit CPUs implement the legacy 'A20 line' feature that was already a weird outdated hack in the days of 16-bit x8086 CPUs."
Included in Andrew Morton's potential 2.6.23 merge list [story] were a series of patches to make the x86-64 architecture tickless. Andi Kleen, the x86-64 maintainer, replied, "I'm sceptical about the dynticks code. It just rips out the x86-64 timing code completely, which needs a lot more review and testing. Probably not .23." Linus Torvalds agreed, "we are *not* going to do another 'rip everything out, and replace it with new code' again. Over my dead body. We're going to do this thing gradually, or not at all." He went on to explain, "the patch-set itself actually looks fine, as far as I'm concerned. But it does seem to have that 'enable everything in one go' problem. I'd much rather see one time source at a time being converted, and enabled then and there, so that when people report problems and do a bisection, if it was HPET that broke, you get the commit that changed HPET."
In response to the pains caused by the original dyntick merge in 2.6.21, Ingo Molnar acknowledged, "we had 12 -hrt/dynticks merge related regressions between 2.6.21-rc1 and -final, and 4 after final." He went on to point out, "it's all pretty quiet today on the dynticks regressions front. (there are no open regressions in either the upstream i386 code or in the devel patches we are aware of)." As to the source of the bugs, he explained, "the majority of the above bugs were in the infrastructure code. (the worst was the generic resume/suspend one fixed in 2.6.21.1) And sadly, a fair number of the infrastructure bugs we introduced during the frantic clockevents/dynticks rewrites/redesigns we did between .20 and .21. That was a royally stupid mistake for us to do - instead of patiently waiting for the bugs to be shaken out we destabilized the infrastructure. (it was a 'let's make this thing so nice that it's impossible to reject' instinctive gut reaction.)" Linus replied, "one thing I'll happily talk about is that while 2.6.21 was painful, you and Thomas in particular were both very responsible about the thing, so no, I'm not at all complaining or worried about it in that sense! I just really _really_ wish we could have two fairly stable releases in a row. I think 2.6.22 has the potential to be a pretty good setup, and I'd really like to avoid having another 2.6.21 immediately afterwards."