but what do you define as powerful?
performance with compatibility (runs everything (not on everything))

and you say python is more powerful than actionscript whereas I could show you that it could do a similar job.
woah there, I never said "more"
meaning ActionScript is more comparable to Python, which in itself is far from as powerful as expected by the OP.
I did point out that python can be compiled "unlike ActionScript"

You also said android can't run ASM
I stated I'm not sure which ASM opcodes Android runs on (PPC or x86_64)
your article doesn't list either

Again did you google adobe air files?
I see you can package a .air as a .exe, but the process looks very shady...
about as shady as packaging a .jar in a .exe.

if the exe can run natively then the language might have some sort of power, but I wouldn't call it anything better than what java could do. :/

I'd like to know why I should switch to a pc.

Wait, have you heard of the Mac Pro? Have you seen the specs? Have you seen the benchmarks?

no, I hate apple and have no interest in their products.
I prefer to customize and install additional components,
rather than be limited with solid half-baked hardware that follows fixed standards.

don't take any of this about mac as hard facts because truthfully, this is all complaints from other people.

I hate apple enough as is to not use it because I like my freedoms.
(everyone I know who has an apple product has hacked it to where they could customize it to the point of installing linux on it)

I never hear anything truly good about apple devices that current devices can't already do.

I do hear about Apple constantly limiting more standards.

I hear about Apple sucking so much, I'm sick of hearing it.

prove to me, in simplistic terms, how blender runs better on a pc than on a mac? What parts of blender, rendering/animation?
I'm not saying anything on this until I know the differences.

but again, it all lies in the hardware.

and I'm not saying anything specific, just Blender in general as an example.

I'm glad you're a hardcore blender user, I am too.
I have nothing against blender.
but I have everything against half-baked hardware.
(Apple's not the only one at fault here; ever used a Dell or Compaq PC?)

don't get me wrong, Blender runs really well on my secondary PC which is that IntelGFX I mentioned earlier.
and that's only a single-core x86 intel Celeron 2.93GHz CPU; P4GV-LA MBd.
(yea, it's a piece of trash, but it runs well) :P

I'm sure your Mac could most likely kill that PC.

Wow.. there's been a lot of activity here since I last checked...

@Mike The original song is "Play with fire" by The Rolling Stones

I know, but I like Kandle's version better.

if you read the comments on most of them, the downvote is because of the ignorance.

I've learned that it's better to lead people to discover their own ignorance than trying to mock it or reprimand it.

APott is the one who married D

I'm just gonna say... 50% of marriages end in divorce.. ;)

there's only 1 language that's more powerful than C, GLSL

Really?? GLSL is only for vertex and fragment shaders in OpenGL, and it is a subset of C. I would hardly consider that more powerful.

I think that what you meant to say was "GPGPU" (General Purpose computing on GPUs) with things like CUDA, OpenCL, C++ AMP, OpenACC, etc.. These are essentially extensions (and some restrictions) to C and/or C++ to be able to compile the programs to run on GPUs or a blend of CPU/GPU. What is most awesome here is the parallelism you get (if you do it correctly, which is tricky). OpenMP and Intel TBB are also great tools.

And one important thing to understand here is that these extensions or libraries are part of the C/C++ language(s), in the sense that they are part of the argument about how powerful C/C++ is. In other words, you can write normal C++ code, add a few annotations to it, compile it for a GPGPU or distributed target, and get a 1000x speed-up.... Mind. Blown.
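A CPU-side analogy of that "add a few annotations" workflow can be sketched in plain Python (stdlib only, NOT CUDA/OpenCL); the point is just the shape of the programming model, where ordinary element-wise code gets parallel dispatch with a one-line change, much like adding an OpenMP/OpenACC-style annotation:

```python
# CPU-side analogy in plain Python (stdlib only -- NOT CUDA/OpenCL):
# ordinary element-wise code gets parallel dispatch with a one-line
# change, in the spirit of an OpenMP/OpenACC annotation.
from concurrent.futures import ThreadPoolExecutor

def f(x):
    return x * x + 1

data = list(range(1000))

serial = [f(x) for x in data]              # ordinary element-wise loop

with ThreadPoolExecutor() as pool:         # the "annotation"
    parallel = list(pool.map(f, data))     # same computation, parallel dispatch

assert serial == parallel
```

(In CPython the GIL means threads won't actually speed up pure-Python math; the real GPGPU gains come from data-parallel hardware. This only illustrates how little the code has to change.)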

the GPU isn't recursive though

Good code is rarely recursive, if ever. Most GPGPU languages ban it, instead of permitting something that is nearly always wrong, and even when it's not wrong, you can easily live without it.
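The kind of transformation those recursion-free kernel languages force on you can be sketched in plain Python (illustrative only, not tied to any particular GPGPU dialect): a recursive traversal rewritten with an explicit stack.

```python
# Illustration: rewriting recursion as iteration, the transformation
# a recursion-free kernel language forces on you.

def depth_recursive(node):
    # node is a nested tuple (left, right), or None for a leaf
    if node is None:
        return 0
    left, right = node
    return 1 + max(depth_recursive(left), depth_recursive(right))

def depth_iterative(root):
    # Same computation with an explicit stack instead of call recursion.
    best, stack = 0, [(root, 0)]
    while stack:
        node, d = stack.pop()
        if node is None:
            best = max(best, d)
        else:
            left, right = node
            stack.append((left, d + 1))
            stack.append((right, d + 1))
    return best

assert depth_recursive(((None, None), None)) == depth_iterative(((None, None), None)) == 2
```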

the only other reason is the amount of cores the GPU has which now lies above 900

I think that the GPGPU community mostly talks about "computing units", because GPUs aren't really structured in a classic "multi-core" paradigm. It's more like each "core" is actually capable of performing dozens or even hundreds of instructions in one cycle, in bulk. So, it's not really a good mental picture to see them as multiple cores.

So, for example, when you have a GLSL shader that is applied to a buffer of pixels, it's not like you have 100 threads each executing the same shader program on a different pixel; instead, it's like the shader program executes each instruction 100 times at once on 100 pixels. So, it's really one program that executes on a huge number of computing units at once. We generally only talk about cores when they are doing (or could be doing) independent work (and most importantly, with independent L1 cache).
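That mental model can be caricatured in a few lines of plain Python (this is only an analogy, not how a driver actually dispatches work; the "shader" here is a made-up brightening function):

```python
# Caricature of the GPU execution model: one "shader" program applied
# in bulk across a whole buffer, rather than many independent threads.

def brighten_shader(pixel):
    # the "shader": one instruction sequence, run per-element
    return min(pixel + 50, 255)

def dispatch(shader, buffer):
    # the hardware effectively runs each instruction of `shader`
    # across every element of `buffer` at once; plain Python can
    # only fake that with a bulk map
    return [shader(p) for p in buffer]

pixels = [0, 100, 200, 250]
result = dispatch(brighten_shader, pixels)  # -> [50, 150, 250, 255]
```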

This is wrong because I don't have a GPU on my mac mini

The Mac Mini definitely has a GPU. Different versions of the Mac Mini have all had GPUs; in fact, they've covered all vendors, from integrated Intel graphics, to Nvidia, to ATI Radeon. And btw, you don't have to have a dedicated graphics card in order to have a GPU. The GPU is just the graphics processor chip, which, on integrated graphics solutions (like Intel's), sits on the same board as the CPU and shares its RAM.

I have yet to run into a modern laptop (WinXP+) w/o a GPU.

Anything that has either a display or a plug for a display (like a VGA/DVI/HDMI port) certainly has a GPU. I don't think that there are any modern computers without a GPU (from the last decade), aside from server rack computers. For crying out loud, even smartphones and small embedded chips have GPUs these days.

and I don't look up to Apple, I don't even think they know what shaders are. :P

Oh... I think they do. Remember, Apple doesn't use DirectX, they only use OpenGL. They've been very involved in its development and in backing it, and in petitioning companies like Nvidia and AMD/ATI for good support and adding features, including shaders.

I'm actually curious now if any macs at all have a GPU >.>

They all do. And, if I'm not mistaken, I think that part of their early success in the graphics design and architecture business was due to the superior graphics, as granted by their early adoption of GPU chips (sharing RAM with the CPU). But I couldn't find a reference for that.

And these days, Macs mostly use AMD or Nvidia graphics cards in their computers. Of course, the "Mac Marketing" rarely includes those nitty-gritty hardware details, but you can easily look it up, as I just did.

prove to me, in simplistic terms, how blender runs better on a pc than on a mac?

I think the core of the issue is just that you could build yourself a beefed up awesome PC that would blow anything else out of the water. With Macs, you just gotta live with whatever is the thing Apple put out.

And then, for the same PC (hardware), the difference between running OSX versus Windows is not something that I know much about in terms of performance. The OpenGL benchmarks that are easily found by a google search seem to put Mac OS X at a slight disadvantage relative to Windows, but not always, with Linux usually falling somewhere in between. But a lot of that comes down to platform-specific optimizations, which is where Windows benefits from more programmers optimizing graphics code for it.

And all that does not mean that Blender won't work perfectly fine on a Mac.

if I could +1000 you, I'd do it 10 times :3

I'm just gonna say... 50% of marriages end in divorce.. ;)
Amen to that! XDD

and thanks for correcting me about macs :)
but it looks like I was right about the fact they're just like a Dell PC :P

I was really questioning my statement about macs not having GPUs, and yea, it'd make little sense if they didn't have one to render those fancy gfx.

I'll give them a +1 for hardcore OpenGL support. :3

but about windows vs mac/linux, most of the linux builds of windows programs (such as chrome) run slightly faster and use slightly less RAM.

I'm using Xubuntu 14.04 configured to look a lot like WinXP <3 <3

I really like that info about the gpu "cores" :D
(this is the first time I've heard them called something other than that)

you can write normal C++ code, add a few annotations to it, compile it for a GPGPU or distributed target, and get a 1000x speed-up.... Mind. Blown.
that's exactly the power I was talking about :)

gpgpu though, that certainly rings a bell with me... (python shader code)
though that was when I was looking into PyCUDA and PyStream and trying to get the compiler working.
it's a shame Nick Bray doesn't provide any working examples, just his working shader code =3=

btw, PyStream is a Python subset that's translated into GLSL before being compiled, so it's a very limited python that's just as powerful as GLSL. ;)

I think GPU programming is the future, although you're probably limited in how many you can link together; also remember a GPU has a memory limit.

If you've ever used cycles, which harnesses the power of CUDA on nvidia, you know you can't render large scenes on the GPU, which is why sometimes you get a black scene... And yes, when I stated earlier in the thread that my mac mini doesn't have a GPU, I meant a decent one, specifically a CUDA-enabled nvidia, as this is where all the serious programming effort has been spent. Thus cycles and the octane rendering engine.

As to blender running better on windows than on macs, hypothetically speaking, you can't daisy chain 3 780GTX cards as you can on a pc (but this is only the render process, nothing else), and a lot of youtube videos have set out to prove a similar pc can outperform a mac pro for the same money, and all the tests prove this incorrect.

If you put an nvidia titan in your mac pro, you're in the top 1% of blender users, I would say. Installing and setting this up on a mac pro is indeed more difficult than on a pc.

@mike have you ever done GPU programming?

@mike have you ever done GPU programming?

Not too much lately. In the old days, when I was first learning to program, I used to do a lot of 3D graphics game stuff (very amateurish, of course). But in those days, pixel shaders did not exist yet, so it was mostly software-based + fixed-pipeline (e.g., like doing the texture blending on the CPU and then feeding it through to the GPU via OpenGL's fixed pipeline functions). But I did manage to do a few neat things even with only that; display-lists were pretty much the fanciest thing available (now they're pretty much obsolete, afaik).

It got a lot more fun when shaders started to appear, and I did dabble a bit with that, but things were still pretty basic back then (most graphics cards supported 2 multi-textures, maybe 4 if you were lucky, and GLSL was a pretty restrictive subset of C, and there wasn't anything fancy like render-to-texture or VBOs). But at that point, I moved on to other things like robotics, dynamic simulations, control software, artificial intelligence and so on... I don't really do 3D rendering directly anymore (for the few 3D stuff that I do, I use Coin3D, which serves my purposes just fine (I don't need fancy effects)).

I would love to do some GPGPU, but I just haven't found the time or purpose for it yet. Part of the problem is that those kinds of parallel computations are difficult to do because most (current) algorithms are not naturally parallelizable, and it requires quite a bit of analytical work to make that transformation, not to mention that it poses some software engineering challenges too. But if I stumbled upon some easily parallelizable algorithm that is both nice and purposeful, I would jump right in it.

Python is pretty gritty when it comes to GLSL as well...
documentation is quite lacking when it comes to learning GLSL...

I've been doing some pretty amazing work with display lists, though yes, they ARE deprecated...
good news though is Nvidia continues to support them in their hardware. :)

I still want to use GLSL though for efficiency reasons and BECAUSE I'm a python 2.7 programmer... :P

@mike: you're into AI too :o
cool =3
if you've ever heard me talk about A.I.I., it's for games...
(giving bots (including in-game animals) natural behaviors)
it's a thing for my game system... heh
it actually does much more than that and even doubles as a sort-of CleverBot with a much more complex backend... bleh

that's just some of the small stuff I'm looking at though :P

so yea, good to hear! =3
hopefully we could meet some time :3

I would love to do some GPGPU

Yes, sorry I should have been more specific, I did mean GPGPU programming, and not your regular directx or opengl stuff.

A lot of the good stuff with blender cycles started with GPGPU programming, and it seems only nvidia has been the standard here; if you check out luxrender, this was one of the first unbiased render engines to harness the GPU, then along came octane, which was bought by otoy.

The guy who was originally writing octane left and I believe he contributed to much of the source code that blender cycles now sits on although, no one will admit to this of course.

@tcll if you use blender as well you can check out my resources on blenderartists.


And have you ever done any game engine tests in blender with python?

I haven't really done anything too grand with blender...
I started getting into working on a series called "Super Smash Bros. Revenge of the Forgotten" but never really got in depth with it because I didn't have a decent file converter...

so I've never really gotten too in-depth with blender as I've been working on my main program "Universal Model Converter" and a few side projects for it:
- UMC_SIDE (Script IDE) - a fully interactive and informative IDE designed to help noobs build UMC-scripts (better than VS2010 + Python Tools)
- UMCSL - UMC's recompilation language, designed as a visual node-like scripting language following very simple ASM-command standards on a common interface.
older images and tests:
the language is still far from complete.

I do plan on building a few games using python, GLFW, and PyOpenGL/GLSL and the like.
I might only use blender for modelling... heh

as for why I built UMC when blender does practically the same thing...
have you ever tried building a script for blender?
it's a headache and a half.
UMC has its own file interface with extendable binary support and pointer support. (no need for "import struct" and the like)
and fully automates the import process with an interruptible state engine on a dual-sided buffer for untransformed and pretransformed models.
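For comparison with that "no need for import struct" point, here's roughly what reading a small binary header looks like in plain Python with the stdlib `struct` module, i.e. the boilerplate such helpers wrap up (the `MODL` magic and byte values are invented for the example):

```python
# Plain-Python equivalent of a magic + size header read, the kind of
# "import struct" boilerplate a binary-file helper layer hides.
# The "MODL" magic and the size value are made up for illustration.
import struct
from io import BytesIO

f = BytesIO(b"MODL" + b"\x00\x00\x01\x00" + b"rest of file...")

magic = f.read(4)                          # like:  FileMagic = string(4)()
(size,) = struct.unpack(">I", f.read(4))   # like:  FileSize = bu32()  (big-endian u32)

assert magic == b"MODL"
assert size == 256
```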

I'm speaking for UMC 3.0 which is still in development.
but it can boast a good 73% (at most) less code used when comparing a UMC3.0 script to a Blender26 script.

UMC3.0a (dev5) is currently in the works and soon to be released following 3.0 scripting standards to give users the ability to build scripts, while giving me time to perfect 3.0, which will be capable of far more.

yes, I know... I have a disgrace for a life. :P

btw, your BA posts are amazing!
nice work :)
I'm trying to get to be that good,
but my current understanding of materials is really lacking... heh

Thanks. No, I think it is good to pursue your hobbies, as long as you enjoy them.

I like your UMC; can you explain in more detail what it does? Does it convert .objs to .3ds and other formats? I'm intrigued.

I've dug out a screenshot of a game engine I was developing in blender 4.9 I think it was, many years ago. I'm trying to dig out the python scripts from my backup storage.

However, I've abandoned game dev in pursuit of my ultimate CMS in PHP.


have you ever tried building a script for blender?

Yes I have actually, thanks to blender my knowledge of python is medium to advanced. Basically, I extended on blender's existing game engine and programmed my own custom methods to move around an environment and shoot the buildings.

I didn't think it was that difficult; can I ask specifically which part you'd say was a pain?

no I think it is good to pursue your hobbies, as long as you enjoy them.

indeed :)
yea, it's a pain, but it's fun as heck watching your creations do their work. XD

after I finish something on UMC's GUI, I usually sit back for 5 minutes and stare at it just thinking about how the work I've done is just amazing. :)

I've never taken a class for anything above Algebra 1 :P

I like your UMC; can you explain in more detail what it does? Does it convert .objs to .3ds and other formats? I'm intrigued.

thanks :)
yea, it imports and exports any supported model format.
you can add support by writing a script for it.

it's currently just a model converter,
but as you can see I have plans for bigger support. :)
(Universal Game Editor is the final name, and is prefixed in UMC's function names)

if you want to import a binary model file, all you have to do is:

def ugeImportModel( FileType, UI_Commands ):

    FileMagic = string(4)()
    FileSize = bu32()

    # extendability:
    bf5 = bf(5) # 5-byte IEEE754 float

    data = bf5()

there are header functions used for specifying the supported file formats and internal script name.

the UI_Commands is for the script UI.
what you specify via the UI is returned there.

the script formatting is kept simple so anyone can work with the script w/o getting a major headache from all the complications... heh

and like I stated earlier, the process of passing the data into UMC's interface is almost exactly like using the OpenGL FFP.

the process is interruptible though so you don't have to worry about data order...
all you have to worry about is ID. ;)


though the facepoints must be defined before being referenced.

I didn't think it was that difficult; can I ask specifically which part you'd say was a pain?

specifically the model I/O
there's no automation and everything relies on complex standards.
nothing is done for you.

in UMC, everything is done for you :)
including the transformations in 3.0
all you have to do is literally pass the data to the interface.

in blender, you have to convert it, and then define and update particular places in blender's interface

in UMC3.0a however, I built that w/o the dual-sided concept, so it only has a PT buffer. (models stored in T-pose)

in UMC3.0 though you can store models in:
PT: http://lh4.ggpht.com/-5Ub476AqRvc/UhO1Jkpq2_I/AAAAAAAAEtU/gqWD88v5O-g/s1152/MeleeNormals.PNG
UT: http://lh3.ggpht.com/-Ybh-jsPDr5I/TgHiEIRhrMI/AAAAAAAAHQ4/uNIl48q7lcY/s1024/Pichu_face_success.jpg
and all the conversions are applied in between on both buffers. ;)

one thing to note though
to keep UMC-scripting standards simple,
it's best to avoid using the import statement, though it's not banned. ;)

UMC basically sandboxes everything in its own virtual environment.
that's how I was able to pull off the pointer support:

data = bf32()
dptr = deref(data)

data2 = ref( dptr, bf32 )
ref( dptr, bf32 ).set( 0.0 ) # data and data2 will print 0.0
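One plausible way to emulate that pointer behaviour in plain Python (a guess at the mechanics, not UMC's actual implementation) is a shared mutable cell, so every "reference" sees updates made through any other:

```python
# Guess at the mechanics behind deref/ref-style pointers in plain
# Python: wrap the value in a shared mutable cell; aliases see updates.

class Cell:
    def __init__(self, value):
        self.value = value

    def set(self, value):
        self.value = value

    def get(self):
        return self.value

data = Cell(1.5)
data2 = data        # both names alias the same cell (the "pointer")
data2.set(0.0)      # writing through either alias updates both

assert data.get() == 0.0 and data2.get() == 0.0
```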

and if you've looked through a few of my other posts, Gribouillis has been helping me out with this:

vert = struct( 12, 'x,y,z',
    x = bf32,
    y = bf32,
    z = bf32
)

vcount = bf16()*32
ugeSetVertArr( [ vert() for i in range(vcount) ] )

so yea, there's a lot I've been doing. :P

idk why I put vcount = bf16()*32, I guess my autism was acting up there... :P
so vcount = bu32() is the correct thing. :P
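For reference, the stdlib equivalent of that 12-byte vertex struct would look something like this (assuming big-endian 32-bit floats, matching the `b` prefix convention of `bu32`):

```python
# Stdlib equivalent of a 12-byte vertex record: three 32-bit
# big-endian floats (endianness is an assumption here).
import struct

raw = struct.pack(">3f", 1.0, 2.0, 3.0)   # a fake 12-byte vertex record
assert len(raw) == 12

x, y, z = struct.unpack(">3f", raw)
assert (x, y, z) == (1.0, 2.0, 3.0)
```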

I'm not even going to bother downvoting that one 9_9

@SaneleSHaBbA: did you even read the older pages?

Hey! He's entitled to his opinion. (Although his post would have been better if he had supplied some justification for his opinion.)

Hey! He's entitled to his opinion.

I have nothing against that :)

the only reason I want to downvote is because that same thing has been downvoted earlier multiple times and explained as to why.

I will give some credit to Java though, it's much better than .NET languages... heh
but C++ is still far more powerful, which isn't even the most powerful, though close to it.

this is of course excluding all forms of ASM as requested by the OP.

also, I finally have a legit definition for Power when it comes to programming languages:

Power = how much you're able to do, with as little resource requirements as possible, at max performance.

if anyone wants to correct me, feel free :)
(it's how problems get solved) :P

my definition of programming power is one reason I constantly bamf pythonic standards for inefficiency.

Tcll your definition of power is not quite accurate.

Let's be frank here, if you're a c++ developer you're going to plump for c++, java the same. The point is, I'm guessing most of the regulars including myself could probably call ourselves, medium to expert in any given subset of language, and looking at some of your work I have no doubt you're probably the same.

We all see the merits of the languages although some may be biased to one over the other.

It's difficult to see ASM or even binary as the most powerful, sure it is the most versatile but think about the number of steps you would need to write a GUI app in ASM.

Right tool for the job and all that jazz.

I'd like to throw Shoes in as a contender, as I don't think it has been mentioned before in this thread.


I'm working my way through it at the minute. It's the perfect blend of cross platform, native look and feel, and minuscule and easy to read code. FTW

It's difficult to see ASM or even binary as the most powerful, sure it is the most versatile but think about the number of steps you would need to write a GUI app in ASM.

that's exactly what makes binary (ASM opcodes) so powerful.
you should ask yourself: how few steps does this program need to take?

not even C can get it down to exactly as few steps as needed, though, while it varies, it can come significantly closer than C++.
it really depends on which compiler you use for your specific program.

I'd like to throw shoes as a contender

lol nice use there XD

but I see it's ruby, which isn't much different from python... heh
(its syntax is very different, yes, but I'm talking about the byte-code, which can be comparable)

really nice though :)

The point is, I'm guessing most of the regulars including myself could probably call ourselves, medium to expert in any given subset of language, and looking at some of your work I have no doubt you're probably the same.

I'm not putting myself above anyone else here other than the fact that I know more about my particular area of knowledge... heh
though I'm certain there are many others who know much more than me. ;)
we're all human after all. :P

C# and Java are the most powerful and secure languages!

sorry, but both of those languages are FAR from secure.
(I can decompile both of them with 1 finger)

java is the most powerful prog. lang...
everything is fair in love, war, and java

The thing is, none of them is the best. They all have their various functions, and different programmers use the ones that suit them best.

Is this the place for that kind of question?

sorry, but both of those languages are FAR from secure.
(I can decompile both of them with 1 finger)

I think it would be good to define what you mean by 'secure'. If you mean disallowing third parties to reverse engineer or otherwise steal code, then you're correct that by default those languages are not secure. Though there are ways to obfuscate assemblies to make such actions more difficult. However, it's not impossible to reverse engineer any software, regardless of what language it was written in. The only difference is the effort required to do so.

If by 'secure' we're talking about the more conventional cracks and exploits then I'd point out 'security by obscurity' is the pinnacle of naiveté. That kind of 'secure' is deeper in that holes are avoided through robust code, use of secure libraries, and in the case of .NET or Java, a safe runtime environment. The language itself has little to do with being 'secure' unless its definition encourages unsafe practices. But that's a much more in-depth discussion than just saying "XYZ language is FAR from secure" because there are many variables involved.

Simply being able to easily decompile an assembly does not make the assembly insecure in terms of creating an exploit, because transparent code doesn't inherently contain exploitable holes. If that were the case, open source would be stupid.

@Ambrish_1: do you know how java works??
it's not as powerful as you say.

I happen to know C is the most powerful, as Java was built with C++, which is almost as powerful as C.

aside from that fact, it's safe to say javaw.exe is an interpreter for .class files zipped in .jar files.

it's only more powerful than python which operates on compiled .pyc files.
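That part about .pyc files is easy to check with the stdlib; CPython will emit the compiled bytecode file on demand:

```python
# CPython really does run from compiled .pyc bytecode; py_compile
# produces one explicitly (paths here are throwaway temp files).
import os
import py_compile
import tempfile

src = os.path.join(tempfile.mkdtemp(), "hello.py")
with open(src, "w") as f:
    f.write("x = 41 + 1\n")

pyc = py_compile.compile(src)   # writes the .pyc and returns its path
assert pyc.endswith(".pyc") and os.path.exists(pyc)
```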

I will say, while any .net language sucks, they can be slightly more powerful than Java, but not as powerful as C++

and any other language that can compile an exe can be anywhere between .net and ASM.

and any other interpreted language can be between java and past ruby

now for the love of god people, please stop saying java is the most powerful.

I prefer Python which is legitly powerful, especially with the C++ integration, but is far from the most powerful.

I personally love .NET just because of what it lets me do. There are tons of libraries for it, and thanks to Microsoft, more people are developing them

Also, anything I can think up, I can usually do. You also have the MSDN website that has documentation on pretty much everything, and there is the nice luxury of not having to worry about memory management. It does also allow for people to get nitty and gritty if they want, doing some more hardcore coding you might say. Plus let's not forget because it's managed you can do stuff like CLR in SQL Server, which I have come to learn is REALLY NICE. One other thing, Visual Studio is a nice IDE to work in.

Of course I could list more, but that's a few off the top of my head.

However, that's a personal opinion; each language has its pluses and minuses, and depending on what it's needed for, one could be better than the other.

I mean heck, SQL is pretty powerful. For such an old and simple language, it can do a lot!


I forgot to give .NET credit for having a garbage collector; yea, that is one of the nice aspects of it when it comes to memory management, unlike C...

the only problem with .NET is it's a microsoft product, limiting your freedom to program strictly to specific windows standards.
I can't run .NET 4.5 and up, for example (as I refuse to run anything above XP64 due to MS user control infringements)

so thus, java has more freedom, as does C++ and python.
the difference between java and python though is python has a built in garbage collector (no need for .NET, which is one reason MS hates python)

if you want the most powerful programming language in terms of usability, I'd have to say Cython (a superset of python with pointer support) would be the way to go.

though cython is much more like C and has a much more complex syntax, which is one reason I haven't mentioned it until now (it's not one of my favs)

so yes, Python is missing pointer support (aside from Cython), but I glorify its use over what it can do.
as for its performance, it's only legitly powerful (without Cython), but that's not stopping me from following GL 4.0 standards with GLSL for building a current gen game with python's cross-platform compatibility.

but arguably with GL4.0, Java would be just as effective, and probably as good a choice as Cython or CPython when you want to store your model data in CPU RAM (similar to how Minecraft works).

also, I know about IronPython, which was MS's attempt to kill Python that failed miserably...
I always thought the .NET control implementations in python would be a good idea, but the syntax was soiled, as you can hardly do much with the language w/o specifying a single class.

sorry if my knowledge comes as a downer to you, but for my personal opinion,
I don't feel MS deserves the glory they're getting when all they're doing is shoving monkey poop down their user's throats and trying to control everything they do.

I don't hate MS anymore, I despise them for the filth they've been feeding their community since Windows Vista, and Windows 12 won't be any different.

Also, anything I can think up, I can usually do [in .NET].

Then I would say that your imagination or worldly experience in programming is seriously limited. Off the bat, it's important to realize that on a purely abstract level, all Turing complete languages can do anything. But on a practical level, there are two additional concerns: (1) how easily can I create a working solution, and (2) how good of a solution can I create. For me, a powerful language is one that scores very high on both fronts (like C++), and they are not mutually exclusive (contrary to some popular beliefs, especially within Java/C#/Python echo chambers).

High-level managed or interpreted languages aim to score high on concern (1) through a simplified programming paradigm (e.g., pure OOP), lots of run-time instrumentation of code and data (e.g., universal base-class, run-time reflection, duck typing, etc.), many layers of indirections, and very conservative (and safe) memory and threading models. The result of that is an insurmountable upper-limit on concern (2) (or "expressive power"), which actually makes some major disciplines of software engineering entirely pointless in those languages, not because they are not needed, but because they are not doable, due to techniques used that cannot be expressed in those languages. This typically explains why some people's imagination doesn't extend very far, because as a Java or .NET developer, these fields of software engineering would simply be outside your observable universe.

I cannot count the number of times I have encountered an extremely cool, efficient, clever solution to a problem, where the only languages I could think of that could allow this solution to be realized is C and C++ (and maybe D, I guess).

Here is one example that taps into a very real problem. There is a technique by which you use what's called a "space-filling curve" to generate a mapping between some uniform N-dimensional coordinates and a 1-dimensional coordinate that you can use as an index into an array. The main advantage of doing this is that this curve has the property of packing all neighboring elements (from all directions) very near to each other in the array itself. This becomes very important in things like large-scale physics simulations, i.e., the kind of stuff that runs on super-computers, but it's also important for smaller applications too (like 3D games, engineering analysis software, etc.), because it allows for very effective use of parallelism and cache memory, and it kind of just works like magic, with performance improvements of several orders of magnitude. I have done something similar in the past and the performance improvement was 100x to 300x (all just single-threaded, no parallelism was used) over a more naive memory layout (still a contiguous layout, which is already way better than what Java/C# give you).
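A minimal sketch of one such curve is the Z-order (Morton) curve, which interleaves the bits of the coordinates so that spatial neighbours tend to land near each other in the 1-D array (a simpler cousin of the Hilbert curves often used in serious codes; this sketch is illustrative, not the exact layout described above):

```python
# Z-order (Morton) curve: map 2-D grid coordinates to a 1-D index by
# interleaving coordinate bits, so spatial neighbours pack together.

def morton2d(x, y, bits=16):
    index = 0
    for i in range(bits):
        index |= ((x >> i) & 1) << (2 * i)      # even bit positions <- x
        index |= ((y >> i) & 1) << (2 * i + 1)  # odd bit positions  <- y
    return index

# the four cells of an aligned 2x2 block pack into 4 consecutive indices:
block = sorted(morton2d(x, y) for x in (2, 3) for y in (2, 3))
assert block == [12, 13, 14, 15]
```

A row-major layout would scatter those same four cells across two widely separated rows of the array; that locality difference is where the cache wins come from.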

And here is the point, doing something like this in a high-level language like Java or C# is a completely pointless exercise, because the language or platform (JVM, .NET) mandate that its one-size-fits-all strategy for managing memory layouts is all that you will ever get to use. This means that you are effectively stuck with the worst possible solution and cannot improve upon it, but they would claim that this "working" solution will be very easy to write... even if it's the worst you could do, and yet the best you can get in that language. I think these languages might have some ways to break out of that prison, but those mechanisms are like beaches in England: they're there, but who the heck wants to go there.

I'm perfectly OK with the idea of just quickly getting to a working solution and not worrying about implementing the "best" solution. But I cannot accept to call a language or platform "powerful" when the closest you can reasonably get to the "best" solution is ridiculously far from it.