dtgreene: ...
I've finished the demo, this bit is the heavy code that makes binary data. Next is this bit, which is made per map and is a nice little way to make a map. I didn't include encounter data just to save time and effort. I'll also be saving this in my library for obvious purposes. Unfortunately, php is always subject to annoying changes, so a better solution might be advisable for the future, but this is a simple way to look at it. Of course, it wouldn't be complete without the binary dump. 570 bytes and includes most of what you'd need.
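The rough shape of it, as a C sketch rather than the actual PHP (the header fields, names, and layout here are made up for illustration and don't reflect the real 570-byte format):

#include <stdint.h>
#include <stdio.h>

/* Hypothetical map header -- layout invented for illustration only.      */
/* A real format would write the fields one by one (and pick an           */
/* endianness) instead of dumping the struct, to avoid padding surprises. */
struct map_header {
    uint16_t width;   /* map width in tiles  */
    uint16_t height;  /* map height in tiles */
    uint8_t  tileset; /* which tileset the map uses */
};

/* Write the header plus one byte per tile; returns 0 on success. */
static int write_map(const char *path, const struct map_header *hdr,
                     const uint8_t *tiles)
{
    FILE *f = fopen(path, "wb");
    if (!f)
        return -1;
    size_t count = (size_t)hdr->width * hdr->height;
    int ok = fwrite(hdr, sizeof *hdr, 1, f) == 1 &&
             fwrite(tiles, 1, count, f) == count;
    fclose(f);
    return ok ? 0 : -1;
}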
kohlrak: My advice to you in this regard is C. You might get some convenience improvements with C++, but not much and you could run into issues with the "binary compatibility" bs coming up (might not, too, it just really depends on how OSes bother to handle the drama [i'm honestly hoping they straight up reject ISO's new standard whenever it comes out, 'cause it's absolutely not necessary to change this, and they're probably doing it so they can add all the internal junk that java, C#, and such languages have that tends to be more bloat than benefit]).
Fortunately, you can use
extern "C"
to define functions that use the C API, which I would not expect to change (and which is the easiest way to interact with other languages, since every mainstream language has some way to call C functions).
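In a shared header, that usually looks like the standard __cplusplus guard; the function name here is just an example:

/* engine_api.h -- callable from C, C++, and anything that can call C functions */
#ifdef __cplusplus
extern "C" {
#endif

/* Plain C function: no C++ name mangling, so the symbol stays "engine_init". */
int engine_init(int width, int height);

#ifdef __cplusplus
}
#endif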


To put shader source code into the executable, I was able to make a cross-platform assembly source file that simply includes the shaders into its binary; this should work for other files as well, if they're to be statically compiled into the executable. (The .S file is platform independent because it doesn't include any actual assembly language instructions, though there's no guarantee that it would work in a different toolchain, like MSVC instead of gcc or clang.)
kohlrak: I'm actually trying to write an example for you in php right now. I thought about this before, but I had some problems getting LLVM on my tablet (android, ARM) to take this. It could be my lack of experience with LLVM/GCC. I'm curious what you did, 'cause if it's cleaner it'll probably be less hacky than my current stumbling block of trying to figure out how to make php output an int (as an int instead of string).
The code I use is literally this:

.data
.global pixel_shader
pixel_shader:
.incbin "pixel.glsl"
.word 0

.global vertex_shader
vertex_shader:
.incbin "vertex.glsl"
.word 0
It assembles (using GNU assembler) on both x86 and ARM, as it doesn't contain any actual assembly instructions. The ".word 0" at the end of each part is there to ensure that the strings are null terminated, so that C doesn't get confused.
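On the C side, those blobs then show up as ordinary symbols; the declarations below match the .global labels in the .S file, and the GL lines are just comments showing the intended use:

/* Exported by the assembly file above; each blob is null-terminated. */
extern const char pixel_shader[];
extern const char vertex_shader[];

/* They can be handed to GL like any other C string, e.g.:  */
/*   const GLchar *src = pixel_shader;                      */
/*   glShaderSource(shader, 1, &src, NULL);                 */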
Post edited June 03, 2021 by dtgreene
kohlrak: Given that, unless you're going full 3d i doubt you'll benefit from this, unless i'm misunderstanding where you're going, of course. You seem to have a preference for 2d, which doesn't benefit much (if at all) from shaders. You can get some benefits from throwing your textures onto 3d polies for resizing if you need to do that, though, and maybe from transparency. I feel like you'll be able to benchmark the two ideas out when you get the graphics up and running.
dtgreene: The basic architecture of the graphics system looks something like this:
* The game uses tilemaps. There's a nametable (to use a term that NES homebrew developers use) that indicates which tiles go where.
* Two triangles are rendered that fill the screen, and the vertex shader is trivial.
* The fragment shader, however, is not; for each pixel, it looks up the nametable to determine which tile from the tileset goes in that spot, and which pixel of that tile lands on this screen pixel. This shader does most of the work.
* Notably, there are no textures in the usual sense; it's all done in the fragment shader. (Also worth noting that this provides the option of having multiple layers.)
Seems like more work than necessary. Triangles are all normal, but if you're mapping pixels onto only 2 triangles (why not use a quad at that rate?), it seems like you'd be better off using a 2d library like SDL2 and blitting it in, since you're ultimately doing the same thing anyway, if i'm reading what you're saying right. If i were to use the GPU, i'd make flat polies (either quads or triangles) and use traditional textures, then find some way to throw the transparency onto the GPU (i've never really looked into this outside of SDL and layered windows, but i imagine the GPU has some function for dealing with an alpha channel in a texture).
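For what it's worth, the per-pixel lookup described in the quote boils down to roughly this (a C-style sketch of the lookup only; tile sizes, the nametable layout, and all names are made up here, and the real thing is a fragment shader, not C):

#include <stdint.h>

#define TILE_W 8   /* tile size in pixels (made-up value)      */
#define TILE_H 8
#define MAP_W  32  /* nametable width in tiles (made-up value) */

/* For screen pixel (px, py): find its cell in the nametable, */
/* then find which texel of that tile covers this pixel.      */
static uint8_t sample_pixel(const uint8_t *nametable, /* MAP_W * MAP_H tile indices     */
                            const uint8_t *tileset,   /* tiles stored one after another */
                            int px, int py)
{
    int tile_x = px / TILE_W, tile_y = py / TILE_H;          /* which map cell           */
    int in_x   = px % TILE_W, in_y   = py % TILE_H;          /* position inside the tile */
    uint8_t tile = nametable[tile_y * MAP_W + tile_x];       /* look up the tile index   */
    return tileset[(tile * TILE_H + in_y) * TILE_W + in_x];  /* fetch that tile's texel  */
}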
kohlrak: the indentation BS of python is probably the easiest example
I like Python's indentation, as it makes it easy to tell what code is part of which block, and it avoids bad indentation style. Also, indentation is easier to type than curly braces. (The only issues with this are when you try to mix spaces and tabs in the same program, which can cause strange error messages in some cases, and when trying to post code in places that eat whitespace.)

kohlrak: (why not use a quad at that rate?)
OpenGL ES doesn't have quads.

(Some platforms, like the Raspberry Pi, only support OpenGL ES (unless you use something like LLVMpipe).)
Post edited June 03, 2021 by dtgreene
kohlrak: My advice to you in this regard is C. You might get some convenience improvements with C++, but not much and you could run into issues with the "binary compatibility" bs coming up (might not, too, it just really depends on how OSes bother to handle the drama [i'm honestly hoping they straight up reject ISO's new standard whenever it comes out, 'cause it's absolutely not necessary to change this, and they're probably doing it so they can add all the internal junk that java, C#, and such languages have that tends to be more bloat than benefit]).
dtgreene: Fortunately, you can use
extern "C"
to define functions that use the C API, which I would not expect to change (and which is the easiest way to interact with other languages, since every mainstream language has some way to call C functions).
I don't know if that's going to remain the case, though. You'd think they wouldn't bother changing the mangling for C++ (as it is right now, it's just _Z[function name here][parameter suffixes here]; for example, foo(int) becomes _Z3fooi), but that's precisely what they want to do, and possibly change the internal handling of certain types. They specifically cite this as a problem for "why we can't move forward," which indicates to me that this is indeed what they want to change.
kohlrak: I'm actually trying to write an example for you in php right now. I thought about this before, but I had some problems getting LLVM on my tablet (android, ARM) to take this. It could be my lack of experience with LLVM/GCC. I'm curious what you did, 'cause if it's cleaner it'll probably be less hacky than my current stumbling block of trying to figure out how to make php output an int (as an int instead of string).
The code I use is literally this:

.data
.global pixel_shader
pixel_shader:
.incbin "pixel.glsl"
.word 0

.global vertex_shader
vertex_shader:
.incbin "vertex.glsl"
.word 0
It assembles (using GNU assembler) on both x86 and ARM, as it doesn't contain any actual assembly instructions. The ".word 0" at the end of each part is there to ensure that the strings are null terminated, so that C doesn't get confused.
See, that's what I tried to do, but I had trouble actually exporting that.

Also, you have to be careful with .word, because a word in ARM is 4 bytes, not 2, whereas it's 2 bytes on x86. For your null terminators there, that's not a problem. And then there's also the ugly trouble of forcing it to export the raw data, since the assembler writes a.out as an ELF object rather than a flat binary, which isn't trivial to convert on all systems. For example, on my one system, ld is not available, and objcopy wants a target. And, given it has a specific target machine, sometimes this doesn't start at offset 0, which can be a problem, since it can vary per architecture.

This isn't too bad if you're trying to embed it into your program, but if you're trying to use this to make a binary format for something external to the program itself, that's a problem.

dtgreene: I like Python's indentation, as it makes it easy to tell what code is part of which block, and it avoids bad indentation style. Also, indentation is easier to type than curly braces. (The only issues with this are when you try to mix spaces and tabs in the same program, which can cause strange error messages in some cases, and when trying to post code in places that eat whitespace.)
It can make it unreadable if you have a long object chain or want to spread a long function call out over several lines to comment on each parameter. Also, trivial things like a quick "fprintf(stderr, "File not found!\n"); exit(1);" can't be thrown on one line (which makes things seem far less verbose, as you'll see in the php code above). And, let's not get into the forever into egypt stuff with excessive nesting.
kohlrak: (why not use a quad at that rate?)
OpenGL ES doesn't have quads.

(Some platforms, like the Raspberry Pi, only support OpenGL ES (unless you use something like LLVMpipe).)
Oh yeah, forgot about that. Still, it's easier to make a pseudo-quad and map the image to that ("draw_quad(x, y, xsize, ysize, texture);") so that you only have to deal with it that way. They'll be quads as far as you're concerned, but you just have to write the code that makes a quad out of triangles that you then texture. And I did google the alpha channel thing and indeed gles supports alpha channels. And, of course, for performance you'll want to cache as much as you can. The less you touch the FPU, the better.
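A sketch of what that wrapper could sit on top of, in GLES-friendly terms (interleaved x, y, u, v per vertex; the function name and layout are made up, and the actual buffer/attribute/draw-call setup is left out):

#include <string.h>

/* Fill out[24] with two triangles covering (x, y)..(x+xsize, y+ysize), */
/* mapping the whole texture onto that rectangle.                       */
static void quad_vertices(float out[24], float x, float y,
                          float xsize, float ysize)
{
    const float x1 = x + xsize, y1 = y + ysize;
    const float v[24] = {
        x,  y,  0.0f, 0.0f,   /* triangle 1 */
        x1, y,  1.0f, 0.0f,
        x1, y1, 1.0f, 1.0f,
        x,  y,  0.0f, 0.0f,   /* triangle 2 */
        x1, y1, 1.0f, 1.0f,
        x,  y1, 0.0f, 1.0f,
    };
    memcpy(out, v, sizeof v);
}

A draw_quad(x, y, xsize, ysize, texture) wrapper would then fill such a buffer, bind the texture, and issue a 6-vertex triangle draw, so the rest of the code only ever has to think in quads.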
Not_you: I have tried Unreal Engine 5, and it was surprisingly bad compared to Unreal Engine 4 performance-wise. With the exact same environment (with a lot of foliage) I get 10-13 fps in UE5, while in UE4 I get 60+ fps. This was quite a disappointment, but it is early access to be fair. I did notice, however, that they added loads of features that made the engine easier to use. They kept Linux support, so that's a plus.
WinterSnowfall: Keep in mind it's probably optimized only for latest gen GPUs - not sure what hardware you've tested it on. And in any case, it's just coming out. UE4 was a good iteration of UE to be honest, if we remember the horror shows that UE2 and UE3 turned out to be, so who knows about UE5... Only time will tell.
That is the weird thing: I am using an RTX 2060 and a 4600H, so it's pretty new hardware. But I do agree, this is early access, so hopefully it will be optimized. It might also just be because I used the Linux version, which probably isn't tested that much.
So they still update Unreal 4 as well.

https://www.epicgames.com/site/en-US/news/unreal-engine-4-27-is-now-available