Mark Sibly

Forum Replies Created

Viewing 15 posts - 226 through 240 (of 1,431 total)
  • in reply to: Should a mojo3d app be hogging 100% CPU? #12623

    Mark Sibly
    Keymaster

    The problem does appear to be mojo3d-specific, as I say, 2D stuff doesn’t hog processor.

It does here, though you need to set MOJO_OPENGL_PROFILE to “compatibility” for it to happen. mojo 2d defaults to the es profile.

    Think I’ve found it anyway – there are a bunch of ‘forbidden’ ops that seem to cause bad things to happen, just fixing now.

    in reply to: long hex #12621

    Mark Sibly
    Keymaster

    Currently, you need to cast literals to long if they can’t fit in an int, eg:
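For illustration, a hypothetical sketch of such a cast (the variable name is made up, and Cast&lt;Long&gt; is assumed to be the relevant Monkey2 operator here):

```monkey2
' Hypothetical example: $200000000 doesn't fit in a 32 bit Int,
' so cast an in-range literal up to Long before shifting.
Local mask:Long = Cast<Long>( $20000000 ) Shl 4   ' = $200000000
```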

    More convenient syntax for this will probably happen, eg: $20000000L etc.

    Also, enums are currently 32 bit ints only so that may be a problem… I think attempting to support bitmasks via enums might have been a bit of a whoopsie…

    in reply to: Memory Access violation #12613

    Mark Sibly
    Keymaster

    Monkey2 speed on desktop may or may not improve in future. I’m quite happy with the current speed, so I have no immediate optimization efforts planned. As for the future, I always like to tinker with things, so who knows?

    I recommend sticking with monkey1 if you absolutely need the speed you were getting with that and don’t care about anything else monkey2 has to offer. The Cerberus guys seem to be successfully keeping it alive so it won’t be going anywhere for a while yet.

    in reply to: Should a mojo3d app be hogging 100% CPU? #12612

    Mark Sibly
    Keymaster

    Just confirming the SetConfig/D3D workaround gives 0-1% CPU usage here.

    Yeah, but you’ll lose some performance, particularly with instancing.

    Having another poke around at this now. I was unable to test on bmx as SetGraphicsDriver GLGraphicsDriver() now just crashes, but in monkey1 I can run bouncyaliens for the glfw3 target (ie: no angle) with SetSwapInterval 1/SetUpdateRate 0 and it doesn’t seem to hog CPU. Can you verify?

    And what GPU do you have? driver version?

    I’m on a GeForce GTX 970 + driver v388.31

    [edit]

    In fact, I can run mx2 sdltest with opengl in compatibility mode without CPU problems… wonder if it’s something to do with my new opengl wrapper?

    in reply to: NDK-Build fails on Android #12582

    Mark Sibly
    Keymaster

    I assume you mean 1.1.08? I just quickly tested current develop branch and it’s working so perhaps give that a go?

    Note, sometimes it can be a good idea to delete .build* and .products directories – this performs a full ‘clean’ rebuild (and should really be a build option in the IDE!).
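As a rough sketch, the ‘clean’ step from a project/module root looks something like this (the exact directory names may differ between versions):

```shell
# Remove cached build output to force a full 'clean' rebuild
rm -rf .buildv* .products
```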

    My ndk version is 16.1.4479499 according to android studio.

    in reply to: Emscripten: 'em++' is not recognized #12581

    Mark Sibly
    Keymaster

    Oops, sorry for leaving everyone hanging here – kind of got emscriptened out over the holidays.

    Anyway, wrote a quick blurb, please see below.

    Mark, can you detail what you did to get Emscripten running on Mac?

    Just the usual, ie: ./emsdk install sdk-incoming-64bit

    BUT I also had to make cmake available from the command line. I assume there are a ton of ways to install cmake (homebrew etc..) but I already had cmake GUI installed and there’s a menu option that shows a number of ways to install it for command line use – I created links to binaries or something I think.

    But once emsdk could use cmake, everything just built with no further intervention. There’s also an extra build step the first time you run an app with emscripten but after THAT it should all work smoothly.

    Adventures in Emscripten!

    The idea with emscripten is to install the ’emsdk’ and then use something like:

    emsdk update
    emsdk install latest
    emsdk activate latest

    This is supposed to install the ‘latest’ precompiled sdk and activate it.

    Activating an SDK generally involves creating a config file in the users home dir and setting some env vars and PATHs.

    The SDK doesn’t always deal with adding its env vars and PATHs to your system permanently, so you may have to do this yourself manually. This is helped by emsdk showing you what env vars and PATHs you need to add after you activate a particular SDK.
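For example, the permanent setup lines typically look something like this (the paths here are hypothetical – use the ones emsdk prints for your install):

```shell
# Hypothetical example of making the emsdk env vars permanent (eg: in ~/.profile)
export EMSDK="$HOME/emsdk"
export PATH="$EMSDK:$EMSDK/emscripten/incoming:$PATH"
```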

    However, it may be preferable to build emscripten from source, especially in the case of macos where all precompiled versions seem to be broken. This can be kind of scary but I’ve managed to do it on all 3 targets now – so here’s how I spent my ‘holiday’:

    Windows:

    The ‘latest’ version is currently 1.37.26.

    If you just install/activate this, it works but requires some extra command line parameters when linking apps – please see: bin/env_windows.txt

    I also managed to install sdk-incoming-64bit, which involves compiling from source; this worked fine. I would recommend this if possible as it’s likely to include all the latest fixes etc. You probably need some version of visual studio installed for this…?

    To complete setup, you may also need to tweak some vars and PATHs in the emscripten section of bin/env_windows.txt. Or you can use My Computer->Properties to set these env vars, which allows you to use em from the command line.

    MacOS:

    The ‘latest’ version is currently 1.37.27.

    I was not able to use this or any other precompiled version on macos due to an issue with ‘llvm-ar’. see: https://github.com/kripken/emscripten/issues/5418

    I eventually installed ‘sdk-incoming-64bit’ instead, which involved compiling from source. However, the actual install/activate process is the same.

    This was relatively easy although I needed to make cmake (a popular ‘build helper’ utility) available from the command line, which itself was easy via the cmake GUI. So you’ll need cmake too…

    To complete setup, you may also need to tweak some vars and PATHs in the emscripten section of bin/env_macos.txt. Or if you know more about this stuff than me, you may be able to edit ~/.profile or whatever it is.

    Linux:

    The ‘latest’ version is currently 1.37.27.

    I was not able to use this due to a bunch of what looked like ‘libc’ linking errors. I possibly need a different version of gcc.

    So again, I ended up installing ‘sdk-incoming-64bit’, which was relatively easy although I also needed to upgrade cmake to the latest version first.

    This in turn involved compiling cmake from source too! That itself was relatively easy as long as you read the readme file in the cmake source package.

    Once the cmake version was correct, sdk-incoming-64bit built OK, although it took 6 hours-ish and crashed a few times in the process (pretty sure that was my screensaver though). Still, it always resumed fine after a crash…

    To complete setup, you may also need to tweak some vars and PATHs in the emscripten section of bin/env_linux.txt. Or edit .profile etc…

     

    in reply to: Should a mojo3d app be hogging 100% CPU? #12578

    Mark Sibly
    Keymaster

    Posted a report in the nvidia developer forums so we now have top people working on it – TOP PEOPLE!

    in reply to: Should a mojo3d app be hogging 100% CPU? #12575

    Mark Sibly
    Keymaster

    Actually, after a bit more research I’m not convinced this is a non-issue – I installed HWMonitor, which definitely measures high power usage when opengl drivers are used vs direct3d. Will keep investigating but it may just come down to crappy drivers. You can always force d3d drivers on windows using:
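The elided line was presumably along these lines – a sketch only, assuming mojo’s SetConfig and that the ‘es’ profile goes through ANGLE’s Direct3D backend on Windows:

```monkey2
' Hypothetical sketch: request the es profile, which on Windows
' is implemented via ANGLE on top of Direct3D. Set before the app is created.
SetConfig( "MOJO_OPENGL_PROFILE","es" )
```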

    But the app will run a little slower in some instances.

    Also, it appears that using ‘timer’ timing and SwapInterval 0 also fixes it…

    Sleep( 1 ) ‘worked’ here the first time I tried it because Sleep sleeps for *seconds*, so yes, cpu usage went down!

    in reply to: Should a mojo3d app be hogging 100% CPU? #12574

    Mark Sibly
    Keymaster

    This is a weird one.

    It seems to be caused by using ‘real’ opengl drivers, eg: sticking this at the top of ‘Main’ in any mojo app should cause the problem too (tried: PromptInvasion):
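Something like the following is what’s meant – a hypothetical sketch, assuming SetConfig is called before the app instance is created:

```monkey2
' Hypothetical sketch: request 'real' desktop opengl drivers
' instead of the default es/angle path.
SetConfig( "MOJO_OPENGL_PROFILE","compatibility" )
```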

    The top answer here seems to suggest it may just be a problem with how windows measures cpu usage and how opengl drivers work:

    https://stackoverflow.com/questions/5829881/avoid-waiting-on-swapbuffers

    SwapBuffers is not busy waiting, it just blocks your thread in the driver context, which makes Windows calculating the CPU usage wrongly: Windows calculates the CPU usage by determining how much CPU time the idle process gets + how much time programs don’t spend in driver context.

    I’m a bit out of my depth here, but I read this as saying windows doesn’t count time spent in ‘driver context’ (some kind of privileged mode used by drivers where they can poke HW regs etc I assume) when calculating cpu usage, even if the app is blocked and ‘switchable’ while in driver context (so not really busy).

    I did try Sleep( 0 ) and Sleep( 1 ) after SwapBuffers as suggested above. Sleep( 0 ) had no effect, but Sleep( 1 ) did appear to fix the cpu usage issue – however, it also seemed to disable all user input for reasons I can’t begin to imagine.

    I’m going to leave this for now – if anyone wants to look into it further please do and let me know how you get along, but IMO the above post is correct and the driver is in fact blocked and NOT consuming all that CPU (thanks to the Sleep(1) test), it’s just that Windows can’t measure it.

    in reply to: [Solved] 2D Lighting on Linux broken (v1.1.09 dev branch). #12561

    Mark Sibly
    Keymaster

    Hi,

    Just checked, and there was a little ‘issue’ with shadows in develop (and in v1.1.08 actually) where the light was always at a fixed point when drawing shadows – I must’ve been testing something. Fixed now in the develop branch, and apart from that all appears to be working as usual, so I’m not sure what’s up with your version. Have you rebuilt everything, updated drivers etc? My linux setup has a choice of open source/proprietary drivers – perhaps try changing this?

    Trying v1.1.09rc1 next. Which branch should I be working with these days?

    For what you’re doing, the latest itch.io ‘releases’ might be the way to go. Release candidates should generally be OK too.

    in reply to: Compiling problem report #12532

    Mark Sibly
    Keymaster

    I can only assume he/she is somehow using a ‘wrong’ version of mingw or something…not much I can do without more info.

    in reply to: Fog improvement suggestion #12531

    Mark Sibly
    Keymaster

    Good point – can you make an issue please?

    in reply to: Glitchy 3D #12530

    Mark Sibly
    Keymaster

    All working fine here, although very green… I should have a slightly better template for mojo3d in there…

    I have an nvidia gfx card though. Can anyone confirm this happens on nvidia too, or is it ATI only?

    Can someone try canvas.Flush() and/or opengl.glFlush() before/after _scene.Render()?

    Resize behavior is the same here on both develop and 1.1.08 releases, but this is all getting messed with at the moment so don’t worry about it (yet).

    in reply to: How does AddVertices work? #12521

    Mark Sibly
    Keymaster

    You need to AddTriangles too – all meshes are currently indexed so you will need to add the triangle indices for any newly added vertices.

    In this case though, simply going “model.Mesh.AddTriangles( mesh.GetIndices() )” won’t work because you’ll just be adding the same [0,1,2] – you’ll need to be trickier.
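Being ‘trickier’ here means rebasing the copied indices by the vertex count the mesh had before the append; a hypothetical sketch (GetVertices and NumVertices are assumed API names):

```monkey2
' Hypothetical sketch: append another mesh's triangles with rebased indices
Local base:=model.Mesh.NumVertices           ' vertex count before the append
model.Mesh.AddVertices( mesh.GetVertices() )
Local indices:=mesh.GetIndices()
For Local i:=0 Until indices.Length
    indices[i]+=base
Next
model.Mesh.AddTriangles( indices )
```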

    [edit] I just added AddMesh and it appears to work, ie: instead of AddVertices use: model.Mesh.AddMesh( mesh ), which will add vertices and ‘relocated’ indices – draws a quad here anyway.

    You’ll have to be a bit cleverer with the model.Materials array too – you’ll need to ‘grow’ this for each added material, in fact maybe it should just be a stack?

    Anyway, here’re my tweaks:

    The future of Model.Material is still up in the air, I may yet rip it out and just leave Model.Materials.

    UpdateNormals/Tangents aren’t really needed as AddMesh doesn’t merge vertices so it won’t affect existing normals/tangents. Eventually though, there’ll probably be an Optimize() method that will merge vertices, and you’ll need to update normals/tangents after that.

    Note the Mesh class is really designed for ‘offline’ stuff. In realtime, it’ll be much faster to directly use VertexBuffers and IndexBuffers and to override OnRender.

    in reply to: Emscripten: 'em++' is not recognized #12488

    Mark Sibly
    Keymaster

    I think for wasm you need 1.37.22 or later – might have to google that, but if it can’t find the BINARYEN option wasm definitely isn’t supported.

    If possible, I would recommend trying to get sdk-incoming-64bit installed. The incoming releases seem to have fixed a lot of issues/bugs – I hope they release them in precompiled form ASAP.

    This may be a little trickier because you need to build it. However, I’ve managed to do this OK on both windows and macos so far, and will try linux today. The process is practically the same, ie: ‘./emsdk install sdk-incoming-64bit’ – only it takes a lot longer to install because it has to build it all first!
