LOL, I was even able to make #GPUSPH buildable and runnable on #GNU #Hurd (just need to fence off a couple of unavailable headers and functions). Not sure how useful an #HPC program is on an OS that doesn't even fully support 64-bit processors though. I confess to having done this purely for #nerd credits.
The reason I decided to do this isn't so much to expand OS support in #GPUSPH (although that's always a nice bonus), but as a #learning opportunity for myself. I may do a write-up on this some time in the future.
(FWIW, so far the #FreeBSD experience “feels” way more friendly than that of other #BSD OSes.)
In the end I have reinstalled the #FreeBSD #VM and created new ones for #NetBSD and #OpenBSD. I was able to confirm that thread pinning works on #FreeBSD (so there was something wrong with the previous VM), added support for the #NetBSD way of doing it (even though it's normally not allowed for ordinary users), and implemented the multiple forms of the “set thread name” API across the #BSD family.
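For the curious, a sketch of what such a wrapper can look like. This is a hypothetical illustration with made-up names, not GPUSPH's actual code; the glibc/Linux branch is included as the default case:

```c
#define _GNU_SOURCE /* for pthread_setname_np() on glibc */
#include <pthread.h>
#if defined(__FreeBSD__) || defined(__OpenBSD__)
#include <pthread_np.h> /* pthread_set_name_np() lives here */
#endif

/* Hypothetical portable wrapper: each OS family exposes a slightly
 * different "set thread name" call. */
static int set_thread_name(pthread_t tid, const char *name)
{
#if defined(__FreeBSD__) || defined(__OpenBSD__)
	/* FreeBSD/OpenBSD: pthread_set_name_np(), returns void */
	pthread_set_name_np(tid, name);
	return 0;
#elif defined(__NetBSD__)
	/* NetBSD: pthread_setname_np() takes a printf-like format string */
	return pthread_setname_np(tid, "%s", (void *)name);
#else
	/* glibc (Linux): pthread_setname_np(), name limited to 15 chars + NUL */
	return pthread_setname_np(tid, name);
#endif
}
```

And #macOS adds yet another variant: there pthread_setname_np() takes only the name and applies to the calling thread.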
Well, today I got the good news from someone running #FreeBSD on bare metal that #GPUSPH does _not_ lock up for them. So maybe the lock-up I'm experiencing _is_ related to running in a VM (or some other condition; CPU model?).
Of course now I want to try adding support for other #BSD OSes (at least #NetBSD and #OpenBSD). But I'm too lazy to set up VMs for them too. Does #askfediverse have recommendations on how I could do it instead?
Adding #FreeBSD support to #GPUSPH has been an interesting exercise, and a low-effort one too (possibly because of our existing support for #macOS and #Android, that helped weed out a number of #GNU-isms).
We do still have an issue with the program locking up HARD (it locks up any pre- or post-attached debugger, and even prevents a clean shutdown) when using thread affinity or #OpenMP. I wonder if it's an issue with running in a VM, or something else.
Although the focus of the paper is on #SmoothedParticleHydrodynamics, and the simulation of viscous fluids, the key finding is of more general interest: the numerical stability of #BiCGSTAB can be improved by rewriting the standard formulation https://en.wikipedia.org/wiki/Biconjugate_gradient_stabilized_method to avoid catastrophic cancellation in the computation of some important coefficients. Anyone interested in solving large linear systems would benefit from adopting the proposed alternative form of the method.
I'm still preparing my post series on #GPGPU, but in the meantime you can read a bit about it in our most recently published paper:
“A numerically robust, parallel-friendly variant of #BiCGSTAB for the semi-implicit integration of the viscous term in #SmoothedParticleHydrodynamics”, freely accessible for the next 50 days from this link:
Sorry for being away for more than a month! The #SPHERIC2022 workshop has been a wonderful experience, but intense and energy-draining, with barely any time to recover afterwards.
I'm still alive (woohoo!) and losing my mind as I rush to finish all the things that are still needed before June 6 ...
One thing is for sure: after this I will have a redoubled appreciation for the people organizing events, and for the amount of work that goes into making sure that participants have the smoothest experience possible.
I started writing the first #SpaceTalkTuesday thread about planetary habitability, but quickly realized you all need some background on how we *find* planets first!
So sorry to everyone who voted for habitability, but we’re doing HOW TO FIND AN EXOPLANET 🔭 today!
I promise this will make the habitability thread next week make even more sense (1/)
For a more hands-on learning experience, the upcoming 16th #SPHERIC International Workshop <https://www.spheric2022.it> offers a #TrainingDay fully dedicated to learning the basics of #SmoothedParticleHydrodynamics, from the theory to practical examples with a #FreeSoftware #OpenSource implementation.
The community of researchers and industrial users of #SmoothedParticleHydrodynamics is represented by #SPHERIC, an #ERCOFTAC #SIG with the objective of fostering the development of the method and its adoption <https://spheric-sph.org>.
#SPHERIC defined 5 #GrandChallenges for #SPH:
GC1: #convergence, #consistency and #stability
GC2: Boundary conditions
GC3: Adaptivity
GC4: Coupling to other models
GC5: Applicability to #industry
#SmoothedParticleHydrodynamics was originally developed for #astrophysics (modelling star formation), but has expanded to #NavalArchitecture, #OceanEngineering, #waterworks, #aeronautics, #geology, #medicine, just to name a few.
It has caught the attention of several industries, with applications ranging from the design of engines, tires and windshield wipers to the development of wave energy converters, ship-locks, fish-passes and dam spillways.
These two properties give #SPH some advantages over more traditional mesh-based methods (finite differences, finite elements, finite volumes), such as automatic mass conservation and the natural (often implicit) handling of interfaces, large deformations and fragmentation.
In addition, the standard weakly-compressible formulation is embarrassingly parallel in nature, making it fit for implementation on high-performance parallel computing hardware. #GPGPU in particular has been a boon for SPH.
Lagrangian: the computational elements (“particles”) move following (a discretized version of) the equations of motion (typically, the Euler or Navier–Stokes equations in the #CFD case).
Meshless: the computational elements are _disconnected_. There is no reference grid or mesh ‘connecting’ the particles: particles interact when they are within some prescribed (possibly non-constant) influence radius.
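To make the “disconnected particles” idea concrete, here's a minimal 1-D density summation sketch of my own (a standard cubic spline kernel; illustrative only, not GPUSPH code). Note that no mesh or connectivity appears anywhere: a particle contributes simply because it lies within the influence radius.

```c
#include <math.h>
#include <stddef.h>

/* Cubic spline kernel in 1-D, support radius 2h
 * (standard normalization sigma = 2/(3h)). */
static double cubic_spline_1d(double r, double h)
{
	double q = r / h, sigma = 2.0 / (3.0 * h);
	if (q < 1.0) return sigma * (1.0 - 1.5 * q * q + 0.75 * q * q * q);
	if (q < 2.0) { double t = 2.0 - q; return sigma * 0.25 * t * t * t; }
	return 0.0;
}

/* Density at particle i: rho_i = sum_j m_j W(|x_i - x_j|, h).
 * Brute-force neighbour search; real codes use cell lists or similar. */
static double sph_density(const double *x, const double *m, size_t n,
                          size_t i, double h)
{
	double rho = 0.0;
	for (size_t j = 0; j < n; ++j) {
		double r = fabs(x[i] - x[j]);
		if (r < 2.0 * h) /* influence radius: 2h */
			rho += m[j] * cubic_spline_1d(r, h);
	}
	return rho;
}
```

As a sanity check: with uniform spacing dx = h and particle mass rho0·dx, an interior particle recovers the reference density rho0 to rounding error with this kernel.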
Why should we talk about it? Because it's relatively less known than other numerical methods, possibly undeservingly so, and because I love it.