I am running CentOS 7, and a developer is writing something that takes in data and processes it. It is a lot of data, filling up 128 GB every few hours.
The setup I have to work with is TBs of storage exported over Samba from a Windows machine, a 128 GB local disk, and 48 GB of memory. The application is more CPU-intensive than memory-intensive.
The developers do not have time to revisit and recode this project, so I was hoping there is some kind of tiered pseudo-filesystem I can use that writes first to tmpfs, then to the local disk, then to the Samba share as space fills up.
Does something like this exist, or is there an easier way to present this to them without scripts moving crap around?
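
To make it concrete, this is roughly what I'm picturing. It's just a sketch assuming something like mergerfs is installed (not sure it's the right tool for this), and the mount points, share name, and sizes are made up:

    # hypothetical mount points - adjust to whatever makes sense
    mkdir -p /mnt/tier-ram /mnt/tier-disk /mnt/tier-smb /data

    # fast RAM-backed tier, leaving headroom out of the 48 GB
    mount -t tmpfs -o size=24G tmpfs /mnt/tier-ram

    # /mnt/tier-disk would be the 128 GB local disk

    # the existing Samba export from the Windows box
    mount -t cifs //winserver/bigshare /mnt/tier-smb -o credentials=/root/.smbcred

    # union the three tiers; the "ff" create policy writes each new file to the
    # first branch (in order) with enough free space, so writes spill over
    # from RAM -> local disk -> SMB as the faster tiers fill up
    mergerfs -o defaults,allow_other,category.create=ff,minfreespace=8G \
        /mnt/tier-ram:/mnt/tier-disk:/mnt/tier-smb /data

As far as I understand, a union mount like this only routes new writes and does not migrate existing files down to the slower tiers, so older data would still have to be flushed out somehow, which might be exactly the script-shuffling I'm trying to avoid. Happy to hear if there's a better approach.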