== Some thoughts on tradeoffs between storage models

To stereotype, there are two models of operating your storage: [[long term storage PainlessLongtermStorage]], where you evolve your storage over time without the users noticing, and [[run it into the ground storage LongtermStorageArrogance]], where you just buy the best thing of the moment and replace it when it's too outdated. So what are the tradeoffs between the storage models, or, in other words, when should you pick one or the other?

My thoughts so far on this are that there are a number of issues:

* what sort of downtime can you afford when you upgrade your storage? The run it into the ground model generally requires that you can live with user-visible changes and potentially significant downtime to move data from one storage technology to another.

* what sort of expansion do you need? In particular, do you need to expand existing things, or can you get away with adding new things? The run it into the ground model is often not so great at expanding existing storage; once you max out a unit, that's generally it. The long term model generally has more ability to expand existing storage on existing servers. (Better tools for transparent data migration would help a lot here.)

* how much work is it to add a new server to your environment? This matters because significant expansion in a run it into the ground model means adding more servers, even if you don't migrate existing data.

My feeling is that long term storage trades off extra expense for less user pain in the future; however, a lot depends on how your data is organized and how it expands. (To put it one way, do people's home directories grow endlessly, or do they start working with bigger and bigger datasets over time? I suspect that most environments have a mix of both.)