mysqldump - How to repack logfiles with Git efficiently
I have several files on a server that grow permanently (logs, db-dumps, ...), and I want to keep a full backup history of them in a Git repository.
Simple diffs between two states are pretty small, and so are the files themselves, yet somehow the Git repository has become large: 100 GB by now.
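For context, the snapshot step looks roughly like this (a minimal sketch; the paths are placeholders, not the real ones):

    # runs periodically, e.g. from cron; paths are placeholders
    cd /backup/repo
    cp /var/log/app.log .
    cp /var/backups/db-dump.sql .
    git add -A
    git commit -m "snapshot $(date -Iseconds)"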
I found the repack command, but it runs for hours and I can't see how far it has got. Can I somehow see what percentage has been processed?
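From what I can tell, git repack delegates the heavy lifting to git pack-objects, which writes percentage meters to stderr, but only when stderr is a terminal, so they disappear when the output is redirected into a log file:

    # run in an interactive terminal to see the meters;
    # "Compressing objects" is the long delta-search phase
    git repack -a -f -d
    # output looks roughly like:
    #   Counting objects: 100% (2195511/2195511), done.
    #   Compressing objects:  37% (812339/2195511)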
Also, in the beginning there were repack problems, so I found the following parameters, but I am not sure whether the values fit my use case:
    git repack -a -f -d --window-memory=400m --max-pack-size=400m --depth=100 --window=100
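My understanding of these so far: --window is how many neighbouring objects each object is compared against, --depth caps the delta-chain length, --window-memory caps the memory the window may use per thread, and --max-pack-size only splits the output into several packs of that size rather than capping the total. The same values can be made persistent via git config (these keys are documented in git-config(1)), so a plain git gc picks them up too:

    git config pack.window 100         # delta-search candidates per object
    git config pack.depth 100          # max delta-chain length
    git config pack.windowMemory 400m  # window memory cap per thread
    git config pack.packSizeLimit 400m # size of each pack file, not the total
    git repack -a -f -d                # now uses the settings above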
Is it possible to configure Git to:
.. work on each file individually? I read that Git sorts files by size and type and, if they are close, computes diffs between them independent of the filename. Can that be disabled so that only the same file is checked? The files are never renamed. I thought about making one Git repository per file, but that would be a lot of repositories. Is that usual? (See the .gitattributes sketch after this list.)
.. compare each new file only against the last 2-3 versions of the same file? Or only against the last one? Since new content is only appended, comparing against earlier versions doesn't make sense. (For the db-dumps, where data also changes in the middle, it does.) The files are not that large, so I would prefer not to split them.
.. define the number of commits after which files are stored in full again instead of as diffs, like I-frames and P-frames in MPEG? (The repack sketch after this list covers this and the previous point.)
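Regarding working on each file individually: the closest knob I have found is the delta attribute in gitattributes(5). Unsetting it excludes matching paths from the delta search completely, so they are stored as whole objects; that is coarser than "only diff against the same file", but it bounds the repack work for the worst offenders:

    # .gitattributes - exclude the dumps from delta search entirely
    # (bigger packs, much faster repack for these paths)
    *.sql  -delta
    *.log  -delta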
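Regarding the last two points, --window and --depth seem to approximate this already: objects are sorted so that versions of the same path usually land next to each other, --window says how many of those neighbours each object is compared against, and --depth is effectively the I-frame interval, i.e. how many deltas may chain up before a full copy is stored again. A sketch with hypothetical small values:

    # compare against few neighbours, force a full base object
    # after at most 10 chained deltas
    git repack -a -d --window=3 --depth=10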
Are the parameters I use good? Are there others I should use?
The final packed size is not as important as efficiency: if it creates 2 GB instead of 1 GB, that is fine (or is --max-pack-size limiting it to 400 MB?!?), but it should not run for 10 hours if it could be done in 10 minutes.
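One thing I noticed in the git-repack man page: -f passes --no-reuse-delta to git pack-objects, i.e. it throws away all existing deltas and recomputes everything from scratch, which is presumably where most of the hours go. Leaving it out lets Git reuse the deltas it already has, so only new objects get the expensive search:

    # incremental-friendly repack: reuse existing deltas
    # (10 and 50 are the documented defaults for window and depth)
    git repack -a -d --window=10 --depth=50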
Every hint is appreciated!