More renaming
README.md
@@ -21,7 +21,7 @@ which stores how far through the `state_groups` table the compressor has scanned
The tool can be run manually when you are running out of space, or be scheduled to run
periodically.

## Building

This tool requires `cargo` to be installed. See https://www.rust-lang.org/tools/install
for instructions on how to do this.
@@ -30,7 +30,7 @@ To build `synapse_auto_compressor`, clone this repository and navigate to the
`synapse_auto_compressor/` subdirectory. Then execute `cargo build`.

This will create an executable and store it in
-`synapse_auto_compressor/target/debug/auto_compressor`.
+`synapse_auto_compressor/target/debug/synapse_auto_compressor`.

## Example usage
```
@@ -38,25 +38,25 @@ $ synapse_auto_compressor -p postgresql://user:pass@localhost/synapse -c 500 -n
```
## Running Options

- -p [POSTGRES_LOCATION] **Required**
The configuration for connecting to the Postgres database. This should be of the form
`"postgresql://username:password@mydomain.com/database"` or a key-value pair
string: `"user=username password=password dbname=database host=mydomain.com"`
See https://docs.rs/tokio-postgres/0.7.2/tokio_postgres/config/struct.Config.html
for the full details.

- -c [CHUNK_SIZE] **Required**
The number of state groups to work on at once. All of the entries from state_groups_state are
requested from the database for state groups that are worked on. Therefore small chunk
sizes may be needed on machines with low memory. Note: if the compressor fails to find
space savings on the chunk as a whole (which may well happen in rooms with lots of backfill
in) then the entire chunk is skipped.

- -n [CHUNKS_TO_COMPRESS] **Required**
*CHUNKS_TO_COMPRESS* chunks of size *CHUNK_SIZE* will be compressed. The higher this
number is set to, the longer the compressor will run for.

- -d [LEVELS]
Sizes of each new level in the compression algorithm, as a comma-separated list.
The first entry in the list is for the lowest, most granular level, with each
subsequent entry being for the next highest level. The number of entries in the
@@ -68,14 +68,14 @@ given set of state. [defaults to "100,50,25"]
## Scheduling the compressor
The automatic tool may put some strain on the database, so it might be best to schedule
it to run at a quiet time for the server. This could be done by creating an executable
script and scheduling it with something like
[cron](https://www.man7.org/linux/man-pages/man1/crontab.1.html).

# Manual tool: synapse_compress_state

## Introduction

A manual tool that reads in the rows from `state_groups_state` and `state_group_edges`
tables for a specified room and calculates the changes that could be made that
(hopefully) will significantly reduce the number of rows.

@@ -86,7 +86,7 @@ that if `-t` is given then each change to a particular state group is wrapped
in a transaction). If you do wish to send the changes to the database automatically
then the `-c` flag can be set.

The SQL generated is safe to apply against the database with Synapse running.
This is because the `state_groups` and `state_groups_state` tables are append-only:
once written to the database, they are never modified. There is therefore no danger
of a modification racing against a running Synapse. Further, this script makes its
@@ -96,7 +96,7 @@ from any of the queries that Synapse performs.
The tool will also ensure that the generated state deltas do give the same state
as the existing state deltas before generating any SQL.

## Building

This tool requires `cargo` to be installed. See https://www.rust-lang.org/tools/install
for instructions on how to do this.
@@ -126,54 +126,54 @@ $ psql synapse < out.data

## Running Options

- -p [POSTGRES_LOCATION] **Required**
The configuration for connecting to the Postgres database. This should be of the form
`"postgresql://username:password@mydomain.com/database"` or a key-value pair
string: `"user=username password=password dbname=database host=mydomain.com"`
See https://docs.rs/tokio-postgres/0.7.2/tokio_postgres/config/struct.Config.html
for the full details (a short parsing sketch follows this list of options).

- -r [ROOM_ID] **Required**
The room to process (this is the value found in the `rooms` table of the database,
not the common name for the room). It should look like: "!wOlkWNmgkAZFxbTaqj:matrix.org".

- -b [MIN_STATE_GROUP]
The state group to start processing from (non-inclusive).

- -n [GROUPS_TO_COMPRESS]
How many groups to load into memory to compress (starting
from the 1st group in the room or the group specified by -b).

- -l [LEVELS]
Sizes of each new level in the compression algorithm, as a comma-separated list.
The first entry in the list is for the lowest, most granular level, with each
subsequent entry being for the next highest level. The number of entries in the
list determines the number of levels that will be used. The sum of the sizes of
the levels affects the performance of fetching the state from the database, as the
sum of the sizes is the upper bound on the number of iterations needed to fetch a
given set of state. [defaults to "100,50,25"]

- -m [COUNT]
If the compressor cannot save this many rows from the database then it will stop early.

- -s [MAX_STATE_GROUP]
If a max_state_group is specified then only state groups with ids lower than this
number can be compressed.

- -o [FILE]
File to output the SQL transactions to (for later running on the database).

- -t
If this flag is set then each change to a particular state group is wrapped in a
transaction. This should be done if you wish to apply the changes while synapse is
still running.

- -c
If this flag is set then the changes the compressor makes will be committed to the
database. This should be safe to use while synapse is running as it wraps the changes
to every state group in its own transaction (as if the transaction flag was set).

- -g
If this flag is set then output the node and edge information for the state_group
directed graph built up from the predecessor state_group links. These can be looked
at in something like Gephi (https://gephi.org).
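
The `-p` value above is handed to tokio-postgres, so both connection-string formats describe the same configuration. A minimal parsing sketch, assuming only the `tokio-postgres` crate that the README links to (this code is not part of the commit):

```rust
use tokio_postgres::Config;

fn main() -> Result<(), tokio_postgres::Error> {
    // URL-style connection string, as in the README's example
    let url: Config = "postgresql://username:password@mydomain.com/database".parse()?;
    // Equivalent key-value form
    let kv: Config = "user=username password=password dbname=database host=mydomain.com".parse()?;

    // Both forms end up with the same user and database name
    assert_eq!(url.get_user(), kv.get_user());
    assert_eq!(url.get_dbname(), kv.get_dbname());
    Ok(())
}
```
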
@@ -197,7 +197,7 @@ $ docker-compose down
# Using the synapse_compress_state library

If you want to use the compressor in another project, it is recommended that you
use jemalloc (`https://github.com/gnzlbg/jemallocator`).

To prevent the progress bars from being shown, use the `no-progress-bars` feature.
(See `synapse_auto_compressor/Cargo.toml` for an example)
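
The jemalloc recommendation above boils down to setting the global allocator in the downstream crate. A minimal sketch, assuming the `jemallocator` crate is added as a dependency (not part of this commit):

```rust
use jemallocator::Jemalloc;

// Route all allocations in the binary through jemalloc, as the README suggests
// for projects that embed the compressor.
#[global_allocator]
static GLOBAL: Jemalloc = Jemalloc;

fn main() {
    // ... call into synapse_compress_state as usual ...
}
```
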
@@ -217,14 +217,14 @@ from the machine where Postgres is running, the url will be the following:
### From remote machine

If you wish to connect from a different machine, you'll need to edit your Postgres settings to allow
remote connections. This requires updating the
[`pg_hba.conf`](https://www.postgresql.org/docs/current/auth-pg-hba-conf.html) and the `listen_addresses`
setting in [`postgresql.conf`](https://www.postgresql.org/docs/current/runtime-config-connection.html).

## Printing debugging logs

The amount of output the tools produce can be altered by setting the RUST_LOG
environment variable.

To get more logs when running the synapse_auto_compressor tool, try the following:

@@ -232,14 +232,14 @@ To get more logs when running the synapse_auto_compressor tool try the following
$ RUST_LOG=debug synapse_auto_compressor -p postgresql://user:pass@localhost/synapse -c 50 -n 100
```

If you want to suppress all the debugging info you are getting from the
Postgres client then try:

```
RUST_LOG=synapse_auto_compressor=debug,synapse_compress_state=debug synapse_auto_compressor [etc.]
```

This will only print the debugging information from those two packages. For more info see
https://docs.rs/env_logger/0.9.0/env_logger/.

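For comparison, the RUST_LOG value above corresponds roughly to the following programmatic filter setup; this is an illustration using the env_logger crate, not code from this repository:

```rust
fn main() {
    // Programmatic equivalent of
    // RUST_LOG=synapse_auto_compressor=debug,synapse_compress_state=debug
    env_logger::Builder::new()
        .parse_filters("synapse_auto_compressor=debug,synapse_compress_state=debug")
        .init();
    // Debug and above is now printed for the two listed crates; everything else
    // falls back to env_logger's default level.
}
```
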
## Building difficulties
@@ -249,7 +249,7 @@ and building on Linux will also require `pkg-config`

This can be done on Ubuntu with: `$ apt-get install libssl-dev pkg-config`

Note that building requires quite a lot of memory and out-of-memory errors might not be
obvious. It's recommended you only build these tools on machines with at least 2GB of RAM.

## Auto Compressor skips chunks when running on already compressed room
@@ -267,7 +267,7 @@ be a large problem.
## Compressor is trying to increase the number of rows

Backfilling can lead to issues with compression. The synapse_auto_compressor will
skip chunks it can't reduce the size of, and so this should help jump over the backfilled
state_groups. Lots of state resolution might also impact the ability to use the compressor.

To examine the state_group hierarchy, run the manual tool on a room with the `-g` option
@@ -179,7 +179,7 @@ fn collapse_state_with_database(state_group: i64) -> StateMap<Atom> {
// the predecessor (so have split this into a different query)
let query_pred = r#"
SELECT prev_state_group
FROM state_group_edges
WHERE state_group = $1
"#;

@@ -243,7 +243,7 @@ pub fn database_structure_matches_map(state_group_map: &BTreeMap<i64, StateGroup
// the predecessor (so have split this into a different query)
let query_pred = r#"
SELECT prev_state_group
FROM state_group_edges
WHERE state_group = $1
"#;

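Both hunks above carry the same predecessor lookup. As a rough illustration of how a query like this is typically executed with the synchronous `postgres` crate (the helper below uses assumed names and is not taken from this commit):

```rust
// Illustrative helper: fetch the predecessor of a state group using the query
// shown in the hunks above (assumes an already-connected postgres::Client).
fn get_prev_state_group(
    client: &mut postgres::Client,
    state_group: i64,
) -> Result<Option<i64>, postgres::Error> {
    let query_pred = r#"
        SELECT prev_state_group
        FROM state_group_edges
        WHERE state_group = $1
    "#;
    let rows = client.query(query_pred, &[&state_group])?;
    Ok(rows.first().map(|row| row.get("prev_state_group")))
}
```
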
@@ -356,7 +356,7 @@ fn functions_are_self_consistent() {
}

pub fn setup_logger() {
-// setup the logger for the auto_compressor
+// setup the logger for the synapse_auto_compressor
// The default can be overwritten with RUST_LOG
// see the README for more information
if env::var("RUST_LOG").is_err() {
@@ -366,7 +366,7 @@ pub fn setup_logger() {
// default to printing the debug information for both packages being tested
// (Note that just setting the global level to debug will log every sql transaction)
log_builder.filter_module("synapse_compress_state", LevelFilter::Debug);
-log_builder.filter_module("auto_compressor", LevelFilter::Debug);
+log_builder.filter_module("synapse_auto_compressor", LevelFilter::Debug);
// use try_init() incase the logger has been setup by some previous test
let _ = log_builder.try_init();
} else {
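
Pieced together from the visible lines, the surrounding `setup_logger` is roughly an env_logger `Builder` with per-module filters; the sketch below fills the gaps with assumptions and is not the file's exact code:

```rust
use env_logger::Builder;
use log::LevelFilter;
use std::env;

pub fn setup_logger() {
    // Only install a default filter when the user hasn't set RUST_LOG themselves.
    if env::var("RUST_LOG").is_err() {
        let mut log_builder = Builder::new();
        log_builder.filter_module("synapse_compress_state", LevelFilter::Debug);
        log_builder.filter_module("synapse_auto_compressor", LevelFilter::Debug);
        // try_init() so a logger installed by an earlier test doesn't cause a panic.
        let _ = log_builder.try_init();
    } else {
        // Respect whatever RUST_LOG asked for.
        let _ = Builder::from_default_env().try_init();
    }
}
```
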
@@ -61,11 +61,12 @@ fn synapse_auto_compressor(_py: Python, m: &PyModule) -> PyResult<()> {
let _ = pyo3_log::Logger::default()
// don't send out anything lower than a warning from other crates
.filter(LevelFilter::Warn)
-// don't log warnings from synapse_compress_state, the auto_compressor handles these
-// situations and provides better log messages
+// don't log warnings from synapse_compress_state, the
+// synapse_auto_compressor handles these situations and provides better
+// log messages
.filter_target("synapse_compress_state".to_owned(), LevelFilter::Error)
-// log info and above for the auto_compressor
-.filter_target("auto_compressor".to_owned(), LevelFilter::Debug)
+// log info and above for the synapse_auto_compressor
+.filter_target("synapse_auto_compressor".to_owned(), LevelFilter::Debug)
.install();
// ensure any panics produce error messages in the log
log_panics::init();
@@ -92,7 +93,7 @@ fn synapse_auto_compressor(_py: Python, m: &PyModule) -> PyResult<()> {
number_of_chunks: i64,
) -> PyResult<()> {
// Announce the start of the program to the logs
-log::info!("auto_compressor started");
+log::info!("synapse_auto_compressor started");

// Parse the default_level string into a LevelInfo struct
let default_levels: LevelInfo = match default_levels.parse() {
@@ -120,7 +121,7 @@ fn synapse_auto_compressor(_py: Python, m: &PyModule) -> PyResult<()> {
return Err(PyErr::new::<PyRuntimeError, _>(format!("{:?}", e)));
}

-log::info!("auto_compressor finished");
+log::info!("synapse_auto_compressor finished");
Ok(())
}
Ok(())
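
The hunk headers above come from the crate's PyO3 bindings. A bare-bones sketch of that module shape; the inner function's name and argument list here are hypothetical, only the `#[pymodule]` signature is taken from the diff:

```rust
use pyo3::prelude::*;
use pyo3::wrap_pyfunction;

// Hypothetical exposed entry point -- the real name and full signature are not
// visible in this diff beyond `number_of_chunks: i64` and `PyResult<()>`.
#[pyfunction]
fn run_compression(
    db_url: String,
    chunk_size: i64,
    default_levels: String,
    number_of_chunks: i64,
) -> PyResult<()> {
    let _ = (db_url, chunk_size, default_levels, number_of_chunks);
    Ok(())
}

#[pymodule]
fn synapse_auto_compressor(_py: Python, m: &PyModule) -> PyResult<()> {
    m.add_function(wrap_pyfunction!(run_compression, m)?)?;
    Ok(())
}
```
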
@@ -26,13 +26,13 @@ use synapse_auto_compressor::{manager, state_saving, LevelInfo};

/// Execution starts here
fn main() {
-// setup the logger for the auto_compressor
+// setup the logger for the synapse_auto_compressor
// The default can be overwritten with RUST_LOG
// see the README for more information
let log_file = OpenOptions::new()
.append(true)
.create(true)
-.open("auto_compressor.log")
+.open("synapse_auto_compressor.log")
.unwrap_or_else(|e| panic!("Error occured while opening the log file: {}", e));

if env::var("RUST_LOG").is_err() {
@@ -41,8 +41,8 @@ fn main() {
log_builder.filter_module("panic", LevelFilter::Error);
// Only output errors from the synapse_compress state library
log_builder.filter_module("synapse_compress_state", LevelFilter::Error);
-// Output log levels info and above from auto_compressor
-log_builder.filter_module("auto_compressor", LevelFilter::Info);
+// Output log levels info and above from synapse_auto_compressor
+log_builder.filter_module("synapse_auto_compressor", LevelFilter::Info);
log_builder.init();
} else {
// If RUST_LOG was set then use that
@@ -54,7 +54,7 @@ fn main() {
}
log_panics::init();
// Announce the start of the program to the logs
-log::info!("auto_compressor started");
+log::info!("synapse_auto_compressor started");

// parse the command line arguments using the clap crate
let arguments = App::new(crate_name!())
@@ -113,7 +113,7 @@ fn main() {
Arg::with_name("number_of_chunks")
.short("n")
.value_name("CHUNKS_TO_COMPRESS")
.help("The number of chunks to compress")
.long_help(concat!(
"This many chunks of the database will be compressed. The higher this number is set to, ",
"the longer the compressor will run for."
@@ -155,5 +155,5 @@ fn main() {
manager::compress_chunks_of_database(db_url, chunk_size, &default_levels.0, number_of_chunks)
.unwrap();

-log::info!("auto_compressor finished");
+log::info!("synapse_auto_compressor finished");
}
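
The final hunks touch the clap argument definitions in `fn main`. As a self-contained sketch of that clap 2-style pattern, including reading the value back out (assumed code, not this file's actual contents):

```rust
use clap::{App, Arg};

fn main() {
    // Mirrors the Arg shown in the diff; the rest of the App definition is assumed.
    let arguments = App::new("synapse_auto_compressor")
        .arg(
            Arg::with_name("number_of_chunks")
                .short("n")
                .value_name("CHUNKS_TO_COMPRESS")
                .help("The number of chunks to compress")
                .takes_value(true)
                .required(true),
        )
        .get_matches();

    let number_of_chunks: i64 = arguments
        .value_of("number_of_chunks")
        .expect("-n is required")
        .parse()
        .expect("CHUNKS_TO_COMPRESS must be an integer");

    println!("would compress {} chunks of the database", number_of_chunks);
}
```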