Apache Cassandra | Apache Cassandra Documentation (2024)

Docker

The Docker approach is recommended for running Python distributed tests. The behavior will be more repeatable, matching the same environment as the official testing on Cassandra CI.

Setup Docker

If you are on Linux, you need to install Docker using the system package manager.

If you are on MacOS, you can use either Docker Desktop or some other approach.

Pull the Docker image

The Docker image used on the official Cassandra CI can be found in this repository. You can use either docker/testing/ubuntu2004_j11.docker or docker/testing/ubuntu2004_j11_w_dependencies.docker. The second choice has prefetched dependencies for building each main Cassandra branch. Those images can be either built locally (as per the instructions in the GitHub repo) or pulled from Docker Hub.

First, pull the image from Docker Hub (it will either fetch or update the image you previously fetched):

docker pull apache/cassandra-testing-ubuntu2004-java11-w-dependencies

Start the container

docker run -di -m 8G --cpus 4 \
    --mount type=bind,source=/path/to/cassandra/project,target=/home/cassandra/cassandra \
    --mount type=bind,source=/path/to/cassandra-dtest,target=/home/cassandra/cassandra-dtest \
    --name test \
    apache/cassandra-testing-ubuntu2004-java11-w-dependencies \
    dumb-init bash

Hint

Many distributed tests are not that demanding in terms of resources - 4G / 2 cores should be enough to start one node. However, some tests really run multiple nodes, and some of them are automatically skipped if the machine has less than 32G (there is a way to force running them though). Usually 8G / 4 cores is a convenient choice, which is enough for most of the tests.

To log into the container, use the following docker exec command:

docker exec -it `docker container ls -f name=test -q` bash

Setup Python environment

The tests are implemented in Python, so a Python virtual environment (see here for details) with all the required dependencies should be set up. If you are familiar with the Python ecosystem, you know what this is all about. Otherwise, follow the instructions below; they should be enough to run the tests.

For Python distributed tests do:

cd /home/cassandra/cassandra-dtest
virtualenv --python=python3 --clear --always-copy ../dtest-venv
source ../dtest-venv/bin/activate
CASS_DRIVER_NO_CYTHON=1 pip install -r requirements.txt

For CQLSH tests, replace some paths:

cd /home/cassandra/cassandra/pylib
virtualenv --python=python3 --clear --always-copy ../../cqlsh-venv
source ../../cqlsh-venv/bin/activate
CASS_DRIVER_NO_CYTHON=1 pip install -r requirements.txt

Hint

You may wonder why the CASS_DRIVER_NO_CYTHON=1 environment variable was added - it is not strictly required. However, it avoids compiling the Cassandra driver with Cython, which is not needed unless you want to test the Cython-compiled driver. In the end, it speeds up the installation of the requirements significantly, from the order of minutes to the order of seconds.

The above commands are also helpful for importing those test projects into your IDE. In that case, you need to run them on your host system rather than in the Docker container. For example, when you open the project in IntelliJ, the Python plugin may ask you to select the runtime environment. In this case, choose the existing virtualenv-based environment and point to bin/python under the created dtest-venv directory (or cqlsh-venv, or whichever name you have chosen).

Whether you want to play with Python distributed tests or CQLSH tests,you need to select the right virtual environment. Remember to switch tothe one you want:

deactivate
source /home/cassandra/dtest-venv/bin/activate

or

deactivate
source /home/cassandra/cqlsh-venv/bin/activate

CQLSH tests

CQLSH tests are located in the pylib/cqlshlib/test directory. There is a helper script that runs the tests for you. In particular, it builds the Cassandra project, creates a virtual environment, runs the CCM cluster, executes the tests, and eventually removes the cluster. You can find the script in the pylib directory. The only argument it requires is the Cassandra project directory:

cassandra@b69a382da7cd:~/cassandra/pylib$ ./cassandra-cqlsh-tests.sh /home/cassandra/cassandra

Refer to the README for further information.

Running selected tests

You may run all tests from a selected file by passing that file as an argument:

~/cassandra/pylib/cqlshlib$ pytest test/test_constants.py

To run a specific test case, you need to specify the module, class name,and the test name, for example:

~/cassandra/pylib/cqlshlib$ pytest cqlshlib.test.test_cqlsh_output:TestCqlshOutput.test_boolean_output

Python distributed tests

One way of doing integration or system testing at a larger scale is using dtest (Cassandra distributed test). These dtests automatically set up Cassandra clusters with certain configurations and simulate use cases you want to test.

The best way to learn how to write dtests is probably by reading the introduction "How to Write a Dtest". Looking at existing, recently updated tests in the project is another good activity. New tests must follow certain style conventions that are checked before contributions are accepted. In contrast to Cassandra, dtest issues and pull requests are managed on GitHub, therefore you should make sure to link any created dtests in your Cassandra ticket and also refer to the ticket number in your dtest PR.

Creating a good dtest can be tough, but it should not prevent you from submitting patches! Please ask in the corresponding JIRA ticket how to write a good dtest for the patch. In most cases a reviewer or committer will be able to support you, and in some cases they may offer to write a dtest for you.

Run the tests - quick examples

Note that you need to set up and activate the virtualenv for dtests (see the Setup Python environment section for details). Tests are implemented with the PyTest framework, so you use the pytest command to run them. Let’s run some tests:

pytest --cassandra-dir=/home/cassandra/cassandra schema_metadata_test.py::TestSchemaMetadata::test_clustering_order

That command runs the test_clustering_order test case from the TestSchemaMetadata class, located in the schema_metadata_test.py file. You may also provide the file and class to run all test cases from that class:

pytest --cassandra-dir=/home/cassandra/cassandra schema_metadata_test.py::TestSchemaMetadata

or just the file name to run all test cases from all classes defined in that file:

pytest --cassandra-dir=/home/cassandra/cassandra schema_metadata_test.py

You may also specify more individual targets:

pytest --cassandra-dir=/home/cassandra/cassandra schema_metadata_test.py::TestSchemaMetadata::test_basic_table_datatype schema_metadata_test.py::TestSchemaMetadata::test_udf

If you run pytest without specifying any test, it considers running all the tests it can find. More on test selection can be found in the PyTest documentation. You probably noticed that --cassandra-dir=/home/cassandra/cassandra is constantly added to the command line. It is one of the cassandra-dtest custom arguments - the mandatory one - unless it is defined, you cannot run any Cassandra dtest.

Setting up PyTest

All the possible options can be listed by invoking pytest --help. You will see tons of possible parameters - some of them are native PyTest options, and some come from Cassandra dtest. When you look carefully at the help note, you will notice that some commonly used options, usually fixed for all invocations, can be put into the pytest.ini file. In particular, it is quite practical to define the following:

[pytest]
cassandra_dir = /home/cassandra/cassandra
log_cli = True
log_cli_level = DEBUG

so that you do not have to provide the --cassandra-dir param each time you run a test. The other two options set up console logging - remove them if you want logs stored only in log files.

Running tests with specific configuration

There are a couple of options to enforce exact test configuration (their names are quite self-explanatory):

  • --use-vnodes and --num-tokens=xxx - enables the support of virtual nodes with a certain number of tokens

  • --use-off-heap-memtables - use off-heap memtables instead of the default heap-based ones

  • --data-dir-count-per-instance=xxx - the number of data directories configured per each instance

Note that the list can grow in the future as new predefined configurations are added to dtests. It is also possible to pass extra Java properties to each Cassandra node started by the tests - define those options in the JVM_EXTRA_OPTS environment variable before running the test.

Listing the tests

You can do a dry run, so that the tests are only listed and not invoked. To do that, add --collect-only to the pytest command. The additional -q option prints the results in the same format as you would pass the test name to the pytest command:

pytest --collect-only -q

lists all the tests pytest would run if no particular test is specified. Similarly, to list the test cases in some class, do:

$ pytest --collect-only -q schema_metadata_test.py::TestSchemaMetadata
schema_metadata_test.py::TestSchemaMetadata::test_creating_and_dropping_keyspace
schema_metadata_test.py::TestSchemaMetadata::test_creating_and_dropping_table
schema_metadata_test.py::TestSchemaMetadata::test_creating_and_dropping_table_with_2ary_indexes
schema_metadata_test.py::TestSchemaMetadata::test_creating_and_dropping_user_types
schema_metadata_test.py::TestSchemaMetadata::test_creating_and_dropping_udf
schema_metadata_test.py::TestSchemaMetadata::test_creating_and_dropping_uda
schema_metadata_test.py::TestSchemaMetadata::test_basic_table_datatype
schema_metadata_test.py::TestSchemaMetadata::test_collection_table_datatype
schema_metadata_test.py::TestSchemaMetadata::test_clustering_order
schema_metadata_test.py::TestSchemaMetadata::test_compact_storage
schema_metadata_test.py::TestSchemaMetadata::test_compact_storage_composite
schema_metadata_test.py::TestSchemaMetadata::test_nondefault_table_settings
schema_metadata_test.py::TestSchemaMetadata::test_indexes
schema_metadata_test.py::TestSchemaMetadata::test_durable_writes
schema_metadata_test.py::TestSchemaMetadata::test_static_column
schema_metadata_test.py::TestSchemaMetadata::test_udt_table
schema_metadata_test.py::TestSchemaMetadata::test_udf
schema_metadata_test.py::TestSchemaMetadata::test_uda

You can copy/paste the selected test case to the pytest command torun it.

Filtering tests

Based on configuration

Most tests run with any configuration, but a subset of tests (test cases) only run if a specific configuration is used. In particular, there are tests annotated with:

  • @pytest.mark.vnodes - the test is only invoked when the support of virtual nodes is enabled

  • @pytest.mark.no_vnodes - the test is only invoked when the support of virtual nodes is disabled

  • @pytest.mark.no_offheap_memtables - the test is only invoked if off-heap memtables are not used

Note that enabling and disabling vnodes are mutually exclusive. If a test is marked to run only with vnodes, it does not run when vnodes are disabled; similarly, when a test is marked to run only without vnodes, it does not run when vnodes are enabled - therefore, there are always some tests which would not run with a single configuration.
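
The way these markers interact with the configuration can be sketched as a tiny predicate (an illustrative model only; the real filtering lives in cassandra-dtest's conftest code):

```python
# Illustrative model of marker-based test selection; the marker names come
# from the dtest annotations above, the function itself is hypothetical.
def runs_in_config(markers, vnodes_enabled):
    """Return True if a test with the given markers runs under this configuration."""
    if "vnodes" in markers and not vnodes_enabled:
        return False  # test requires vnodes, but they are disabled
    if "no_vnodes" in markers and vnodes_enabled:
        return False  # test requires single-token nodes, but vnodes are enabled
    return True

print(runs_in_config({"vnodes"}, vnodes_enabled=True))     # True
print(runs_in_config({"vnodes"}, vnodes_enabled=False))    # False
print(runs_in_config({"no_vnodes"}, vnodes_enabled=True))  # False
print(runs_in_config(set(), vnodes_enabled=False))         # True
```

An unmarked test runs under either configuration, which is why a single run never covers the vnodes-marked and no_vnodes-marked tests at the same time.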

Based on resource usage

There are also tests marked with:

@pytest.mark.resource_intensive

which means that the test requires more resources than a regular test because it usually starts a cluster of several nodes. The meaning of resource-intensive is hardcoded to 32GB of available memory, and unless your machine or Docker container has at least that amount of RAM, such tests are skipped. There are a couple of arguments that allow for some control of that automatic exclusion:

  • --force-resource-intensive-tests - forces the execution of tests marked as resource_intensive, regardless of whether there is enough memory available or not

  • --only-resource-intensive-tests - only run tests marked as resource_intensive - it makes all the tests without the resource_intensive annotation be filtered out; technically, it is equivalent to passing the native PyTest argument -m resource_intensive

  • --skip-resource-intensive-tests - skip all tests marked as resource_intensive - it is the opposite of the previous argument, and it is equivalent to the PyTest native argument -m 'not resource_intensive'

Based on the test type

Upgrade tests are marked with:

@pytest.mark.upgrade_test

Those tests are not invoked by default at all (just like running PyTest with -m 'not upgrade_test'), and you have to add some extra options to run them:

  • --execute-upgrade-tests - enables execution of upgrade tests along with other tests - when this option is added, the upgrade tests are not filtered out

  • --execute-upgrade-tests-only - execute only upgrade tests and filter out all other tests which do not have the @pytest.mark.upgrade_test annotation (just like running PyTest with -m 'upgrade_test')

Filtering examples

It does not matter whether you want to invoke individual tests or all tests, or whether you only want to list them; the above filtering rules apply. So by using the --collect-only option, you can learn which tests would be invoked.

To list all the applicable tests for the current configuration, use the following command:

pytest --collect-only -q --execute-upgrade-tests --force-resource-intensive-tests

List tests specific to vnodes (which would only run if vnodes are enabled):

pytest --collect-only -q --execute-upgrade-tests --force-resource-intensive-tests --use-vnodes -m vnodes

List tests that are not resource-intensive:

pytest --collect-only -q --execute-upgrade-tests --skip-resource-intensive-tests

Upgrade tests

Upgrade tests always involve more than one product version. There are two kinds of upgrade tests regarding the product versions they span - let’s call them fixed and generated.

In the case of fixed tests, the origin and target versions are hardcoded. They look pretty usual, for example:

pytest --collect-only -q --execute-upgrade-tests --execute-upgrade-tests-only upgrade_tests/upgrade_supercolumns_test.py

prints:

upgrade_tests/upgrade_supercolumns_test.py::TestSCUpgrade::test_upgrade_super_columns_through_all_versions
upgrade_tests/upgrade_supercolumns_test.py::TestSCUpgrade::test_upgrade_super_columns_through_limited_versions

When you look into the code, you will see the fixed upgrade path:

def test_upgrade_super_columns_through_all_versions(self):
    self._upgrade_super_columns_through_versions_test(
        upgrade_path=[indev_2_2_x, indev_3_0_x, indev_3_11_x, indev_trunk])

The generated upgrade tests are listed several times - the first occurrence of the test case is a generic test definition, and then it is repeated many times in generated test classes. For example:

pytest --cassandra-dir=/home/cassandra/cassandra --collect-only -q --execute-upgrade-tests --execute-upgrade-tests-only upgrade_tests/cql_tests.py -k test_set

prints:

upgrade_tests/cql_tests.py::cls::test_set
upgrade_tests/cql_tests.py::TestCQLNodes3RF3_Upgrade_current_2_2_x_To_indev_2_2_x::test_set
upgrade_tests/cql_tests.py::TestCQLNodes3RF3_Upgrade_current_3_0_x_To_indev_3_0_x::test_set
upgrade_tests/cql_tests.py::TestCQLNodes3RF3_Upgrade_current_3_11_x_To_indev_3_11_x::test_set
upgrade_tests/cql_tests.py::TestCQLNodes3RF3_Upgrade_current_4_0_x_To_indev_4_0_x::test_set
upgrade_tests/cql_tests.py::TestCQLNodes3RF3_Upgrade_indev_2_2_x_To_indev_3_0_x::test_set
upgrade_tests/cql_tests.py::TestCQLNodes3RF3_Upgrade_indev_2_2_x_To_indev_3_11_x::test_set
upgrade_tests/cql_tests.py::TestCQLNodes3RF3_Upgrade_indev_3_0_x_To_indev_3_11_x::test_set
upgrade_tests/cql_tests.py::TestCQLNodes3RF3_Upgrade_indev_3_0_x_To_indev_4_0_x::test_set
upgrade_tests/cql_tests.py::TestCQLNodes3RF3_Upgrade_indev_3_11_x_To_indev_4_0_x::test_set
upgrade_tests/cql_tests.py::TestCQLNodes3RF3_Upgrade_indev_4_0_x_To_indev_trunk::test_set
upgrade_tests/cql_tests.py::TestCQLNodes2RF1_Upgrade_current_2_2_x_To_indev_2_2_x::test_set
upgrade_tests/cql_tests.py::TestCQLNodes2RF1_Upgrade_current_3_0_x_To_indev_3_0_x::test_set
upgrade_tests/cql_tests.py::TestCQLNodes2RF1_Upgrade_current_3_11_x_To_indev_3_11_x::test_set
upgrade_tests/cql_tests.py::TestCQLNodes2RF1_Upgrade_current_4_0_x_To_indev_4_0_x::test_set
upgrade_tests/cql_tests.py::TestCQLNodes2RF1_Upgrade_indev_2_2_x_To_indev_3_0_x::test_set
upgrade_tests/cql_tests.py::TestCQLNodes2RF1_Upgrade_indev_2_2_x_To_indev_3_11_x::test_set
upgrade_tests/cql_tests.py::TestCQLNodes2RF1_Upgrade_indev_3_0_x_To_indev_3_11_x::test_set
upgrade_tests/cql_tests.py::TestCQLNodes2RF1_Upgrade_indev_3_0_x_To_indev_4_0_x::test_set
upgrade_tests/cql_tests.py::TestCQLNodes2RF1_Upgrade_indev_3_11_x_To_indev_4_0_x::test_set
upgrade_tests/cql_tests.py::TestCQLNodes2RF1_Upgrade_indev_4_0_x_To_indev_trunk::test_set

In this example, the test case name is just test_set, and the base class name is TestCQL - the suffix of the class name is automatically generated from the provided specification. The first component is the cluster specification - there are two variants: Nodes2RF1 and Nodes3RF3. The first denotes that the upgrade is tested on a 2-node cluster with a keyspace using replication factor 1; analogously, the second variant uses a 3-node cluster with RF = 3.

Then, there is the upgrade specification - for example, Upgrade_indev_3_11_x_To_indev_4_0_x - which means that this test upgrades from the development version of Cassandra 3.11 to the development version of Cassandra 4.0 - the meaning of indev/current and where they are defined is explained later.
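
The naming scheme can be illustrated with a throwaway helper (hypothetical; the real classes are generated dynamically by the dtest framework):

```python
def generated_class_name(base, nodes, rf, origin, target):
    """Reconstruct a generated upgrade-test class name from its specification."""
    return f"{base}Nodes{nodes}RF{rf}_Upgrade_{origin}_To_{target}"

print(generated_class_name("TestCQL", 3, 3, "indev_3_11_x", "indev_4_0_x"))
# TestCQLNodes3RF3_Upgrade_indev_3_11_x_To_indev_4_0_x
```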

When you look into the implementation, you will notice that such upgrade test classes inherit from the UpgradeTester class, and they have the specifications defined at the end of the file. In this particular case, it is something like:

topology_specs = [
    {'NODES': 3, 'RF': 3, 'CL': ConsistencyLevel.ALL},
    {'NODES': 2, 'RF': 1},
]
specs = [dict(s, UPGRADE_PATH=p, __test__=True)
         for s, p in itertools.product(topology_specs, build_upgrade_pairs())]

As you can see, there is a list of cluster specifications, and the cross product is calculated with the upgrade paths returned by the build_upgrade_pairs() function. That list of specifications is used to dynamically generate the upgrade tests.

Suppose you need to test something specific for your scenario. In that case, you can add more cluster specifications, like a test with 1 node, or a test with 5 nodes with a different replication factor or consistency level. build_upgrade_pairs() returns the list of upgrade paths (actually just the origin and target versions). That list is generated according to the upgrade manifest.
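
The cross product can be reproduced with plain itertools; in this sketch build_upgrade_pairs() is replaced by a stub with two made-up upgrade paths:

```python
import itertools

topology_specs = [
    {'NODES': 3, 'RF': 3},
    {'NODES': 2, 'RF': 1},
]

# Stub standing in for build_upgrade_pairs(); the real function
# derives the paths from the upgrade manifest.
upgrade_pairs = [('indev_3_11_x', 'indev_4_0_x'),
                 ('indev_4_0_x', 'indev_trunk')]

specs = [dict(s, UPGRADE_PATH=p, __test__=True)
         for s, p in itertools.product(topology_specs, upgrade_pairs)]

print(len(specs))  # 4: 2 topologies x 2 upgrade paths
print(specs[0]['UPGRADE_PATH'])  # ('indev_3_11_x', 'indev_4_0_x')
```

Adding a topology entry (say, 5 nodes with RF = 5) multiplies the number of generated test specifications by the number of upgrade paths.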

Upgrade manifest

The upgrade manifest is a file where all the upgrade paths are defined. It is a regular Python file located at upgrade_tests/upgrade_manifest.py. As you noticed, the Cassandra origin and target version descriptions mentioned in the upgrade test consist of an indev or current prefix followed by a version string. The definition of each such version description can be found in the manifest, for example:

indev_3_11_x = VersionMeta(name='indev_3_11_x', family=CASSANDRA_3_11, variant='indev',
                           version='github:apache/cassandra-3.11',
                           min_proto_v=3, max_proto_v=4, java_versions=(8,))
current_3_11_x = VersionMeta(name='current_3_11_x', family=CASSANDRA_3_11, variant='current',
                             version='3.11.10',
                             min_proto_v=3, max_proto_v=4, java_versions=(8,))

There are a couple of different properties which describe those two versions:

  • name - the name as it appears in the names of the generated test classes

  • family - family is an enumeration defined at the beginning of the upgrade manifest - say, family CASSANDRA_3_11 is just the string "3.11". Some major features were introduced or removed with new version families, and therefore some checks can be done or some features can be enabled/disabled accordingly, for example:

if self.cluster.version() < CASSANDRA_4_0:
    node1.nodetool("enablethrift")

But it is also used to determine whether our checked-out version matches the target version in the upgrade pair (more on that later).

  • variant and version - there are indev or current variants:

    • indev variant means that the development version of Cassandra will be used. That is, that version is checked out from the Git repository and built before running the upgrade (CCM does it). In this case, the version string is specified as github:apache/cassandra-3.11, which means that it will check out the cassandra-3.11 branch from the GitHub repository whose alias is apache. Aliases are defined in the CCM configuration file, usually located at ~/.ccm/config - in this particular case, it could be something like:

[aliases]
apache:git@github.com:apache/cassandra.git

    • current variant means that a released version of Cassandra will be used. It means that the Cassandra distribution denoted by the specified version (3.11.10 in this case) is downloaded from the Apache repository/mirror - again, the repository can be defined in the CCM configuration file, under the repositories section, something like:

[repositories]
cassandra=https://archive.apache.org/dist/cassandra

  • min_proto_v, max_proto_v - the range of usable Cassandra driver protocol versions

  • java_versions - supported Java versions

The possible upgrade paths are defined later in the upgrade manifest - when you scroll through the file, you will find the MANIFEST map, which may look similar to:

MANIFEST = {
    current_2_1_x:  [indev_2_2_x, indev_3_0_x, indev_3_11_x],
    current_2_2_x:  [indev_2_2_x, indev_3_0_x, indev_3_11_x],
    current_3_0_x:  [indev_3_0_x, indev_3_11_x, indev_4_0_x],
    current_3_11_x: [indev_3_11_x, indev_4_0_x],
    current_4_0_x:  [indev_4_0_x, indev_trunk],

    indev_2_2_x:  [indev_3_0_x, indev_3_11_x],
    indev_3_0_x:  [indev_3_11_x, indev_4_0_x],
    indev_3_11_x: [indev_4_0_x],
    indev_4_0_x:  [indev_trunk]
}

It is a simple map where, for an origin version (as a key), there is a list of possible target versions (as a value). Say:

current_4_0_x: [indev_4_0_x, indev_trunk]

means that upgrades from current_4_0_x to indev_4_0_x and from current_4_0_x to indev_trunk will be considered. You may make changes to that upgrade scenario in your development branch according to your needs. There is a command-line option that allows filtering across upgrade scenarios: --upgrade-version-selection=xxx. The possible values for that option are as follows:

  • indev - the default; only selects those upgrade scenarios where the target version is in the indev variant

  • both - selects upgrade paths where either both origin and target versions are in the same variant or have the same version family

  • releases - selects upgrade paths between versions in the current variant, or from the current to the indev variant if both have the same version family

  • all - no filtering at all - all variants are tested
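
To make the expansion concrete, here is a sketch with string stand-ins for the VersionMeta objects; it expands a manifest fragment into (origin, target) pairs and applies the default indev target selection by name prefix:

```python
# String stand-ins for VersionMeta entries (the real manifest maps objects).
MANIFEST = {
    'current_4_0_x': ['indev_4_0_x', 'indev_trunk'],
    'indev_3_11_x':  ['indev_4_0_x'],
    'indev_4_0_x':   ['indev_trunk'],
}

# Expand the map into concrete (origin, target) upgrade pairs.
pairs = [(origin, target)
         for origin, targets in MANIFEST.items()
         for target in targets]

# Default --upgrade-version-selection=indev: keep paths with an indev target.
indev_pairs = [(o, t) for o, t in pairs if t.startswith('indev_')]

print(len(pairs))                                 # 4
print(('current_4_0_x', 'indev_trunk') in pairs)  # True
print(len(indev_pairs))                           # 4 (all targets are indev here)
```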

Running upgrades with local distribution

The upgrade test can use your local Cassandra distribution, the one specified by the cassandra_dir property, as the target version if the following preconditions are satisfied:

  • the target version is in the indev variant,

  • the version family set in the version description matches the version family of your local distribution

For example, if your local distribution is branched off from the cassandra-4.0 branch, it likely matches indev_4_0_x. It means that an upgrade path with target version indev_4_0_x uses your local distribution. There is a handy command-line option which filters out all the upgrade tests which do not match the local distribution: --upgrade-target-version-only. Given you are on the cassandra-4.0 branch, when applied to the previous example, it will be something similar to:

pytest --cassandra-dir=/home/cassandra/cassandra --collect-only -q --execute-upgrade-tests --execute-upgrade-tests-only upgrade_tests/cql_tests.py -k test_set --upgrade-target-version-only

prints:

upgrade_tests/cql_tests.py::cls::test_set
upgrade_tests/cql_tests.py::TestCQLNodes3RF3_Upgrade_current_4_0_x_To_indev_4_0_x::test_set
upgrade_tests/cql_tests.py::TestCQLNodes3RF3_Upgrade_indev_3_0_x_To_indev_4_0_x::test_set
upgrade_tests/cql_tests.py::TestCQLNodes3RF3_Upgrade_indev_3_11_x_To_indev_4_0_x::test_set
upgrade_tests/cql_tests.py::TestCQLNodes2RF1_Upgrade_current_4_0_x_To_indev_4_0_x::test_set
upgrade_tests/cql_tests.py::TestCQLNodes2RF1_Upgrade_indev_3_0_x_To_indev_4_0_x::test_set
upgrade_tests/cql_tests.py::TestCQLNodes2RF1_Upgrade_indev_3_11_x_To_indev_4_0_x::test_set

You can see that the upgrade tests were limited to the ones whose target version is indev and whose family matches 4.0.

Logging

A couple of common PyTest arguments control what is logged to the file and the console from the Python test code. The arguments which start with --log-xxx are described well in the help message (pytest --help) and in the PyTest documentation, so they are not discussed further here. However, most of the tests start a cluster of Cassandra nodes, and each node generates its own logging information and has its own data directories.

By default, the logs from the nodes are copied to a uniquely named directory created under the logs subdirectory in the root of the dtest project. For example:

(venv) cassandra@b69a382da7cd:~/cassandra-dtest$ ls logs/ -1
1627455923457_test_set
1627456019264_test_set
1627456474949_test_set
1627456527540_test_list
last

The last item is a symbolic link to the directory containing the logs from the last executed test. Each such directory includes logs from each started node - system, debug, and GC logs, as well as the standard streams registered each time the node was started:

(venv) cassandra@b69a382da7cd:~/cassandra-dtest$ ls logs/last -1
node1.log
node1_debug.log
node1_gc.log
node1_startup-1627456480.3398306-stderr.log
node1_startup-1627456480.3398306-stdout.log
node1_startup-1627456507.2186499-stderr.log
node1_startup-1627456507.2186499-stdout.log
node2.log
node2_debug.log
node2_gc.log
node2_startup-1627456481.10463-stderr.log
node2_startup-1627456481.10463-stdout.log

Those log files are not collected if the --delete-logs command-line option is added to PyTest. The nodes also produce data files which may sometimes be useful for resolving failures. Those files are usually deleted when the test is completed, but there are some options to control that behavior:

  • --keep-test-dir - keep the whole CCM directory with data files and logs when the test completes

  • --keep-failed-test-dir - only keep that directory when the test has failed

Now, how do you find that directory for a certain test? You need to grab that information from the test logs - for example, you may add the -s option to the command line and then look for "dtest_setup INFO" messages. For example:

05:56:06,383 dtest_setup INFO cluster ccm directory: /tmp/dtest-0onwvgkr

says that the cluster work directory is /tmp/dtest-0onwvgkr, and all node directories can be found under the test subdirectory:

(venv) cassandra@b69a382da7cd:~/cassandra-dtest$ ls /tmp/dtest-0onwvgkr/test -1
cluster.conf
node1
node2