Build: #17 failed

Job: Test Tasks MPI Many Linux 2.28 Rocky 8.10 Py3.10 failed

Job result summary

  • Result: Completed
  • Duration: 62 minutes
  • Total tests: 167

Tests

  • 167 tests in total
  • 2 tests failed
  • 2 failures are new
  • 9 tests were quarantined / skipped
  • 53 minutes taken in total
New test failures: 2

Failed: test_0_MPIInterface.test_PyParallelImagerHelper_interface (< 1 sec)
TypeError: 'NoneType' object is not subscriptable
self = <casampi.tests.test_casampi.test_0_MPIInterface testMethod=test_PyParallelImagerHelper_interface>

    def test_PyParallelImagerHelper_interface(self):
    
        # Get cluster (getCluster should automatically initialize it)
        self.sc = MPIInterface.getCluster()
        self.CL = self.sc._cluster
(21 more lines...)
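Note: a "'NoneType' object is not subscriptable" TypeError usually means some call in the truncated part of the traceback returned None and the result was then indexed. A minimal, generic sketch of that pattern (hypothetical names, not casampi code):

    # Illustrative only: how the TypeError above arises, plus a defensive guard.
    # lookup_cluster_nodes is a hypothetical stand-in, not a casampi function.
    def lookup_cluster_nodes(initialized):
        # Returns a node list when the cluster is up, None otherwise.
        return ["node0", "node1"] if initialized else None

    nodes = lookup_cluster_nodes(initialized=False)
    # nodes[0] here would raise: TypeError: 'NoneType' object is not subscriptable
    first = nodes[0] if nodes is not None else None
    print(first)  # None -> the caller can report a clear error instead of crashing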
Failed: test_MPICommandServer.test_server_fake_timeout_busy_wait (< 1 sec)
AttributeError: module 'sys' has no attribute 'set_int_max_str_digits'
self = <casampi.tests.test_casampi.test_MPICommandServer testMethod=test_server_fake_timeout_busy_wait>

        def test_server_fake_timeout_busy_wait(self):
    
            mon = MPIMonitorClient()
            ini_online = len(list(mon.get_server_rank_online()))
            self.assertTrue(ini_online > 0,
(12 more lines...)
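Note: sys.set_int_max_str_digits() was added in Python 3.11 and backported to the 3.10.7 / 3.9.14 security releases, so this AttributeError suggests the Python 3.10 interpreter in the test container predates 3.10.7. A hedged compatibility guard (a sketch, not necessarily how casampi should fix it):

    import sys

    # The int<->str digit limit (and sys.set_int_max_str_digits) only exists on
    # Python 3.11+ and the backported 3.10.7+/3.9.14+ security releases. Guarding
    # the call keeps the code importable on older 3.10.x interpreters like this one.
    if hasattr(sys, "set_int_max_str_digits"):
        sys.set_int_max_str_digits(0)  # 0 disables the conversion limit
    # On older interpreters there is no limit to relax, so nothing else to do.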

Error summary

The build generated some errors. See the full build log for more details.

fatal: could not read Username for 'https://open-bitbucket.nrao.edu': No such device or address
Error response from daemon: No such container: wheel-container-test
Error response from daemon: No such container: wheel-container-test
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed

  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
100 6002k  100 6002k    0     0  23.5M      0 --:--:-- --:--:-- --:--:-- 23.9M
[6282] Failed to execute script 'atlutil' due to unhandled exception!
Traceback (most recent call last):
  File "atlutil.py", line 200, in <module>
  File "atlutil.py", line 165, in has_fix_version
  File "json/__init__.py", line 354, in loads
  File "json/decoder.py", line 339, in decode
  File "json/decoder.py", line 357, in raw_decode
json.decoder.JSONDecodeError: Expecting value: line 12 column 1 (char 11)
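Note: the JSONDecodeError means atlutil's has_fix_version() handed json.loads() a body that is not valid JSON (for example an empty or HTML error response from the issue tracker). A defensive-parsing sketch with hypothetical names, not the actual atlutil code:

    import json

    def parse_fix_versions(raw_text):
        """Hypothetical helper: parse a JIRA-style REST response defensively."""
        try:
            payload = json.loads(raw_text)
        except json.JSONDecodeError as exc:
            # Non-JSON body (empty response, HTML error page, truncated download):
            # report the problem instead of letting the script die unhandled.
            raise RuntimeError(f"issue-tracker response was not JSON: {exc}") from exc
        return payload.get("fixVersions", [])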
WARNING: Running pip as the 'root' user can result in broken permissions and conflicting behaviour with the system package manager. It is recommended to use a virtual environment instead: https://pip.pypa.io/warnings/venv
Already on 'master'
Note: switching to 'origin/CAS-14029'.

You are in 'detached HEAD' state. You can look around, make experimental
changes and commit them, and you can discard any commits you make in this
state without impacting any branches by switching back to a branch.

If you want to create a new branch to retain commits you create, you may
do so (now or later) by using -c with the switch command. Example:

  git switch -c <new-branch-name>

Or undo this operation with:

  git switch -

Turn off this advice by setting config variable advice.detachedHead to false

HEAD is now at 72f4abe rename __mpi_runtime_config.threaded to __mpi_runtime_config.threads
2025-04-07 16:40:49        SEVERE        MPICommandClient::stop_services::MPICommandClient::stop_services::casa        Aborting command request with id# 70: {'command': 'pow(a,b)', 'parameters': {'a': 10, 'b': 100000000000000000}, 'mode': 'eval', 'id': 70, 'server': 1, 'status': 'request sent'}
2025-04-07 16:40:52        SEVERE        MPICommandClient::stop_services::MPICommandClient::stop_services::casa        MPIServers with rank [1] are in timeout condition, skipping MPI_Finalize()
2025-04-07 16:40:52        SEVERE        MPICommandClient::stop_services::MPICommandClient::stop_services::casa        Not possible to finalize gracefully... calling Aborting MPI environment
--------------------------------------------------------------------------
MPI_ABORT was invoked on rank 0 in communicator MPI_COMM_WORLD
  Proc: [[27131,1],0]
  Errorcode: 0

NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes.
You may or may not see output from other processes, depending on
exactly when Open MPI kills them.
--------------------------------------------------------------------------
--------------------------------------------------------------------------
prterun has exited due to process rank 0 with PID 5378 on node 441a44cd7322 exiting
improperly. There are three reasons this could occur:

1. this process did not call "init" before exiting, but others in the
job did. This can cause a job to hang indefinitely while it waits for
all processes to call "init". By rule, if one process calls "init",
then ALL processes must call "init" prior to termination.

2. this process called "init", but exited without calling "finalize".
By rule, all processes that call "init" MUST call "finalize" prior to
exiting or it will be considered an "abnormal termination"

3. this process called "MPI_Abort" or "prte_abort" and the mca
parameter prte_create_session_dirs is set to false. In this case, the
run-time cannot detect that the abort call was an abnormal
termination. Hence, the only error message you will receive is this
one.

This may have caused other processes in the application to be
terminated by signals sent by prterun (as reported here).

You can avoid this message by specifying -quiet on the prterun command
line.
--------------------------------------------------------------------------
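
Note: the SEVERE messages above explain the abort: server rank 1 was in a timeout condition, so MPICommandClient skipped MPI_Finalize() and aborted the MPI environment instead, which Open MPI then reports as an improper exit of rank 0. For reference, a minimal mpi4py sketch of the init/finalize pairing prterun expects (illustrative only, not casampi's startup code):

    # Run under e.g. `mpirun -n 2 python demo.py`. The rc settings below take
    # manual control of init/finalize instead of mpi4py's init-on-import default.
    from mpi4py import rc
    rc.initialize = False
    rc.finalize = False

    from mpi4py import MPI

    MPI.Init()                      # every rank must call init...
    try:
        comm = MPI.COMM_WORLD
        print(f"rank {comm.Get_rank()} of {comm.Get_size()} is up")
    finally:
        if not MPI.Is_finalized():
            MPI.Finalize()          # ...and finalize, or the exit is "abnormal"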