Fatal Python error: Segmentation fault #201

Open
dbl001 opened this issue Sep 28, 2022 · 1 comment

dbl001 commented Sep 28, 2022

I am running the Top2Vec code on macOS 12.6 with Python 3.8, tensorflow-macos, and tensorflow-metal.
I'm getting a segmentation fault in nn_descent:
[Screenshot, 2022-09-27: "Fatal Python error: Segmentation fault" raised from nn_descent]
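
For context, a sketch of the kind of call that reaches this code path (placeholders only; the actual test1.py is not shown, and docs and the embedding_model choice are stand-ins, not the real script):

# Hypothetical reproduction sketch -- `docs` and the embedding model stand in
# for the real test1.py inputs.
from top2vec import Top2Vec

docs = ["example document text"] * 5000   # stand-in corpus
model = Top2Vec(docs, embedding_model="universal-sentence-encoder")
# The segfault is raised while Top2Vec.__init__ runs UMAP, which in turn
# builds a pynndescent NNDescent index (see the stack below).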

Here are the packages:

 % pip show pynndescent   
Name: pynndescent
Version: 0.5.7
Summary: Nearest Neighbor Descent
Home-page: http://github.com/lmcinnes/pynndescent
Author: Leland McInnes
Author-email: [email protected]
License: BSD
Location: /Users/davidlaxer/tensorflow/lib/python3.8/site-packages
Requires: joblib, llvmlite, numba, scikit-learn, scipy
Required-by: umap-learn
(tensorflow) davidlaxer@x86_64-apple-darwin13 Top2Vec % pip show umap-learn
Name: umap-learn
Version: 0.5.3
Summary: Uniform Manifold Approximation and Projection
Home-page: http://github.com/lmcinnes/umap
Author: 
Author-email: 
License: BSD
Location: /Users/davidlaxer/tensorflow/lib/python3.8/site-packages
Requires: numba, numpy, pynndescent, scikit-learn, scipy, tqdm
Required-by: tensorflow-similarity, top2vec

 % pip show numba
Name: numba
Version: 0.56.2
Summary: compiling Python code using LLVM
Home-page: https://numba.pydata.org
Author: 
Author-email: 
License: BSD
Location: /Users/davidlaxer/tensorflow/lib/python3.8/site-packages
Requires: importlib-metadata, llvmlite, numpy, setuptools
Required-by: datashader, pynndescent, umap-learn

Here's the stack before the exception:

knn_search_index = NNDescent(
    X,
    n_neighbors=n_neighbors,
    metric=metric,
    metric_kwds=metric_kwds,
    random_state=random_state,
    n_trees=n_trees,
    n_iters=n_iters,
    max_candidates=60,
    low_memory=low_memory,
    n_jobs=n_jobs,
    verbose=verbose,
    compressed=False,
)
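
To check whether the crash reproduces outside Top2Vec/UMAP, here is a sketch of calling NNDescent directly on stand-in data (random X and assumed parameter values mirroring the call above, not the original embeddings):

import numpy as np
from pynndescent import NNDescent

# Stand-in data; the real embedding matrix is not reproduced here.
X = np.random.random((10000, 512)).astype(np.float32)

index = NNDescent(
    X,
    n_neighbors=15,
    metric="cosine",
    random_state=42,
    low_memory=True,
    n_jobs=-1,
    verbose=True,
    compressed=False,
)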

_deepcopy_atomic, copy.py:183
deepcopy, copy.py:146
_deepcopy_dict, copy.py:230
deepcopy, copy.py:146
_deepcopy_dict, copy.py:230
deepcopy, copy.py:146
_reconstruct, copy.py:270
deepcopy, copy.py:172
_deepcopy_dict, copy.py:230
deepcopy, copy.py:146
_reconstruct, copy.py:270
deepcopy, copy.py:172
_deepcopy_dict, copy.py:230
deepcopy, copy.py:146
_reconstruct, copy.py:270
deepcopy, copy.py:172
_deepcopy_dict, copy.py:230
deepcopy, copy.py:146
_deepcopy_dict, copy.py:230
deepcopy, copy.py:146
_reconstruct, copy.py:270
deepcopy, copy.py:172
slice_size, array_analysis.py:1817
to_shape, array_analysis.py:2074
_index_to_shape, array_analysis.py:2090
_analyze_op_getitem, array_analysis.py:2139
guard, ir_utils.py:1527
_analyze_expr, array_analysis.py:1571
_analyze_inst, array_analysis.py:1311
_determine_transform, array_analysis.py:1273
_run_on_blocks, array_analysis.py:1198
run, array_analysis.py:1178
_pre_run, parfor.py:2859
run, parfor.py:2867
run_pass, typed_passes.py:306
check, compiler_machinery.py:273
_runPass, compiler_machinery.py:311
_acquire_compile_lock, compiler_lock.py:35
run, compiler_machinery.py:356
_compile_core, compiler.py:486
_compile_ir, compiler.py:527
compile_ir, compiler.py:462
compile_ir, compiler.py:779
_create_gufunc_for_parfor_body, parfor_lowering.py:1509
_lower_parfor_parallel, parfor_lowering.py:316
lower_inst, lowering.py:567
lower_block, lowering.py:265
lower_function_body, lowering.py:251
lower_normal_function, lowering.py:222
lower, lowering.py:168
run_pass, typed_passes.py:394
check, compiler_machinery.py:273
_runPass, compiler_machinery.py:311
_acquire_compile_lock, compiler_lock.py:35
run, compiler_machinery.py:356
_compile_core, compiler.py:486
_compile_bytecode, compiler.py:520
compile_extra, compiler.py:452
compile_extra, compiler.py:716
_compile_core, dispatcher.py:152
_compile_cached, dispatcher.py:139
compile, dispatcher.py:125
compile, dispatcher.py:965
get_call_template, dispatcher.py:363
get_call_type, functions.py:540
_resolve_user_function_type, context.py:248
resolve_function_type, context.py:196
resolve_call, typeinfer.py:1555
resolve, typeinfer.py:601
__call__, typeinfer.py:578
propagate, typeinfer.py:155
propagate, typeinfer.py:1078
type_inference_stage, typed_passes.py:83
run_pass, typed_passes.py:105
check, compiler_machinery.py:273
_runPass, compiler_machinery.py:311
_acquire_compile_lock, compiler_lock.py:35
run, compiler_machinery.py:356
_compile_core, compiler.py:486
_compile_bytecode, compiler.py:520
compile_extra, compiler.py:452
compile_extra, compiler.py:716
_compile_core, dispatcher.py:152
_compile_cached, dispatcher.py:139
compile, dispatcher.py:125
compile, dispatcher.py:965
_compile_for_args, dispatcher.py:420
__init__, pynndescent_.py:891
nearest_neighbors, umap_.py:328
fit, umap_.py:2516
__init__, Top2Vec.py:669
<module>, test1.py:26
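
The stack shows the fault occurring while numba lowers a parallel region (parfor -> gufunc compilation) for the NN-descent kernel. A small sketch for checking which threading backend numba picks on this machine (numba.threading_layer() only reports after a parallel function has actually run):

import numpy as np
import numba
from numba import njit, prange

@njit(parallel=True)
def parallel_sum(a):
    total = 0.0
    for i in prange(a.shape[0]):
        total += a[i]
    return total

parallel_sum(np.ones(1000))     # force compilation and execution of a parallel kernel
print(numba.threading_layer())  # e.g. 'tbb', 'omp', or 'workqueue'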

Verbose output:

...
UMAP(n_neighbors=10, verbose=True)
OMP: Info #276: omp_set_nested routine deprecated, please use omp_set_max_active_levels instead.
Wed Sep 28 07:46:37 2022 Construct fuzzy simplicial set
Wed Sep 28 07:46:37 2022 Finding Nearest Neighbors
Wed Sep 28 07:47:11 2022 Building RP forest with 10 trees
Wed Sep 28 07:47:12 2022 NN descent for 14 iterations
Fatal Python error: Segmentation faultFatal Python error: Segmentation fault

Thread 0x0000700011629000 (most recent call first):
  File "/Users/davidlaxer/anaconda3/libFatal Python error: 

/Segmentation faultp

ython3.8/threading.py", line 306 in wait
  File "/Users/davidlaxer/anaconda3/lib/python3.8/threading.py", line 558 in wa
Process finished with exit code 139 (interrupted by signal 11: SIGSEGV)

Any clues as to what's causing the exception?

dbl001 commented Oct 2, 2022

This appears to be a numba/TBB issue causing the exception in fast_knn_indicies.
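
If TBB is the culprit, one workaround sketch is to force numba onto its built-in 'workqueue' threading layer via the NUMBA_THREADING_LAYER environment variable, set before numba gets imported (top2vec -> umap -> pynndescent import it):

import os
# Must be set before numba is imported, directly or indirectly.
os.environ["NUMBA_THREADING_LAYER"] = "workqueue"

from top2vec import Top2Vec  # imports umap/pynndescent/numba with the non-TBB layer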
