@maggiemoss
Created October 3, 2025 17:43
ERROR Argument `((ParamSpec(_InputT)) -> _RetT) | None` is not assignable to parameter `fn` with type `(...) -> Any` in function `torch._dynamo.eval_frame._NullDecorator.__call__` [bad-argument-type]
--> torch/__init__.py:2656:19
|
2656 | )(model)(*args, **kwargs)
| ^^^^^
|
ERROR Class member `AOTAutogradCachePickler.dispatch_table` overrides parent class `FxGraphCachePickler` in an inconsistent manner [bad-override]
--> torch/_functorch/_aot_autograd/autograd_cache.py:387:14
|
387 | self.dispatch_table: dict
| ^^^^^^^^^^^^^^
|
`AOTAutogradCachePickler.dispatch_table` has type `dict[Unknown, Unknown]`, which is not consistent with `Mapping[type, (Any) -> str | tuple[(...) -> Any, tuple[Any, ...]] | tuple[(...) -> Any, tuple[Any, ...], Any] | tuple[(...) -> Any, tuple[Any, ...], Any, Iterator[Any] | None] | tuple[(...) -> Any, tuple[Any, ...], Any, Iterator[Any] | None, Iterator[Any] | None]]` in `FxGraphCachePickler.dispatch_table` (the type of read-write attributes cannot be changed)
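This class of `bad-override` report comes from attribute invariance: a subclass may not re-declare a read-write attribute at a different (even wider) type. A minimal sketch, using stand-in classes rather than the actual torch picklers:

```python
# Minimal sketch of an invariant-attribute override. `Base`/`Child` and
# `table` are hypothetical names, not the torch classes from the report.
from typing import Mapping


class Base:
    # read-write attribute declared with a precise type
    table: Mapping[type, str]


class Child(Base):
    def __init__(self) -> None:
        # re-declaring the attribute as bare `dict` changes its type;
        # mutable attributes are invariant, so a checker flags this,
        # even though it runs fine at runtime
        self.table: dict = {}
```

The runtime behavior is unaffected; the complaint is purely about the static contract between parent and child.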
ERROR Object of class `NoneType` has no attribute `memory_format` [missing-attribute]
--> torch/_functorch/_aot_autograd/collect_metadata_analysis.py:89:8
|
89 | if memory_format.memory_format is not None:
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
ERROR Argument `memory_format | object` is not assignable to parameter `memory_format` with type `memory_format` in function `torch._C.TensorBase.contiguous` [bad-argument-type]
--> torch/_functorch/_aot_autograd/collect_metadata_analysis.py:91:44
|
91 | out = out.contiguous(memory_format=memory_format.memory_format)
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
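Both `memory_format` reports above are Optional-narrowing issues: an attribute is read off a value that may be `None`, and the result is itself Optional. A hedged sketch of the usual fix, with `Meta` standing in for the real `MemoryFormatMeta`:

```python
# Hedged sketch: guard an Optional (and its Optional field) before use.
# `Meta` and `pick_format` are illustrative stand-ins.
from dataclasses import dataclass
from typing import Optional


@dataclass
class Meta:
    memory_format: Optional[str] = None


def pick_format(meta: Optional[Meta]) -> str:
    # narrow both the container and the field; inside this branch the
    # checker knows meta.memory_format is a str
    if meta is not None and meta.memory_format is not None:
        return meta.memory_format
    return "contiguous"
```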
ERROR Object of class `Tensor` has no attribute `__tensor_flatten__` [missing-attribute]
--> torch/_functorch/_aot_autograd/collect_metadata_analysis.py:120:17
|
120 | attrs = out.__tensor_flatten__()[0]
| ^^^^^^^^^^^^^^^^^^^^^^
|
ERROR Object of class `MemoryFormatMeta` has no attribute `append`
Object of class `NoneType` has no attribute `append` [missing-attribute]
--> torch/_functorch/_aot_autograd/collect_metadata_analysis.py:129:13
|
129 | out_memory_format.append(new_elem_memory_format)
| ^^^^^^^^^^^^^^^^^^^^^^^^
|
ERROR Cannot index into `dict[StorageWeakRef, int]` [index-error]
--> torch/_functorch/_aot_autograd/collect_metadata_analysis.py:495:45
|
495 | base_idx = inp_storage_refs[curr_storage]
| ^^^^^^^^^^^^
|
Argument `StorageWeakRef | None` is not assignable to parameter `key` with type `StorageWeakRef` in function `dict.__getitem__`
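`dict.__getitem__` requires a non-Optional key, so indexing with a `StorageWeakRef | None` is rejected. A minimal stand-in showing the branch that narrows the key first:

```python
# Sketch of narrowing an Optional dict key before indexing.
# `lookup` and the str keys are illustrative, not the torch code.
from typing import Optional


def lookup(refs: dict[str, int], key: Optional[str]) -> int:
    # raising (or otherwise branching) on None narrows `Optional[str]`
    # to `str`, which satisfies dict.__getitem__
    if key is None:
        raise KeyError("key is None")
    return refs[key]
```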
ERROR Cannot use `None` as a context manager [bad-context-manager]
--> torch/_functorch/_aot_autograd/collect_metadata_analysis.py:702:18
|
702 | with detect_fake_mode():
| ^^^^^^^^^^^^^^^^^^
|
Object of class `NoneType` has no attribute `__enter__`
ERROR Cannot use `None` as a context manager [bad-context-manager]
--> torch/_functorch/_aot_autograd/collect_metadata_analysis.py:702:18
|
702 | with detect_fake_mode():
| ^^^^^^^^^^^^^^^^^^
|
Object of class `NoneType` has no attribute `__exit__`
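The two `bad-context-manager` reports say the same thing twice (no `__enter__`, no `__exit__`): the function's return type includes `None`, and `None` cannot back a `with` block. A hedged sketch of one common workaround, assuming (as the report suggests) the mode function may return `None`:

```python
# Sketch: substitute contextlib.nullcontext() when a mode function
# returns None. `run_in_mode` is a hypothetical helper.
from contextlib import nullcontext


def run_in_mode(mode) -> str:
    # None has neither __enter__ nor __exit__; nullcontext() is a no-op
    # context manager that keeps the `with` statement valid
    with (mode if mode is not None else nullcontext()):
        return "ran"
```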
ERROR Cannot set item in `dict[DifferentiableAOTInput, tuple[Node, Node | None]]` [unsupported-operation]
--> torch/_functorch/_aot_autograd/fx_utils.py:81:25
|
81 | input_index[desc] = (n, None)
| ^^^^
|
Argument `DifferentiableAOTInput | SubclassGetAttrAOTInput` is not assignable to parameter `key` with type `DifferentiableAOTInput` in function `dict.__setitem__`
ERROR Cannot set item in `dict[DifferentiableAOTOutput, tuple[Node, Node | None]]` [unsupported-operation]
--> torch/_functorch/_aot_autograd/fx_utils.py:132:30
|
132 | output_index[sub_d] = (sub_n, None)
| ^^^^^
|
Argument `DifferentiableAOTOutput | SubclassGetAttrAOTOutput` is not assignable to parameter `key` with type `DifferentiableAOTOutput` in function `dict.__setitem__`
ERROR Argument `AOTInput | list[AOTInput] | list[Unknown] | tuple[list[Unknown], list[Unknown]]` is not assignable to parameter `flat_args_descs` with type `list[AOTInput]` in function `torch._functorch._aot_autograd.collect_metadata_analysis.run_functionalized_fw_and_collect_metadata` [bad-argument-type]
--> torch/_functorch/_aot_autograd/graph_capture_wrappers.py:1308:25
|
1308 | flat_args_descs=primals_unwrapped_descs,
| ^^^^^^^^^^^^^^^^^^^^^^^
|
ERROR Type `BackwardState` is not iterable [not-iterable]
--> torch/_functorch/_aot_autograd/graph_capture_wrappers.py:1312:7
|
1312 | )(*primals_unwrapped)
| ^^^^^^^^^^^^^^^^^^
|
ERROR Type `SymInt` is not iterable [not-iterable]
--> torch/_functorch/_aot_autograd/graph_capture_wrappers.py:1312:7
|
1312 | )(*primals_unwrapped)
| ^^^^^^^^^^^^^^^^^^
|
ERROR Type `int` is not iterable [not-iterable]
--> torch/_functorch/_aot_autograd/graph_capture_wrappers.py:1312:7
|
1312 | )(*primals_unwrapped)
| ^^^^^^^^^^^^^^^^^^
|
ERROR `FakeTensor | Tensor | None` is not assignable to `FakeTensor | None` (caused by inconsistent types when breaking cycles) [bad-assignment]
--> torch/_functorch/_aot_autograd/graph_compile.py:428:5
|
428 | / for t in itertools.chain(fw_ins, user_fw_outs, bw_outs):
429 | | # Only access storage if a tensor has storage (not sparse)
430 | | if t is not None and isinstance(t, FakeTensor) and not is_sparse_any(t):
431 | | storage_refs.add(StorageWeakRef(t.untyped_storage()))
| |_________________________________________________________________^
|
ERROR Argument `Unknown | None` is not assignable to parameter `saved_tensors` with type `list[FakeTensor]` in function `collect_fw_donated_buffer_idxs` [bad-argument-type]
--> torch/_functorch/_aot_autograd/graph_compile.py:497:9
|
497 | saved_tensors,
| ^^^^^^^^^^^^^
|
ERROR Argument `int` is not assignable to parameter `stride` with type `Sequence[SymInt | int]` in function `torch._C.TensorBase.as_strided` [bad-argument-type]
--> torch/_functorch/_aot_autograd/graph_compile.py:1765:76
|
1765 | placeholder_list[i] = ph_arg.as_strided(ph_arg.size(), real_stride)
| ^^^^^^^^^^^
|
ERROR Object of class `Tensor` has no attribute `_dynamo_weak_dynamic_indices` [missing-attribute]
--> torch/_functorch/_aot_autograd/runtime_wrappers.py:228:9
|
228 | t._dynamo_weak_dynamic_indices |= dims
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
ERROR Type `None` is not iterable [not-iterable]
--> torch/_functorch/_aot_autograd/runtime_wrappers.py:1145:39
|
1145 | for inner_idx_or_tuple in synthetic_base_info:
| ^^^^^^^^^^^^^^^^^^^
|
ERROR Class member `CompiledFunction.forward` overrides parent class `Function` in an inconsistent manner [bad-override]
--> torch/_functorch/_aot_autograd/runtime_wrappers.py:2115:17
|
2115 | def forward(ctx, *deduped_flat_tensor_args):
| ^^^^^^^
|
`CompiledFunction.forward` has type `(ctx: Unknown, *deduped_flat_tensor_args: Unknown) -> Unknown`, which is not consistent with `(*args: Any, **kwargs: Any) -> Any` in `Function.forward` (the type of read-write attributes cannot be changed)
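The recurring `forward`/`backward` override reports follow the same pattern: the parent declares a maximally general `(*args, **kwargs)` signature, and a subclass narrows it to named positional parameters, which the checker rejects for read-write class members. A minimal sketch with stand-in classes (not torch's `Function`):

```python
# Sketch of the override shape that satisfies the checker: keep the
# `*args, **kwargs` signature and unpack inside. `Function`/`Compiled`
# are illustrative stand-ins.
from typing import Any


class Function:
    @staticmethod
    def forward(*args: Any, **kwargs: Any) -> Any:
        raise NotImplementedError


class Compiled(Function):
    @staticmethod
    def forward(*args: Any, **kwargs: Any) -> Any:
        # unpacking here instead of in the signature keeps the override
        # consistent with the parent's declared type
        ctx, *tensors = args
        return len(tensors)
```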
ERROR Argument `tuple[Generator | Unknown, ...] | Unknown` is not assignable to parameter `args` with type `list[Any] | tuple[Any]` in function `torch._functorch._aot_autograd.utils.call_func_at_runtime_with_args` [bad-argument-type]
--> torch/_functorch/_aot_autograd/runtime_wrappers.py:2151:21
|
2151 | args,
| ^^^^
|
ERROR Class member `CompiledFunctionBackward.forward` overrides parent class `Function` in an inconsistent manner [bad-override]
--> torch/_functorch/_aot_autograd/runtime_wrappers.py:2346:25
|
2346 | def forward(double_ctx, *unused_args):
| ^^^^^^^
|
`CompiledFunctionBackward.forward` has type `(double_ctx: Unknown, *unused_args: Unknown) -> Unknown`, which is not consistent with `(*args: Any, **kwargs: Any) -> Any` in `Function.forward` (the type of read-write attributes cannot be changed)
ERROR Attribute `output_code_ty` cannot depend on type variable `TOutputCode`, which is not in the scope of class `SerializableAOTDispatchCompiler` [invalid-type-var]
--> torch/_functorch/_aot_autograd/schemas.py:1234:14
|
1234 | self.output_code_ty = output_code_ty
| ^^^^^^^^^^^^^^
|
ERROR Attribute `compiler_fn` cannot depend on type variable `TOutputCode`, which is not in the scope of class `SerializableAOTDispatchCompiler` [invalid-type-var]
--> torch/_functorch/_aot_autograd/schemas.py:1235:14
|
1235 | self.compiler_fn = compiler_fn
| ^^^^^^^^^^^
|
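The two `invalid-type-var` reports indicate that `TOutputCode` is scoped to a function (or another class) but used for instance attributes; attributes may only depend on type variables the class itself declares. A hedged sketch of the generic-class shape that brings the variable into scope:

```python
# Sketch: declaring the class Generic over the type variable lets
# attributes legally depend on it. `Compiler`/`TOut` are stand-in names.
from typing import Generic, TypeVar

TOut = TypeVar("TOut")


class Compiler(Generic[TOut]):
    def __init__(self, output_ty: type[TOut]) -> None:
        # TOut is in scope here because the class is Generic[TOut]
        self.output_ty = output_ty
```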
ERROR No matching overload found for function `list.__init__` [no-matching-overload]
--> torch/_functorch/_aot_autograd/subclass_parametrization.py:93:13
|
93 | list(module.named_buffers(recurse=False)),
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
Possible overloads:
() -> None [closest match]
(iterable: Iterable[tuple[str, Parameter]], /) -> None
ERROR `AOTInput | AOTDescriptor` is not assignable to `AOTDescriptor` (caused by inconsistent types when breaking cycles) [bad-assignment]
--> torch/_functorch/_aot_autograd/subclass_utils.py:235:9
|
235 | / for attr in attrs:
236 | | inner_tensor = getattr(t, attr)
237 | | n_desc: Any = (
238 | | SubclassGetAttrAOTInput(desc, attr)
239 | | if isinstance(desc, AOTInput)
240 | | else SubclassGetAttrAOTOutput(desc, attr)
| |__________________________________________________________^
|
ERROR Argument `AOTDescriptor` is not assignable to parameter `base` with type `AOTOutput` in function `torch._functorch._aot_autograd.descriptors.SubclassGetAttrAOTOutput.__init__` [bad-argument-type]
--> torch/_functorch/_aot_autograd/subclass_utils.py:240:47
|
240 | else SubclassGetAttrAOTOutput(desc, attr)
| ^^^^
|
ERROR Argument `AOTDescriptor | Unknown` is not assignable to parameter `desc` with type `AOTDescriptor` in function `flatten_subclass` [bad-argument-type]
--> torch/_functorch/_aot_autograd/subclass_utils.py:260:50
|
260 | flatten_subclass(typing.cast(Tensor, x), desc, out=(xs_inner, descs_inner))
| ^^^^
|
ERROR Object of class `NoneType` has no attribute `attrs` [missing-attribute]
--> torch/_functorch/_aot_autograd/subclass_utils.py:284:26
|
284 | inner_meta = meta.attrs.get(attr)
| ^^^^^^^^^^
|
ERROR Argument `object` is not assignable to parameter `object` with type `SymInt | Tensor | int` in function `list.append` [bad-argument-type]
--> torch/_functorch/_aot_autograd/subclass_utils.py:313:29
|
313 | xs_inner.append(x)
| ^
|
ERROR Object of class `Sized` has no attribute `append` [missing-attribute]
--> torch/_functorch/_aot_autograd/utils.py:331:25
|
331 | output_token_nodes.append(out)
| ^^^^^^^^^^^^^^^^^^^^^^^^^
|
ERROR Expected *-unpacked _P.args and **-unpacked _P.kwargs [invalid-param-spec]
--> torch/_functorch/_aot_autograd/utils.py:532:17
|
532 | return f(*args, **kwargs)[0]
| ^^^^^^^^^^^^^^^^^
|
ERROR Returned type `_Wrapped[_P, tuple[_T, _S], [*args: Unknown, **kwargs: Unknown], Unknown]` is not assignable to declared return type `(ParamSpec(_P)) -> _T` [bad-return]
--> torch/_functorch/_aot_autograd/utils.py:534:12
|
534 | return inner
| ^^^^^
|
ERROR Class member `ApplyTemplate.forward` overrides parent class `Function` in an inconsistent manner [bad-override]
--> torch/_functorch/autograd_function.py:756:17
|
756 | def forward(ctx, *args):
| ^^^^^^^
|
`ApplyTemplate.forward` has type `(ctx: Unknown, *args: Unknown) -> Unknown`, which is not consistent with `(*args: Any, **kwargs: Any) -> Any` in `Function.forward` (the type of read-write attributes cannot be changed)
ERROR No matching overload found for function `dict.__init__` [no-matching-overload]
--> torch/_functorch/make_functional.py:45:24
|
45 | named_params = dict(named_params)
| ^^^^^^^^^^^^^^
|
Possible overloads:
() -> None [closest match]
(**kwargs: _VT) -> None
(map: SupportsKeysAndGetItem[tuple[str, Tensor], _VT], /) -> None
(map: SupportsKeysAndGetItem[str, _VT], /, **kwargs: _VT) -> None
(iterable: Iterable[tuple[tuple[str, Tensor], _VT]], /) -> None
(iterable: Iterable[tuple[str, _VT]], /, **kwargs: _VT) -> None
(iterable: Iterable[list[str]], /) -> None
(iterable: Iterable[list[bytes]], /) -> None
ERROR No matching overload found for function `dict.__init__` [no-matching-overload]
--> torch/_functorch/make_functional.py:46:29
|
46 | tied_named_params = dict(tied_named_params)
| ^^^^^^^^^^^^^^^^^^^
|
Possible overloads:
() -> None [closest match]
(**kwargs: _VT) -> None
(map: SupportsKeysAndGetItem[tuple[str, Tensor], _VT], /) -> None
(map: SupportsKeysAndGetItem[str, _VT], /, **kwargs: _VT) -> None
(iterable: Iterable[tuple[tuple[str, Tensor], _VT]], /) -> None
(iterable: Iterable[tuple[str, _VT]], /, **kwargs: _VT) -> None
(iterable: Iterable[list[str]], /) -> None
(iterable: Iterable[list[bytes]], /) -> None
ERROR Cannot set item in `dict[Tensor, tuple[str, list[str]]]` [unsupported-operation]
--> torch/_functorch/make_functional.py:54:37
|
54 | tensor_to_mapping[tensor] = (key, [])
| ^^^^^^^^^
|
Argument `tuple[tuple[str, Tensor], list[str]]` is not assignable to parameter `value` with type `tuple[str, list[str]]` in function `dict.__setitem__`
ERROR Argument `tuple[str, Tensor]` is not assignable to parameter `object` with type `str` in function `list.append` [bad-argument-type]
--> torch/_functorch/make_functional.py:57:45
|
57 | tensor_to_mapping[tensor][1].append(key)
| ^^^
|
ERROR `list[Node]` is not assignable to variable `insertable_nodes` with type `OrderedSet[Node]` [bad-assignment]
--> torch/_functorch/partitioners.py:1177:28
|
1177 | insertable_nodes = sorted(insertable_nodes, key=lambda n: order[n])
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
ERROR Argument `list[_HasMeta]` is not assignable to parameter `saved_sym_nodes` with type `list[Node]` in function `_extract_fwd_bwd_modules` [bad-argument-type]
--> torch/_functorch/partitioners.py:2852:25
|
2852 | saved_sym_nodes=saved_sym_nodes,
| ^^^^^^^^^^^^^^^
|
ERROR Class member `VmapInterpreter._cptr` overrides parent class `FuncTorchInterpreter` in an inconsistent manner [bad-override]
--> torch/_functorch/pyfunctorch.py:134:9
|
134 | def _cptr(self):
| ^^^^^
|
`VmapInterpreter._cptr` is a property, but `FuncTorchInterpreter._cptr` is not
ERROR Class member `GradInterpreter._cptr` overrides parent class `FuncTorchInterpreter` in an inconsistent manner [bad-override]
--> torch/_functorch/pyfunctorch.py:173:9
|
173 | def _cptr(self):
| ^^^^^
|
`GradInterpreter._cptr` is a property, but `FuncTorchInterpreter._cptr` is not
ERROR Class member `JvpInterpreter._cptr` overrides parent class `FuncTorchInterpreter` in an inconsistent manner [bad-override]
--> torch/_functorch/pyfunctorch.py:210:9
|
210 | def _cptr(self):
| ^^^^^
|
`JvpInterpreter._cptr` is a property, but `FuncTorchInterpreter._cptr` is not
ERROR Class member `FunctionalizeInterpreter._cptr` overrides parent class `FuncTorchInterpreter` in an inconsistent manner [bad-override]
--> torch/_functorch/pyfunctorch.py:246:9
|
246 | def _cptr(self):
| ^^^^^
|
`FunctionalizeInterpreter._cptr` is a property, but `FuncTorchInterpreter._cptr` is not
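All four `_cptr` reports are the same shape: a `property` in the subclass overriding a plain attribute or method in the base, which the checker treats as inconsistent. A minimal stand-in showing the consistent form, where the base accessor is a property too:

```python
# Sketch: a property may only override another descriptor, so the base
# declares `_cptr` as a property as well. Names are illustrative stubs.
class BaseInterpreter:
    @property
    def _cptr(self) -> int:
        raise NotImplementedError


class VmapLike(BaseInterpreter):
    @property
    def _cptr(self) -> int:
        return 0xDEAD  # stub pointer value for the sketch
```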
ERROR Argument `Unknown | None` is not assignable to parameter `fx_node` with type `Node` in function `_DeconstructedSymNode.__init__` [bad-argument-type]
--> torch/_subclasses/_fake_tensor_utils.py:36:65
|
36 | node._expr, node.pytype, node._hint, node.constant, node.fx_node
| ^^^^^^^^^^^^
|
ERROR Argument `object` is not assignable to parameter `elem` with type `Tensor` in function `FakeTensor.__new__` [bad-argument-type]
--> torch/_subclasses/fake_tensor.py:407:21
|
407 | make_meta_t(),
| ^^^^^^^^^^^^^
|
ERROR Argument `device | str` is not assignable to parameter `device` with type `device` in function `FakeTensor.__new__` [bad-argument-type]
--> torch/_subclasses/fake_tensor.py:408:21
|
408 | device,
| ^^^^^^
|
ERROR Class member `FakeTensor.device` overrides parent class `Tensor` in an inconsistent manner [bad-override]
--> torch/_subclasses/fake_tensor.py:682:9
|
682 | def device(self) -> torch.device:
| ^^^^^^
|
`FakeTensor.device` and `Tensor.device` must both be descriptors
ERROR Class member `FakeTensor.names` overrides parent class `Tensor` in an inconsistent manner [bad-override]
--> torch/_subclasses/fake_tensor.py:709:9
|
709 | def names(self) -> list[str]:
| ^^^^^
|
`FakeTensor.names` is a property, but `Tensor.names` is not
ERROR Attribute `fake_device` of class `FakeTensor` is a read-only descriptor with no `__set__` and cannot be set [read-only]
--> torch/_subclasses/fake_tensor.py:767:9
|
767 | self.fake_device = device
| ^^^^^^^^^^^^^^^^
|
ERROR Argument `HigherOrderOperator | OpOverload[Ellipsis, Any]` is not assignable to parameter `func` with type `OpOverload[Ellipsis, Any]` in function `FakeTensorMode._dispatch_impl` [bad-argument-type]
--> torch/_subclasses/fake_tensor.py:1496:40
|
1496 | return self._dispatch_impl(func, types, args, kwargs)
| ^^^^
|
ERROR Argument `HigherOrderOperator | OpOverload[Ellipsis, Any]` is not assignable to parameter `func` with type `OpOverload[Ellipsis, Any]` in function `FakeTensorMode._dispatch_impl` [bad-argument-type]
--> torch/_subclasses/fake_tensor.py:1513:44
|
1513 | return self._dispatch_impl(func, types, args, kwargs)
| ^^^^
|
ERROR Argument `HigherOrderOperator | OpOverload[Ellipsis, Any]` is not assignable to parameter `func` with type `OpOverload[Ellipsis, Any]` in function `FakeTensorMode._output_from_cache_entry` [bad-argument-type]
--> torch/_subclasses/fake_tensor.py:1516:71
|
1516 | output = self._output_from_cache_entry(state, entry, key, func, args)
| ^^^^
|
ERROR Argument `HigherOrderOperator | OpOverload[Ellipsis, Any]` is not assignable to parameter `func` with type `OpOverload[Ellipsis, Any]` in function `FakeTensorMode._crosscheck_cache_output` [bad-argument-type]
--> torch/_subclasses/fake_tensor.py:1522:59
|
1522 | self._crosscheck_cache_output(output, func, types, args, kwargs)
| ^^^^
|
ERROR Argument `HigherOrderOperator | OpOverload[Ellipsis, Any]` is not assignable to parameter `func` with type `OpOverload[Ellipsis, Any]` in function `FakeTensorMode._dispatch_impl` [bad-argument-type]
--> torch/_subclasses/fake_tensor.py:1526:38
|
1526 | output = self._dispatch_impl(func, types, args, kwargs)
| ^^^^
|
ERROR Argument `HigherOrderOperator | OpOverload[Ellipsis, Any]` is not assignable to parameter `func` with type `OpOverload[Ellipsis, Any]` in function `FakeTensorMode._validate_cache_key` [bad-argument-type]
--> torch/_subclasses/fake_tensor.py:1529:38
|
1529 | self._validate_cache_key(func, args, kwargs)
| ^^^^
|
ERROR Argument `HigherOrderOperator | OpOverload[Ellipsis, Any]` is not assignable to parameter `func` with type `OpOverload[Ellipsis, Any]` in function `FakeTensorMode._make_cache_entry` [bad-argument-type]
--> torch/_subclasses/fake_tensor.py:1548:56
|
1548 | entry = self._make_cache_entry(state, key, func, args, kwargs, output)
| ^^^^
|
ERROR Argument `int` is not assignable to parameter `object` with type `OpOverload[Ellipsis, Any] | ShapeEnvSettings | bool | dtype | str | None` in function `list.append` [bad-argument-type]
--> torch/_subclasses/fake_tensor.py:1590:31
|
1590 | key_values.append(self.epoch)
| ^^^^^^^^^^
|
ERROR Argument `list[OpOverload[Ellipsis, Any] | ShapeEnvSettings | bool | dtype | str | None]` is not assignable to parameter `result` with type `list[object]` in function `FakeTensorMode._prep_args_for_hash` [bad-argument-type]
--> torch/_subclasses/fake_tensor.py:1595:38
|
1595 | self._prep_args_for_hash(key_values, args, state, id_hashed_objects)
| ^^^^^^^^^^
|
ERROR Argument `list[OpOverload[Ellipsis, Any] | ShapeEnvSettings | bool | dtype | str | None]` is not assignable to parameter `result` with type `list[object]` in function `FakeTensorMode._prep_args_for_hash` [bad-argument-type]
--> torch/_subclasses/fake_tensor.py:1597:38
|
1597 | self._prep_args_for_hash(key_values, kwargs, state, id_hashed_objects)
| ^^^^^^^^^^
|
ERROR Argument `HigherOrderOperator | OpOverload[Ellipsis, Any]` is not assignable to parameter `func` with type `OpOverload[Ellipsis, Any]` in function `FakeTensorMode._validate_output_for_cache_entry` [bad-argument-type]
--> torch/_subclasses/fake_tensor.py:1912:33
|
1912 | state, key, func, args, kwargs, out_element
| ^^^^
|
ERROR Argument `HigherOrderOperator | OpOverload[Ellipsis, Any]` is not assignable to parameter `func` with type `OpOverload[Ellipsis, Any]` in function `FakeTensorMode._validate_output_for_cache_entry` [bad-argument-type]
--> torch/_subclasses/fake_tensor.py:1916:29
|
1916 | state, key, func, args, kwargs, output
| ^^^^
|
ERROR Argument `HigherOrderOperator | OpOverload[Ellipsis, Any]` is not assignable to parameter `func` with type `OpOverload[Ellipsis, Any]` in function `FakeTensorMode._get_output_info_for_cache_entry` [bad-argument-type]
--> torch/_subclasses/fake_tensor.py:1922:33
|
1922 | state, key, func, args, kwargs, out_elem
| ^^^^
|
ERROR Argument `tuple[_DispatchCacheEntryOutputInfo, ...]` is not assignable to parameter `output_infos` with type `tuple[_DispatchCacheEntryOutputInfo]` in function `_DispatchCacheValidEntry.__init__` [bad-argument-type]
--> torch/_subclasses/fake_tensor.py:1927:30
|
1927 | output_infos=tuple(output_infos), is_output_tuple=True
| ^^^^^^^^^^^^^^^^^^^
|
ERROR Argument `HigherOrderOperator | OpOverload[Ellipsis, Any]` is not assignable to parameter `func` with type `OpOverload[Ellipsis, Any]` in function `FakeTensorMode._get_output_info_for_cache_entry` [bad-argument-type]
--> torch/_subclasses/fake_tensor.py:1932:29
|
1932 | state, key, func, args, kwargs, output
| ^^^^
|
ERROR Cannot index into `dict[OpOverload[Ellipsis, Any], (...) -> Unknown]` [index-error]
--> torch/_subclasses/fake_tensor.py:2475:48
|
2475 | return registered_hop_fake_fns[func](*args, **kwargs)
| ^^^^
|
Argument `HigherOrderOperator` is not assignable to parameter `key` with type `OpOverload[Ellipsis, Any]` in function `dict.__getitem__`
ERROR Returned type `Tensor | T | Unknown` is not assignable to declared return type `T` [bad-return]
--> torch/_subclasses/fake_tensor.py:2628:20
|
2628 | return fake_out
| ^^^^^^^^
|
ERROR Returned type `Tensor` is not assignable to declared return type `FakeTensor | T` [bad-return]
--> torch/_subclasses/fake_tensor.py:2909:24
|
2909 | return e
| ^
|
ERROR Returned type `MetaTensorDesc[Unknown] | Tensor | None` is not assignable to declared return type `_TensorLikeT | None` [bad-return]
--> torch/_subclasses/meta_utils.py:84:16
|
84 | return t.grad
| ^^^^^^
|
ERROR Argument `SymInt | int` is not assignable to parameter `storage_offset` with type `int` in function `MetaTensorDesc.__init__` [bad-argument-type]
--> torch/_subclasses/meta_utils.py:418:28
|
418 | storage_offset=storage_offset,
| ^^^^^^^^^^^^^^
|
ERROR Argument `Tensor` is not assignable to parameter `self` with type `FakeTensor` in function `torch._C.TensorBase._view_func_unsafe` [bad-argument-type]
--> torch/_subclasses/meta_utils.py:542:13
|
542 | t, new_base, symint_visitor_fn, tensor_visitor_fn
| ^
|
ERROR `tuple[int, ...] | None` is not assignable to variable `outer_stride` with type `tuple[int, ...]` [bad-assignment]
--> torch/_subclasses/meta_utils.py:1016:28
|
1016 | outer_stride = outer_stride if outer_stride is not None else t.stride
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
ERROR Returned type `None` is not assignable to declared return type `Tensor` [bad-return]
--> torch/_subclasses/meta_utils.py:1272:32
|
1272 | return None
| ^^^^
|
ERROR Argument `FakeTensor | Unknown` is not assignable to parameter `t` with type `_TensorT` in function `MetaConverter._backward_error` [bad-argument-type]
--> torch/_subclasses/meta_utils.py:1402:50
|
1402 | r = self._backward_error(r)
| ^
|
ERROR Argument `FakeTensor | Unknown` is not assignable to parameter `t` with type `_TensorT` in function `MetaConverter._backward_error` [bad-argument-type]
--> torch/_subclasses/meta_utils.py:1440:50
|
1440 | r = self._backward_error(r)
| ^
|
ERROR Returned type `_TensorT | Unknown` is not assignable to declared return type `_TensorT` [bad-return]
--> torch/_subclasses/meta_utils.py:1536:32
|
1536 | return r
| ^
|
ERROR Returned type `_NotImplementedType` is not assignable to declared return type `_TensorT` [bad-return]
--> torch/_subclasses/meta_utils.py:1685:32
|
1685 | return NotImplemented
| ^^^^^^^^^^^^^^
|
ERROR Argument `FakeTensor | _TensorT | Unknown` is not assignable to parameter `t` with type `_TensorT` in function `MetaConverter._backward_error` [bad-argument-type]
--> torch/_subclasses/meta_utils.py:1731:54
|
1731 | ... r = self._backward_error(r)
| ^
|
ERROR Argument `FakeTensor | _TensorT | Unknown` is not assignable to parameter `v` with type `_TensorT` in function `MetaConverter.set_tensor_memo` [bad-argument-type]
--> torch/_subclasses/meta_utils.py:1842:37
|
1842 | self.set_tensor_memo(t, r)
| ^
|
ERROR Returned type `_NotImplementedType` is not assignable to declared return type `_TensorT` [bad-return]
--> torch/_subclasses/meta_utils.py:1885:24
|
1885 | return NotImplemented
| ^^^^^^^^^^^^^^
|
ERROR Returned type `_NotImplementedType` is not assignable to declared return type `_TensorT` [bad-return]
--> torch/_subclasses/meta_utils.py:1890:20
|
1890 | return NotImplemented
| ^^^^^^^^^^^^^^
|
ERROR No matching overload found for function `zip.__new__` [no-matching-overload]
--> torch/autograd/__init__.py:95:25
|
95 | for out, grad in zip(outputs, grads):
| ^^^^^^^^^^^^^^^^
|
Possible overloads:
(cls: type[zip[_T_co]], *, strict: bool = ...) -> zip[Any] [closest match]
(cls: type[zip[_T_co]], iter1: Iterable[_T1], /, *, strict: bool = ...) -> zip[tuple[_T1]]
(cls: type[zip[_T_co]], iter1: Iterable[_T1], iter2: Iterable[_T2], /, *, strict: bool = ...) -> zip[tuple[_T1, _T2]]
(cls: type[zip[_T_co]], iter1: Iterable[_T1], iter2: Iterable[_T2], iter3: Iterable[_T3], /, *, strict: bool = ...) -> zip[tuple[_T1, _T2, _T3]]
(cls: type[zip[_T_co]], iter1: Iterable[_T1], iter2: Iterable[_T2], iter3: Iterable[_T3], iter4: Iterable[_T4], /, *, strict: bool = ...) -> zip[tuple[_T1, _T2, _T3, _T4]]
(cls: type[zip[_T_co]], iter1: Iterable[_T1], iter2: Iterable[_T2], iter3: Iterable[_T3], iter4: Iterable[_T4], iter5: Iterable[_T5], /, *, strict: bool = ...) -> zip[tuple[_T1, _T2, _T3, _T4, _T5]]
(cls: type[zip[_T_co]], iter1: Iterable[Any], iter2: Iterable[Any], iter3: Iterable[Any], iter4: Iterable[Any], iter5: Iterable[Any], iter6: Iterable[Any], /, *iterables: Iterable[Any], *, strict: bool = ...) -> zip[tuple[Any, ...]]
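The `zip` overload failure above happens because one argument's type union (`Sequence[...] | Tensor`) is not known to be iterable in every branch. A hedged sketch of normalizing "single item or sequence" before zipping:

```python
# Sketch: normalize a scalar-or-sequence union to a sequence so that
# zip() type-checks. `as_seq` is an illustrative helper.
from typing import Sequence, TypeVar, Union

T = TypeVar("T")


def as_seq(x: Union[T, Sequence[T]]) -> Sequence[T]:
    # wrap anything that is not already a list/tuple in a 1-tuple
    if isinstance(x, (list, tuple)):
        return x
    return (x,)
```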
ERROR Argument `Sequence[GradientEdge] | Sequence[Tensor] | Tensor` is not assignable to parameter `iterable` with type `Iterable[GradientEdge]` in function `tuple.__new__` [bad-argument-type]
--> torch/autograd/__init__.py:344:25
|
344 | tensors = tuple(tensors)
| ^^^^^^^
|
ERROR Argument `Sequence[GradientEdge] | Sequence[Tensor] | Tensor` is not assignable to parameter `iterable` with type `Iterable[GradientEdge]` in function `tuple.__new__` [bad-argument-type]
--> torch/autograd/__init__.py:443:25
|
443 | outputs = tuple(outputs)
| ^^^^^^^
|
ERROR Argument `Sequence[GradientEdge] | Sequence[Tensor] | Tensor` is not assignable to parameter `iterable` with type `Iterable[GradientEdge]` in function `tuple.__new__` [bad-argument-type]
--> torch/autograd/__init__.py:447:24
|
447 | inputs = tuple(inputs)
| ^^^^^^
|
ERROR Class member `Type.forward` overrides parent class `Function` in an inconsistent manner [bad-override]
--> torch/autograd/_functions/tensor.py:18:9
|
18 | def forward(ctx, i, dest_type):
| ^^^^^^^
|
`Type.forward` has type `(ctx: Unknown, i: Unknown, dest_type: Unknown) -> Unknown`, which is not consistent with `(*args: Any, **kwargs: Any) -> Any` in `Function.forward` (the type of read-write attributes cannot be changed)
ERROR Class member `Type.backward` overrides parent class `Function` in an inconsistent manner [bad-override]
--> torch/autograd/_functions/tensor.py:24:9
|
24 | def backward(ctx, grad_output):
| ^^^^^^^^
|
`Type.backward` has type `(ctx: Unknown, grad_output: Unknown) -> Unknown`, which is not consistent with `(ctx: Any, *grad_outputs: Any) -> Any` in `Function.backward` (the type of read-write attributes cannot be changed)
ERROR Class member `Resize.forward` overrides parent class `Function` in an inconsistent manner [bad-override]
--> torch/autograd/_functions/tensor.py:35:9
|
35 | def forward(ctx, tensor, sizes):
| ^^^^^^^
|
`Resize.forward` has type `(ctx: Unknown, tensor: Unknown, sizes: Unknown) -> Unknown`, which is not consistent with `(*args: Any, **kwargs: Any) -> Any` in `Function.forward` (the type of read-write attributes cannot be changed)
ERROR Class member `Resize.backward` overrides parent class `Function` in an inconsistent manner [bad-override]
--> torch/autograd/_functions/tensor.py:63:9
|
63 | def backward(ctx, grad_output):
| ^^^^^^^^
|
`Resize.backward` has type `(ctx: Unknown, grad_output: Unknown) -> Unknown`, which is not consistent with `(ctx: Any, *grad_outputs: Any) -> Any` in `Function.backward` (the type of read-write attributes cannot be changed)
WARN `vmap` is deprecated [deprecated]
--> torch/autograd/gradcheck.py:12:42
|
12 | from torch._vmap_internals import _vmap, vmap
| ----
|
ERROR Argument `Node | _FunctionBase` is not assignable to parameter `node` with type `Node` in function `GradientEdge.__new__` [bad-argument-type]
--> torch/autograd/graph.py:232:25
|
232 | return GradientEdge(grad_fn, tensor.output_nr, ownership_token=token)
| ^^^^^^^
|
ERROR Cannot set item in `dict[int, list[Tensor | None]]` [unsupported-operation]
--> torch/autograd/graph.py:534:30
|
534 | buffer[id] = buffer.get(id, [None] * len_tensors)
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
Argument `list[Tensor | None] | list[None]` is not assignable to parameter `value` with type `list[Tensor | None]` in function `dict.__setitem__`
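This `unsupported-operation` is a list-invariance issue: the literal `[None] * n` infers as `list[None]`, which is not a `list[Tensor | None]`. Annotating the fresh list fixes it; a minimal stand-in with `str` in place of `Tensor`:

```python
# Sketch: an explicit annotation on the fresh list gives it the wider
# element type up front. `get_buffer` is a hypothetical helper.
from typing import Optional


def get_buffer(
    buffer: dict[int, list[Optional[str]]], key: int, n: int
) -> list[Optional[str]]:
    # without the annotation, [None] * n would infer as list[None]
    fresh: list[Optional[str]] = [None] * n
    return buffer.setdefault(key, fresh)
```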
ERROR Invalid base class: `ContextDecorator | _ContextDecorator` [invalid-inheritance]
--> torch/autograd/profiler.py:734:23
|
734 | class record_function(_ContextDecorator):
| ^^^^^^^^^^^^^^^^^
|
ERROR Expected a type form, got instance of `Literal['torch.classes.profiler._RecordFunction']` [not-a-type]
--> torch/autograd/profiler.py:781:22
|
781 | Optional["torch.classes.profiler._RecordFunction"], None
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
ERROR `EventList` is not assignable to attribute `function_events` with type `None` [bad-assignment]
--> torch/autograd/profiler_legacy.py:104:32
|
104 | self.function_events = EventList(
| ________________________________^
105 | | parsed_results,
106 | | use_device="cuda" if self.use_cuda else None,
107 | | profile_memory=self.profile_memory,
108 | | with_flops=self.with_flops,
109 | | )
| |_________^
|
ERROR Object of class `NoneType` has no attribute `_build_tree` [missing-attribute]
--> torch/autograd/profiler_legacy.py:110:9
|
110 | self.function_events._build_tree()
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
ERROR `Sized | Self@EventList` is not assignable to `Self@EventList` (caused by inconsistent types when breaking cycles) [bad-assignment]
--> torch/autograd/profiler_util.py:51:13
|
51 | / for idx in range(len(self)):
52 | | if (
53 | | self[idx].cpu_parent is not None
54 | | and self[idx].cpu_parent.name == self[idx].name
55 | | and len(self[idx].cpu_parent.cpu_children) == 1
56 | | ):
| |___________________^
|
ERROR Cannot index into `Sized` [index-error]
--> torch/autograd/profiler_util.py:53:21
|
53 | self[idx].cpu_parent is not None
| ^^^^^^^^^
|
Object of class `Sized` has no attribute `__getitem__`
ERROR Cannot index into `Sized` [index-error]
--> torch/autograd/profiler_util.py:54:25
|
54 | and self[idx].cpu_parent.name == self[idx].name
| ^^^^^^^^^
|
Object of class `Sized` has no attribute `__getitem__`
ERROR Cannot index into `Sized` [index-error]
--> torch/autograd/profiler_util.py:54:54
|
54 | and self[idx].cpu_parent.name == self[idx].name
| ^^^^^^^^^
|
Object of class `Sized` has no attribute `__getitem__`
ERROR Cannot index into `Sized` [index-error]
--> torch/autograd/profiler_util.py:55:29
|
55 | and len(self[idx].cpu_parent.cpu_children) == 1
| ^^^^^^^^^
|
Object of class `Sized` has no attribute `__getitem__`
ERROR Argument `Sized | Self@EventList` is not assignable to parameter `iterable` with type `Iterable[@_]` in function `enumerate.__new__` [bad-argument-type]
--> torch/autograd/profiler_util.py:64:53
|
64 | new_evts = [ev for ind, ev in enumerate(self) if ind not in to_delete]
| ^^^^
|
ERROR Object of class `Sized` has no attribute `clear` [missing-attribute]
--> torch/autograd/profiler_util.py:65:13
|
65 | self.clear()
| ^^^^^^^^^^
|
ERROR Object of class `Sized` has no attribute `extend` [missing-attribute]
--> torch/autograd/profiler_util.py:66:13
|
66 | self.extend(new_evts)
| ^^^^^^^^^^^
|
ERROR `Unknown | None` is not assignable to attribute `overload_name` with type `str` [bad-assignment]
--> torch/autograd/profiler_util.py:499:35
|
499 | self.overload_name: str = overload_name
| ^^^^^^^^^^^^^
|
ERROR `Unknown | None` is not assignable to attribute `trace_name` with type `str` [bad-assignment]
--> torch/autograd/profiler_util.py:500:32
|
500 | self.trace_name: str = trace_name
| ^^^^^^^^^^
|
ERROR `Unknown | None` is not assignable to attribute `input_shapes` with type `tuple[int, ...]` [bad-assignment]
--> torch/autograd/profiler_util.py:508:46
|
508 | self.input_shapes: tuple[int, ...] = input_shapes
| ^^^^^^^^^^^^
|
ERROR `Unknown | None` is not assignable to attribute `concrete_inputs` with type `list[Any]` [bad-assignment]
--> torch/autograd/profiler_util.py:509:43
|
509 | self.concrete_inputs: list[Any] = concrete_inputs
| ^^^^^^^^^^^^^^^
|
ERROR `Unknown | None` is not assignable to attribute `kwinputs` with type `dict[str, Any]` [bad-assignment]
--> torch/autograd/profiler_util.py:510:41
|
510 | self.kwinputs: dict[str, Any] = kwinputs
| ^^^^^^^^
|
ERROR `Unknown | None` is not assignable to attribute `stack` with type `list[Unknown]` [bad-assignment]
--> torch/autograd/profiler_util.py:511:28
|
511 | self.stack: list = stack
| ^^^^^
|
ERROR `int | None` is not assignable to attribute `flops` with type `Never` [bad-assignment]
--> torch/autograd/profiler_util.py:735:26
|
735 | self.flops = other.flops
| ^^^^^^^^^^^
|
ERROR No matching overload found for function `max` [no-matching-overload]
--> torch/autograd/profiler_util.py:970:24
|
970 | log_flops = max(0, min(math.log10(flops) / 3, float(len(flop_headers) - 1)))
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
Possible overloads:
(arg1: SupportsRichComparisonT, arg2: SupportsRichComparisonT, /, *_args: SupportsRichComparisonT, *, key: None = None) -> SupportsRichComparisonT [closest match]
(arg1: _T, arg2: _T, /, *_args: _T, *, key: (_T) -> SupportsDunderGT[Any] | SupportsDunderLT[Any]) -> _T
(iterable: Iterable[SupportsRichComparisonT], /, *, key: None = None) -> SupportsRichComparisonT
(iterable: Iterable[_T], /, *, key: (_T) -> SupportsDunderGT[Any] | SupportsDunderLT[Any]) -> _T
(iterable: Iterable[SupportsRichComparisonT], /, *, key: None = None, default: _T) -> SupportsRichComparisonT | _T
(iterable: Iterable[_T1], /, *, key: (_T1) -> SupportsDunderGT[Any] | SupportsDunderLT[Any], default: _T2) -> _T1 | _T2
ERROR Object of class `NoneType` has no attribute `cudaGetErrorString` [missing-attribute]
--> torch/cuda/__init__.py:499:15
|
499 | msg = _cudart.cudaGetErrorString(_cudart.cudaError(code))
| ^^^^^^^^^^^^^^^^^^^^^^^^^^
|
ERROR Object of class `NoneType` has no attribute `cudaError` [missing-attribute]
--> torch/cuda/__init__.py:499:42
|
499 | msg = _cudart.cudaGetErrorString(_cudart.cudaError(code))
| ^^^^^^^^^^^^^^^^^
|
ERROR Object of class `NoneType` has no attribute `cudaError` [missing-attribute]
--> torch/cuda/__init__.py:505:15
|
505 | if res != _cudart.cudaError.success:
| ^^^^^^^^^^^^^^^^^
|
ERROR Expected a type form, got instance of `type[_CudaDeviceProperties] | type` [not-a-type]
--> torch/cuda/__init__.py:604:55
|
604 | def get_device_properties(device: "Device" = None) -> _CudaDeviceProperties:
| ^^^^^^^^^^^^^^^^^^^^^
|
ERROR `Literal[-1]` is not assignable to attribute `idx` with type `Never` [bad-assignment]
--> torch/cuda/__init__.py:654:28
|
654 | self.idx = -1
| ^^
|
ERROR Argument `list[int] | list[str]` is not assignable to parameter `iterable` with type `Iterable[int]` in function `enumerate.__new__` [bad-argument-type]
--> torch/cuda/__init__.py:956:39
|
956 | for idx, val in enumerate(visible_devices):
| ^^^^^^^^^^^^^^^
|
WARN Redundant cast: `int` is the same type as `int` [redundant-cast]
--> torch/cuda/__init__.py:957:24
|
957 | if cast(int, val) >= raw_cnt:
| ----------
|
ERROR Argument `list[int] | list[str]` is not assignable to parameter `iterable` with type `Iterable[int]` in function `enumerate.__new__` [bad-argument-type]
--> torch/cuda/__init__.py:990:39
|
990 | for idx, val in enumerate(visible_devices):
| ^^^^^^^^^^^^^^^
|
WARN Redundant cast: `int` is the same type as `int` [redundant-cast]
--> torch/cuda/__init__.py:991:24
|
991 | if cast(int, val) >= raw_cnt:
| ----------
|
ERROR Expression `_PYNVML_ERR` has type `ImportError | None` which does not derive from BaseException [invalid-inheritance]
--> torch/cuda/__init__.py:1206:16
|
1206 | ) from _PYNVML_ERR
| ^^^^^^^^^^^
|
ERROR Could not find import of `pynvml` [import-error]
--> torch/cuda/__init__.py:1207:5
|
1207 | from pynvml import NVMLError_DriverNotLoaded
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
Looked in these locations (from config in `/Users/maggiemoss/python_projects/pytorch/pyrefly.toml`):
Import root (inferred from project layout): "/Users/maggiemoss/python_projects/pytorch"
Site package path queried from interpreter: ["/Users/maggiemoss/.pyenv/versions/3.12.0/lib/python3.12", "/Users/maggiemoss/.pyenv/versions/3.12.0/lib/python3.12/lib-dynload", "/Users/maggiemoss/python_projects/pytorch/venv/lib/python3.12/site-packages", "/Users/maggiemoss/python_projects/pytorch"]
ERROR Expression `_PYNVML_ERR` has type `ImportError | None` which does not derive from BaseException [invalid-inheritance]
--> torch/cuda/__init__.py:1223:16
|
1223 | ) from _PYNVML_ERR
| ^^^^^^^^^^^
|
WARN `memory_cached` is deprecated [deprecated]
--> torch/cuda/__init__.py:1486:21
|
1486 | from .memory import * # noqa: F403
| -
|
WARN `max_memory_cached` is deprecated [deprecated]
--> torch/cuda/__init__.py:1486:21
|
1486 | from .memory import * # noqa: F403
| -
|
ERROR Expected a callable, got `None` [not-callable]
--> torch/cuda/__init__.py:1702:16
|
1702 | return bsr_dense_mm(*args, skip_checks=True, **kwargs)
| ^^^^^^^^^^^^
|
ERROR Could not import `_get_gpu_runtime_library` from `torch.cuda._utils` [missing-module-attribute]
--> torch/cuda/_utils.py:282:39
|
282 | from torch.cuda._utils import _get_gpu_runtime_library
| ^^^^^^^^^^^^^^^^^^^^^^^^
|
WARN `autocast` is deprecated [deprecated]
--> torch/cuda/amp/__init__.py:1:28
|
1 | from .autocast_mode import autocast, custom_bwd, custom_fwd
| --------
|
WARN `custom_bwd` is deprecated [deprecated]
--> torch/cuda/amp/__init__.py:1:38
|
1 | from .autocast_mode import autocast, custom_bwd, custom_fwd
| ----------
|
WARN `custom_fwd` is deprecated [deprecated]
--> torch/cuda/amp/__init__.py:1:50
|
1 | from .autocast_mode import autocast, custom_bwd, custom_fwd
| ----------
|
ERROR Multiple values for argument `capture_error_mode` in function `CUDAGraph.capture_begin` [bad-keyword-argument]
--> torch/cuda/graphs.py:262:13
|
262 | capture_error_mode=self.capture_error_mode,
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
ERROR Class member `Graphed.forward` overrides parent class `Function` in an inconsistent manner [bad-override]
--> torch/cuda/graphs.py:527:17
|
527 | def forward(ctx: object, *inputs: Tensor) -> tuple[Tensor, ...]:
| ^^^^^^^
|
`Graphed.forward` has type `(ctx: object, *inputs: Tensor) -> tuple[Tensor, ...]`, which is not consistent with `(*args: Any, **kwargs: Any) -> Any` in `Function.forward` (the type of read-write attributes cannot be changed)
ERROR Class member `Graphed.backward` overrides parent class `Function` in an inconsistent manner [bad-override]
--> torch/cuda/graphs.py:538:17
|
538 | def backward(ctx: object, *grads: Tensor) -> tuple[Tensor, ...]:
| ^^^^^^^^
|
`Graphed.backward` has type `(object, *grads: Tensor) -> tuple[Tensor, ...]`, which is not consistent with `(ctx: Any, *grad_outputs: Any) -> Any` in `Function.backward` (the type of read-write attributes cannot be changed)
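The `bad-override` pairs on `forward`/`backward` (here and in the `torch/masked` hits further down) are all the same Liskov complaint: the parent declares `(*args: Any, **kwargs: Any) -> Any`, and narrowing that to a concrete signature changes what callers of the base type may pass. A torch-free sketch of the shape the checker objects to:

```python
from typing import Any


class Function:
    # Stand-in for torch.autograd.Function's loosely-typed hook.
    @staticmethod
    def forward(*args: Any, **kwargs: Any) -> Any:
        raise NotImplementedError


class Graphed(Function):
    # Narrowing `(*args, **kwargs) -> Any` to a fixed signature is an
    # inconsistent override: code holding a `type[Function]` may call
    # `forward` with keywords this version cannot accept.
    @staticmethod
    def forward(ctx: object, *inputs: int) -> tuple[int, ...]:
        return inputs


print(Graphed.forward(None, 1, 2))  # (1, 2)
```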
ERROR Argument `Generator[Unknown | None, None, None]` is not assignable to parameter `iterable` with type `Iterable[Tensor]` in function `tuple.__new__` [bad-argument-type]
--> torch/cuda/graphs.py:551:21
|
551 | b.detach() if b is not None else b for b in static_grad_inputs
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
ERROR Could not find import of `pynvml` [import-error]
--> torch/cuda/memory.py:773:9
|
773 | from pynvml import NVMLError_DriverNotLoaded
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
Looked in these locations (from config in `/Users/maggiemoss/python_projects/pytorch/pyrefly.toml`):
Import root (inferred from project layout): "/Users/maggiemoss/python_projects/pytorch"
Site package path queried from interpreter: ["/Users/maggiemoss/.pyenv/versions/3.12.0/lib/python3.12", "/Users/maggiemoss/.pyenv/versions/3.12.0/lib/python3.12/lib-dynload", "/Users/maggiemoss/python_projects/pytorch/venv/lib/python3.12/site-packages", "/Users/maggiemoss/python_projects/pytorch"]
ERROR Argument `int | Unknown` is not assignable to parameter `record_context_cpp` with type `bool` in function `torch._C._cuda_record_memory_history_legacy` [bad-argument-type]
--> torch/cuda/memory.py:855:9
|
855 | trace_alloc_max_entries,
| ^^^^^^^^^^^^^^^^^^^^^^^
|
ERROR Object of class `_NVTXStub` has no attribute `rangeStartA` [missing-attribute]
--> torch/cuda/nvtx.py:56:12
|
56 | return _nvtx.rangeStartA(msg)
| ^^^^^^^^^^^^^^^^^
|
ERROR Object of class `_NVTXStub` has no attribute `rangeEnd` [missing-attribute]
--> torch/cuda/nvtx.py:66:5
|
66 | _nvtx.rangeEnd(range_id)
| ^^^^^^^^^^^^^^
|
ERROR Object of class `_NVTXStub` has no attribute `deviceRangeStart` [missing-attribute]
--> torch/cuda/nvtx.py:86:12
|
86 | return _nvtx.deviceRangeStart(msg, stream)
| ^^^^^^^^^^^^^^^^^^^^^^
|
ERROR Object of class `_NVTXStub` has no attribute `deviceRangeEnd` [missing-attribute]
--> torch/cuda/nvtx.py:98:5
|
98 | _nvtx.deviceRangeEnd(range_handle, stream)
| ^^^^^^^^^^^^^^^^^^^^
|
ERROR No matching overload found for function `os.fspath` [no-matching-overload]
--> torch/export/__init__.py:439:22
|
439 | f = os.fspath(f)
| ^^^
|
Possible overloads:
(path: str) -> str [closest match]
(path: bytes) -> bytes
(path: PathLike[AnyStr]) -> AnyStr
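This `os.fspath` failure (and the matching ones in `pt2_archive/_package.py` below) arise because `f` is a union that also contains non-path members, so no single overload matches. Narrowing first lets each branch hit one overload; a hypothetical reduction:

```python
import os
from typing import IO


def normalize(f: "str | os.PathLike[str] | IO[bytes]") -> "str | IO[bytes]":
    # `os.fspath` is overloaded per input type (str -> str, bytes -> bytes,
    # PathLike[AnyStr] -> AnyStr); handing it the whole union fails
    # overload resolution.  After the isinstance check, only path-like
    # values reach `os.fspath`.
    if isinstance(f, (str, os.PathLike)):
        return os.fspath(f)
    return f


print(normalize("model.pt2"))  # model.pt2
```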
ERROR `Literal[True]` is not assignable to attribute `GET_DTRACE_STRUCTURED` with type `Literal[False]` [bad-assignment]
--> torch/export/_draft_export.py:298:58
|
298 | torch._logging._internal.GET_DTRACE_STRUCTURED = True
| ^^^^
|
ERROR `bool` is not assignable to attribute `GET_DTRACE_STRUCTURED` with type `Literal[False]` [bad-assignment]
--> torch/export/_draft_export.py:305:58
|
305 | torch._logging._internal.GET_DTRACE_STRUCTURED = self.prev_get_dtrace
| ^^^^^^^^^^^^^^^^^^^^
|
ERROR Object of class `str` has no attribute `op` [missing-attribute]
--> torch/export/_swap.py:110:13
|
110 | arg.op == "call_function"
| ^^^^^^
|
ERROR Object of class `str` has no attribute `target` [missing-attribute]
--> torch/export/_swap.py:111:17
|
111 | and arg.target == operator.getitem
| ^^^^^^^^^^
|
ERROR Object of class `str` has no attribute `args` [missing-attribute]
--> torch/export/_swap.py:112:17
|
112 | and arg.args[1] == i
| ^^^^^^^^
|
ERROR `Literal[False]` is not assignable to attribute `decompose_custom_triton_ops` with type `Literal[True]` [bad-assignment]
--> torch/export/_trace.py:188:63
|
188 | torch._functorch.config.decompose_custom_triton_ops = False
| ^^^^^
|
ERROR Argument `str` is not assignable to parameter `object` with type `LiteralString` in function `list.append` [bad-argument-type]
--> torch/export/_trace.py:368:46
|
368 | ... parts.append(str(idx))
| ^^^^^^^^
|
ERROR Argument `Sized | str` is not assignable to parameter `prefix` with type `str | tuple[str, ...]` in function `str.startswith` [bad-argument-type]
--> torch/export/_trace.py:663:41
|
663 | if spec.arg.name.startswith(buffer_prefix): # map from buffer to constants
| ^^^^^^^^^^^^^
|
ERROR Expected *-unpacked _InputT.args and **-unpacked _InputT.kwargs [invalid-param-spec]
--> torch/export/experimental/__init__.py:296:43
|
296 | dynamic_shapes = v(*args, **kwargs)
| ^^^^^^^^^^^^^^^^^
|
ERROR Returned type `OptimizedModule | _Wrapped[_InputT, _RetT, [*args: Unknown, **kwargs: Unknown], Unknown]` is not assignable to declared return type `(ParamSpec(_InputT)) -> _RetT` [bad-return]
--> torch/export/experimental/__init__.py:343:16
|
343 | return _exporter_context
| ^^^^^^^^^^^^^^^^^
|
ERROR Cannot set item in `list[Weights | str]` [unsupported-operation]
--> torch/export/experimental/__init__.py:379:13
|
379 | aoti_files_map[name] = aoti_files
| ^^^^^^^^^^^^^^^^^^^^
|
No matching overload found for function `list.__setitem__`
Possible overloads:
(key: SupportsIndex, value: Weights | str, /) -> None [closest match]
(key: slice[Any, Any, Any], value: Iterable[Weights | str], /) -> None
ERROR Cannot set item in `dict[str, list[Weights | str]]` [unsupported-operation]
--> torch/export/experimental/__init__.py:379:36
|
379 | aoti_files_map[name] = aoti_files
| ^^^^^^^^^^
|
Argument `GraphModule | list[Weights | str] | str` is not assignable to parameter `value` with type `list[Weights | str]` in function `dict.__setitem__`
ERROR Object of class `NoneType` has no attribute `modified` [missing-attribute]
--> torch/export/exported_program.py:1503:56
|
1503 | if transformed_gm is self.graph_module and not res.modified:
| ^^^^^^^^^^^^
|
ERROR Object of class `NoneType` has no attribute `graph_module` [missing-attribute]
--> torch/export/exported_program.py:1581:49
|
1581 | transformed_ep.graph_module.meta.update(res.graph_module.meta)
| ^^^^^^^^^^^^^^^^
|
ERROR Cannot set item in `list[device]` [unsupported-operation]
--> torch/export/passes/__init__.py:84:21
|
84 | args[1] = _get_new_device(args[1], location)
| ^^^^^^^
|
No matching overload found for function `list.__setitem__`
Possible overloads:
(key: SupportsIndex, value: device, /) -> None [closest match]
(key: slice[Any, Any, Any], value: Iterable[device], /) -> None
ERROR No matching overload found for function `posixpath.relpath` [no-matching-overload]
--> torch/export/pt2_archive/_package.py:175:39
|
175 | filename = os.path.relpath(file_path, folder_dir)
| ^^^^^^^^^^^^^^^^^^^^^^^
|
Possible overloads:
(path: LiteralString, start: LiteralString | None = None) -> LiteralString
(path: PathLike[bytes] | bytes, start: PathLike[bytes] | bytes | None = None) -> bytes
(path: PathLike[str] | str, start: PathLike[str] | str | None = None) -> str [closest match]
ERROR Argument `PathLike[bytes] | PathLike[str] | bytes | int | str` is not assignable to parameter `file_path` with type `str` in function `PT2ArchiveWriter.write_file` [bad-argument-type]
--> torch/export/pt2_archive/_package.py:177:43
|
177 | self.write_file(archive_path, file_path)
| ^^^^^^^^^
|
ERROR No matching overload found for function `os.fspath` [no-matching-overload]
--> torch/export/pt2_archive/_package.py:596:60
|
596 | or (isinstance(f, (str, os.PathLike)) and os.fspath(f).endswith(".pt2"))
| ^^^
|
Possible overloads:
(path: str) -> str [closest match]
(path: bytes) -> bytes
(path: PathLike[AnyStr]) -> AnyStr
ERROR No matching overload found for function `os.fspath` [no-matching-overload]
--> torch/export/pt2_archive/_package.py:607:22
|
607 | f = os.fspath(f)
| ^^^
|
Possible overloads:
(path: str) -> str [closest match]
(path: bytes) -> bytes
(path: PathLike[AnyStr]) -> AnyStr
ERROR Argument `IO[bytes] | IOBase | _TemporaryFileWrapper[bytes] | Unknown` is not assignable to parameter `archive_path_or_buffer` with type `IO[bytes] | PathLike[str] | str` in function `PT2ArchiveWriter.__init__` [bad-argument-type]
--> torch/export/pt2_archive/_package.py:609:27
|
609 | with PT2ArchiveWriter(f) as archive_writer:
| ^
|
ERROR Returned type `IO[bytes] | IOBase | _TemporaryFileWrapper[bytes] | Unknown` is not assignable to declared return type `IO[bytes] | PathLike[str] | str` [bad-return]
--> torch/export/pt2_archive/_package.py:623:12
|
623 | return f
| ^
|
ERROR No matching overload found for function `os.fspath` [no-matching-overload]
--> torch/export/pt2_archive/_package.py:995:60
|
995 | or (isinstance(f, (str, os.PathLike)) and os.fspath(f).endswith(".pt2"))
| ^^^
|
Possible overloads:
(path: str) -> str [closest match]
(path: bytes) -> bytes
(path: PathLike[AnyStr]) -> AnyStr
ERROR No matching overload found for function `os.fspath` [no-matching-overload]
--> torch/export/pt2_archive/_package.py:1005:22
|
1005 | f = os.fspath(f)
| ^^^
|
Possible overloads:
(path: str) -> str [closest match]
(path: bytes) -> bytes
(path: PathLike[AnyStr]) -> AnyStr
ERROR Argument `IO[bytes] | IOBase | Unknown` is not assignable to parameter `archive_path_or_buffer` with type `IO[bytes] | PathLike[str] | str` in function `PT2ArchiveReader.__init__` [bad-argument-type]
--> torch/export/pt2_archive/_package.py:1009:27
|
1009 | with PT2ArchiveReader(f) as archive_reader:
| ^
|
ERROR Argument `IOBase | Unknown` is not assignable to parameter `file` with type `str` in function `_load_aoti` [bad-argument-type]
--> torch/export/pt2_archive/_package.py:1073:17
|
1073 | f, model_name, run_single_threaded, num_runners, device_index
| ^
|
ERROR Argument `str` is not assignable to parameter `object` with type `LiteralString` in function `list.append` [bad-argument-type]
--> torch/export/unflatten.py:940:24
|
940 | ret.append(f"{i}: {node.op}[{target}]({', '.join(args_dump)})")
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
ERROR `+=` is not supported between `SupportsIndex` and `Literal[1]` [unsupported-operation]
--> torch/export/unflatten.py:1476:13
|
1476 | node_idx += 1
| ^^^^^^^^^^^^^
|
Argument `SupportsIndex` is not assignable to parameter `value` with type `int` in function `int.__radd__`
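A value typed `SupportsIndex` promises only `__index__`, not arithmetic, so `node_idx += 1` has no supported `+`. Converting to a plain `int` first (via `operator.index`) is one fix; a small sketch with an illustrative function name:

```python
import operator
from typing import SupportsIndex


def advance(node_idx: SupportsIndex) -> int:
    # `SupportsIndex` guarantees only `__index__`; `operator.index`
    # converts to a plain int, which does support `+= 1`.
    idx = operator.index(node_idx)
    idx += 1
    return idx


print(advance(41))  # 42
```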
ERROR Argument `SymInt | int` is not assignable to parameter `object` with type `int` in function `list.append` [bad-argument-type]
--> torch/masked/_ops.py:486:21
|
486 | dims.append(d % ndim)
| ^^^^^^^^
|
ERROR No matching overload found for function `torch._C.TensorBase.to` [no-matching-overload]
--> torch/masked/_ops.py:644:27
|
644 | values = values.to(output_dtype)
| ^^^^^^^^^^^^^^
|
Possible overloads:
(dtype: dtype, non_blocking: bool = False, copy: bool = False, *, memory_format: memory_format | None = None) -> Tensor [closest match]
(device: device | int | str | None = None, dtype: dtype | None = None, non_blocking: bool = False, copy: bool = False, *, memory_format: memory_format | None = None) -> Tensor
(other: Tensor, non_blocking: bool = False, copy: bool = False, *, memory_format: memory_format | None = None) -> Tensor
ERROR No matching overload found for function `torch._C.TensorBase.to` [no-matching-overload]
--> torch/masked/_ops.py:768:27
|
768 | values = values.to(output_dtype)
| ^^^^^^^^^^^^^^
|
Possible overloads:
(dtype: dtype, non_blocking: bool = False, copy: bool = False, *, memory_format: memory_format | None = None) -> Tensor [closest match]
(device: device | int | str | None = None, dtype: dtype | None = None, non_blocking: bool = False, copy: bool = False, *, memory_format: memory_format | None = None) -> Tensor
(other: Tensor, non_blocking: bool = False, copy: bool = False, *, memory_format: memory_format | None = None) -> Tensor
ERROR Class member `Combine.forward` overrides parent class `Function` in an inconsistent manner [bad-override]
--> torch/masked/_ops.py:1018:13
|
1018 | def forward(ctx, input, mask):
| ^^^^^^^
|
`Combine.forward` has type `(ctx: Unknown, input: Unknown, mask: Unknown) -> Unknown`, which is not consistent with `(*args: Any, **kwargs: Any) -> Any` in `Function.forward` (the type of read-write attributes cannot be changed)
ERROR Class member `Combine.backward` overrides parent class `Function` in an inconsistent manner [bad-override]
--> torch/masked/_ops.py:1028:13
|
1028 | def backward(ctx, grad_output):
| ^^^^^^^^
|
`Combine.backward` has type `(ctx: Unknown, grad_output: Unknown) -> Unknown`, which is not consistent with `(ctx: Any, *grad_outputs: Any) -> Any` in `Function.backward` (the type of read-write attributes cannot be changed)
ERROR No matching overload found for function `sum` [no-matching-overload]
--> torch/masked/_ops.py:1402:24
|
1402 | count = sum(
| ________________________^
1403 | | torch.ones(input.shape, dtype=torch.int64, device=input.device),
1404 | | dim,
1405 | | keepdim=keepdim,
1406 | | )
| |_____________^
|
Possible overloads:
(iterable: Iterable[Literal[-20, -19, -18, -17, -16, -15, -14, -13, -12, -11, -10, -9, -8, -7, -6, -5, -4, -3, -2, -1, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25] | bool], /, start: int = 0) -> int
(iterable: Iterable[_SupportsSumNoDefaultT], /) -> Literal[0] | _SupportsSumNoDefaultT
(iterable: Iterable[_AddableT1], /, start: _AddableT2) -> _AddableT1 | _AddableT2 [closest match]
ERROR No matching overload found for function `sum` [no-matching-overload]
--> torch/masked/_ops.py:1407:24
|
1407 | total = sum(input, dim, keepdim=keepdim, dtype=dtype)
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
Possible overloads:
(iterable: Iterable[Literal[-20, -19, -18, -17, -16, -15, -14, -13, -12, -11, -10, -9, -8, -7, -6, -5, -4, -3, -2, -1, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25] | bool], /, start: int = 0) -> int
(iterable: Iterable[_SupportsSumNoDefaultT], /) -> Literal[0] | _SupportsSumNoDefaultT
(iterable: Iterable[_AddableT1], /, start: _AddableT2) -> _AddableT1 | _AddableT2 [closest match]
ERROR No matching overload found for function `sum` [no-matching-overload]
--> torch/masked/_ops.py:1411:24
|
1411 | total = sum(input, dim, keepdim=keepdim, dtype=dtype, mask=inmask)
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
Possible overloads:
(iterable: Iterable[Literal[-20, -19, -18, -17, -16, -15, -14, -13, -12, -11, -10, -9, -8, -7, -6, -5, -4, -3, -2, -1, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25] | bool], /, start: int = 0) -> int
(iterable: Iterable[_SupportsSumNoDefaultT], /) -> Literal[0] | _SupportsSumNoDefaultT
(iterable: Iterable[_AddableT1], /, start: _AddableT2) -> _AddableT1 | _AddableT2 [closest match]
ERROR No matching overload found for function `sum` [no-matching-overload]
--> torch/masked/_ops.py:1621:24
|
1621 | count = sum(
| ________________________^
1622 | | torch.ones(input.shape, dtype=torch.int64, device=input.device),
1623 | | dim,
1624 | | keepdim=True,
1625 | | )
| |_____________^
|
Possible overloads:
(iterable: Iterable[Literal[-20, -19, -18, -17, -16, -15, -14, -13, -12, -11, -10, -9, -8, -7, -6, -5, -4, -3, -2, -1, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25] | bool], /, start: int = 0) -> int
(iterable: Iterable[_SupportsSumNoDefaultT], /) -> Literal[0] | _SupportsSumNoDefaultT
(iterable: Iterable[_AddableT1], /, start: _AddableT2) -> _AddableT1 | _AddableT2 [closest match]
ERROR No matching overload found for function `sum` [no-matching-overload]
--> torch/masked/_ops.py:1626:31
|
1626 | sample_total = sum(input, dim, keepdim=True, dtype=dtype)
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
Possible overloads:
(iterable: Iterable[Literal[-20, -19, -18, -17, -16, -15, -14, -13, -12, -11, -10, -9, -8, -7, -6, -5, -4, -3, -2, -1, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25] | bool], /, start: int = 0) -> int
(iterable: Iterable[_SupportsSumNoDefaultT], /) -> Literal[0] | _SupportsSumNoDefaultT
(iterable: Iterable[_AddableT1], /, start: _AddableT2) -> _AddableT1 | _AddableT2 [closest match]
ERROR No matching overload found for function `sum` [no-matching-overload]
--> torch/masked/_ops.py:1630:31
|
1630 | sample_total = sum(input, dim, keepdim=True, dtype=dtype, mask=inmask)
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
Possible overloads:
(iterable: Iterable[Literal[-20, -19, -18, -17, -16, -15, -14, -13, -12, -11, -10, -9, -8, -7, -6, -5, -4, -3, -2, -1, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25] | bool], /, start: int = 0) -> int
(iterable: Iterable[_SupportsSumNoDefaultT], /) -> Literal[0] | _SupportsSumNoDefaultT
(iterable: Iterable[_AddableT1], /, start: _AddableT2) -> _AddableT1 | _AddableT2 [closest match]
ERROR No matching overload found for function `sum` [no-matching-overload]
--> torch/masked/_ops.py:1637:24
|
1637 | total = sum(x * x.conj(), dim, keepdim=keepdim, dtype=compute_dtype)
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
Possible overloads:
(iterable: Iterable[Literal[-20, -19, -18, -17, -16, -15, -14, -13, -12, -11, -10, -9, -8, -7, -6, -5, -4, -3, -2, -1, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25] | bool], /, start: int = 0) -> int
(iterable: Iterable[_SupportsSumNoDefaultT], /) -> Literal[0] | _SupportsSumNoDefaultT
(iterable: Iterable[_AddableT1], /, start: _AddableT2) -> _AddableT1 | _AddableT2 [closest match]
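All the `sum` failures above list the overloads of `builtins.sum`, which takes an iterable plus an optional start value; the tensor-style `dim`/`keepdim`/`dtype`/`mask` arguments fail at runtime just as they fail overload resolution. (Qualifying the intended target, e.g. `torch.sum`, which does take `dim` and `keepdim`, avoids the ambiguity.) A runtime demonstration:

```python
# builtins.sum signature: sum(iterable, /, start=0).  Tensor-style keyword
# arguments are rejected at runtime exactly as the overloads describe.
raised = False
try:
    sum([1, 2, 3], 0, keepdim=True)  # type: ignore[call-overload]
except TypeError:
    raised = True

print(raised)  # True
```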
ERROR Class member `_MaskedContiguous.forward` overrides parent class `Function` in an inconsistent manner [bad-override]
--> torch/masked/maskedtensor/_ops_refs.py:50:9
|
50 | def forward(ctx, input):
| ^^^^^^^
|
`_MaskedContiguous.forward` has type `(ctx: Unknown, input: Unknown) -> Unknown`, which is not consistent with `(*args: Any, **kwargs: Any) -> Any` in `Function.forward` (the type of read-write attributes cannot be changed)
ERROR Class member `_MaskedContiguous.backward` overrides parent class `Function` in an inconsistent manner [bad-override]
--> torch/masked/maskedtensor/_ops_refs.py:63:9
|
63 | def backward(ctx, grad_output):
| ^^^^^^^^
|
`_MaskedContiguous.backward` has type `(ctx: Unknown, grad_output: Unknown) -> Unknown`, which is not consistent with `(ctx: Any, *grad_outputs: Any) -> Any` in `Function.backward` (the type of read-write attributes cannot be changed)
ERROR Class member `_MaskedToDense.forward` overrides parent class `Function` in an inconsistent manner [bad-override]
--> torch/masked/maskedtensor/_ops_refs.py:69:9
|
69 | def forward(ctx, input):
| ^^^^^^^
|
`_MaskedToDense.forward` has type `(ctx: Unknown, input: Unknown) -> Unknown`, which is not consistent with `(*args: Any, **kwargs: Any) -> Any` in `Function.forward` (the type of read-write attributes cannot be changed)
ERROR Class member `_MaskedToDense.backward` overrides parent class `Function` in an inconsistent manner [bad-override]
--> torch/masked/maskedtensor/_ops_refs.py:83:9
|
83 | def backward(ctx, grad_output):
| ^^^^^^^^
|
`_MaskedToDense.backward` has type `(ctx: Unknown, grad_output: Unknown) -> Unknown`, which is not consistent with `(ctx: Any, *grad_outputs: Any) -> Any` in `Function.backward` (the type of read-write attributes cannot be changed)
ERROR Class member `_MaskedToSparse.forward` overrides parent class `Function` in an inconsistent manner [bad-override]
--> torch/masked/maskedtensor/_ops_refs.py:97:9
|
97 | def forward(ctx, input):
| ^^^^^^^
|
`_MaskedToSparse.forward` has type `(ctx: Unknown, input: Unknown) -> Unknown`, which is not consistent with `(*args: Any, **kwargs: Any) -> Any` in `Function.forward` (the type of read-write attributes cannot be changed)
ERROR Class member `_MaskedToSparse.backward` overrides parent class `Function` in an inconsistent manner [bad-override]
--> torch/masked/maskedtensor/_ops_refs.py:113:9
|
113 | def backward(ctx, grad_output):
| ^^^^^^^^
|
`_MaskedToSparse.backward` has type `(ctx: Unknown, grad_output: Unknown) -> Unknown`, which is not consistent with `(ctx: Any, *grad_outputs: Any) -> Any` in `Function.backward` (the type of read-write attributes cannot be changed)
ERROR Class member `_MaskedToSparseCsr.forward` overrides parent class `Function` in an inconsistent manner [bad-override]
--> torch/masked/maskedtensor/_ops_refs.py:119:9
|
119 | def forward(ctx, input):
| ^^^^^^^
|
`_MaskedToSparseCsr.forward` has type `(ctx: Unknown, input: Unknown) -> Unknown`, which is not consistent with `(*args: Any, **kwargs: Any) -> Any` in `Function.forward` (the type of read-write attributes cannot be changed)
ERROR Class member `_MaskedToSparseCsr.backward` overrides parent class `Function` in an inconsistent manner [bad-override]
--> torch/masked/maskedtensor/_ops_refs.py:139:9
|
139 | def backward(ctx, grad_output):
| ^^^^^^^^
|
`_MaskedToSparseCsr.backward` has type `(ctx: Unknown, grad_output: Unknown) -> Unknown`, which is not consistent with `(ctx: Any, *grad_outputs: Any) -> Any` in `Function.backward` (the type of read-write attributes cannot be changed)
ERROR Class member `_MaskedWhere.forward` overrides parent class `Function` in an inconsistent manner [bad-override]
--> torch/masked/maskedtensor/_ops_refs.py:145:9
|
145 | def forward(ctx, cond, self, other):
| ^^^^^^^
|
`_MaskedWhere.forward` has type `(ctx: Unknown, cond: Unknown, self: Unknown, other: Unknown) -> Unknown`, which is not consistent with `(*args: Any, **kwargs: Any) -> Any` in `Function.forward` (the type of read-write attributes cannot be changed)
ERROR Class member `_MaskedWhere.backward` overrides parent class `Function` in an inconsistent manner [bad-override]
--> torch/masked/maskedtensor/_ops_refs.py:151:9
|
151 | def backward(ctx, grad_output):
| ^^^^^^^^
|
`_MaskedWhere.backward` has type `(ctx: Unknown, grad_output: Unknown) -> Unknown`, which is not consistent with `(ctx: Any, *grad_outputs: Any) -> Any` in `Function.backward` (the type of read-write attributes cannot be changed)
ERROR Unpacked keyword argument `bool | device | dtype | layout | str | Unknown` is not assignable to parameter `strides` with type `Sequence[SymInt | int] | None` in function `torch._C.TensorBase._make_wrapper_subclass` [bad-argument-type]
--> torch/masked/maskedtensor/core.py:177:70
|
177 | return torch.Tensor._make_wrapper_subclass(cls, data.size(), **kwargs)
| ^^^^^^^^
|
ERROR Unpacked keyword argument `bool | device | dtype | layout | str | Unknown` is not assignable to parameter `storage_offset` with type `SymInt | int | None` in function `torch._C.TensorBase._make_wrapper_subclass` [bad-argument-type]
--> torch/masked/maskedtensor/core.py:177:70
|
177 | return torch.Tensor._make_wrapper_subclass(cls, data.size(), **kwargs)
| ^^^^^^^^
|
ERROR Unpacked keyword argument `bool | device | dtype | layout | str | Unknown` is not assignable to parameter `memory_format` with type `memory_format | None` in function `torch._C.TensorBase._make_wrapper_subclass` [bad-argument-type]
--> torch/masked/maskedtensor/core.py:177:70
|
177 | return torch.Tensor._make_wrapper_subclass(cls, data.size(), **kwargs)
| ^^^^^^^^
|
ERROR Unpacked keyword argument `bool | device | dtype | layout | str | Unknown` is not assignable to parameter `dtype` with type `dtype | None` in function `torch._C.TensorBase._make_wrapper_subclass` [bad-argument-type]
--> torch/masked/maskedtensor/core.py:177:70
|
177 | return torch.Tensor._make_wrapper_subclass(cls, data.size(), **kwargs)
| ^^^^^^^^
|
ERROR Unpacked keyword argument `bool | device | dtype | layout | str | Unknown` is not assignable to parameter `layout` with type `layout` in function `torch._C.TensorBase._make_wrapper_subclass` [bad-argument-type]
--> torch/masked/maskedtensor/core.py:177:70
|
177 | return torch.Tensor._make_wrapper_subclass(cls, data.size(), **kwargs)
| ^^^^^^^^
|
ERROR Unpacked keyword argument `bool | device | dtype | layout | str | Unknown` is not assignable to parameter `device` with type `device | None` in function `torch._C.TensorBase._make_wrapper_subclass` [bad-argument-type]
--> torch/masked/maskedtensor/core.py:177:70
|
177 | return torch.Tensor._make_wrapper_subclass(cls, data.size(), **kwargs)
| ^^^^^^^^
|
ERROR Unpacked keyword argument `bool | device | dtype | layout | str | Unknown` is not assignable to parameter `pin_memory` with type `bool` in function `torch._C.TensorBase._make_wrapper_subclass` [bad-argument-type]
--> torch/masked/maskedtensor/core.py:177:70
|
177 | return torch.Tensor._make_wrapper_subclass(cls, data.size(), **kwargs)
| ^^^^^^^^
|
ERROR Unpacked keyword argument `bool | device | dtype | layout | str | Unknown` is not assignable to parameter `requires_grad` with type `bool` in function `torch._C.TensorBase._make_wrapper_subclass` [bad-argument-type]
--> torch/masked/maskedtensor/core.py:177:70
|
177 | return torch.Tensor._make_wrapper_subclass(cls, data.size(), **kwargs)
| ^^^^^^^^
|
ERROR Unpacked keyword argument `bool | device | dtype | layout | str | Unknown` is not assignable to parameter `dispatch_sizes_strides_policy` with type `str | None` in function `torch._C.TensorBase._make_wrapper_subclass` [bad-argument-type]
--> torch/masked/maskedtensor/core.py:177:70
|
177 | return torch.Tensor._make_wrapper_subclass(cls, data.size(), **kwargs)
| ^^^^^^^^
|
ERROR Unpacked keyword argument `bool | device | dtype | layout | str | Unknown` is not assignable to parameter `dispatch_device` with type `bool` in function `torch._C.TensorBase._make_wrapper_subclass` [bad-argument-type]
--> torch/masked/maskedtensor/core.py:177:70
|
177 | return torch.Tensor._make_wrapper_subclass(cls, data.size(), **kwargs)
| ^^^^^^^^
|
ERROR Unpacked keyword argument `bool | device | dtype | layout | str | Unknown` is not assignable to parameter `dispatch_layout` with type `bool` in function `torch._C.TensorBase._make_wrapper_subclass` [bad-argument-type]
--> torch/masked/maskedtensor/core.py:177:70
|
177 | return torch.Tensor._make_wrapper_subclass(cls, data.size(), **kwargs)
| ^^^^^^^^
|
ERROR Unpacked keyword argument `bool | device | dtype | layout | str | Unknown` is not assignable to parameter `_extra_dispatch_keys` with type `DispatchKeySet | None` in function `torch._C.TensorBase._make_wrapper_subclass` [bad-argument-type]
--> torch/masked/maskedtensor/core.py:177:70
|
177 | return torch.Tensor._make_wrapper_subclass(cls, data.size(), **kwargs)
| ^^^^^^^^
|
ERROR Unpacked keyword argument `bool | device | dtype | layout | str | Unknown` is not assignable to parameter `storage_size` with type `SymInt | int | None` in function `torch._C.TensorBase._make_wrapper_subclass` [bad-argument-type]
--> torch/masked/maskedtensor/core.py:177:70
|
177 | return torch.Tensor._make_wrapper_subclass(cls, data.size(), **kwargs)
| ^^^^^^^^
|
ERROR Class member `Constructor.forward` overrides parent class `Function` in an inconsistent manner [bad-override]
--> torch/masked/maskedtensor/core.py:246:17
|
246 | def forward(ctx, data, mask):
| ^^^^^^^
|
`Constructor.forward` has type `(ctx: Unknown, data: Unknown, mask: Unknown) -> Unknown`, which is not consistent with `(*args: Any, **kwargs: Any) -> Any` in `Function.forward` (the type of read-write attributes cannot be changed)
ERROR Class member `Constructor.backward` overrides parent class `Function` in an inconsistent manner [bad-override]
--> torch/masked/maskedtensor/core.py:250:17
|
250 | def backward(ctx, grad_output):
| ^^^^^^^^
|
`Constructor.backward` has type `(ctx: Unknown, grad_output: Unknown) -> Unknown`, which is not consistent with `(ctx: Any, *grad_outputs: Any) -> Any` in `Function.backward` (the type of read-write attributes cannot be changed)
ERROR Class member `GetData.forward` overrides parent class `Function` in an inconsistent manner [bad-override]
--> torch/masked/maskedtensor/core.py:336:17
|
336 | def forward(ctx, self):
| ^^^^^^^
|
`GetData.forward` has type `(ctx: Unknown, self: Unknown) -> Unknown`, which is not consistent with `(*args: Any, **kwargs: Any) -> Any` in `Function.forward` (the type of read-write attributes cannot be changed)
ERROR Class member `GetData.backward` overrides parent class `Function` in an inconsistent manner [bad-override]
--> torch/masked/maskedtensor/core.py:340:17
|
340 | def backward(ctx, grad_output):
| ^^^^^^^^
|
`GetData.backward` has type `(ctx: Unknown, grad_output: Unknown) -> Unknown`, which is not consistent with `(ctx: Any, *grad_outputs: Any) -> Any` in `Function.backward` (the type of read-write attributes cannot be changed)
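The four `[bad-override]` errors follow the standard override-compatibility rule: a subclass method must accept at least everything the base signature accepts, because code holding a base-typed reference may call it with any arguments the base allows. A minimal sketch with hypothetical stand-ins (not the real `torch.autograd.Function` API):

```python
# Sketch (assumption): stand-ins mirroring the base signatures the checker
# reports for Function.forward and Function.backward.
from typing import Any


class Function:
    @staticmethod
    def forward(*args: Any, **kwargs: Any) -> Any: ...

    @staticmethod
    def backward(ctx: Any, *grad_outputs: Any) -> Any: ...


class Constructor(Function):
    # Flagged: (ctx, data, mask) is narrower than (*args, **kwargs) -- a
    # caller going through the Function type could pass arguments this
    # override does not accept.
    @staticmethod
    def forward(ctx: Any, data: Any, mask: Any) -> Any:
        return data, mask

    # Compatible: (ctx, *grad_outputs) matches the base signature's shape,
    # so an override written this way would not be flagged.
    @staticmethod
    def backward(ctx: Any, *grad_outputs: Any) -> Any:
        return grad_outputs


print(Constructor.forward(None, 1, 2))
```

In practice `torch.autograd.Function` subclasses are invoked through `apply()` rather than through a base-typed `forward` call, which is why this pattern is idiomatic in PyTorch despite being technically variance-unsound; silencing or widening the override signature are both plausible remedies.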