| id | text | source | created | added | metadata |
|---|---|---|---|---|---|
186903439 | S3 upload should use different key for each file
Addresses #5
Coverage increased (+0.06%) to 80.17% when pulling fe405020fcc9bed2aa9dff412f401eaeb466f9f6 on Fak3:g5_fix_s3_key into 70adf10607a2a08c3fd1617b7b02a6b902246574 on frictionlessdata:master.
| gharchive/pull-request | 2016-11-02T19:54:46 | 2025-04-01T06:44:14.686193 | {
"authors": [
"Fak3",
"coveralls"
],
"repo": "frictionlessdata/dpr-api",
"url": "https://github.com/frictionlessdata/dpr-api/pull/30",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
1145284860 | Field name/header comparison should not be case sensitive
Overview
According to https://specs.frictionlessdata.io/table-schema/#field-descriptors, "name SHOULD NOT be considered case sensitive in determining uniqueness." However, the library's comparison does not ignore case. See https://github.com/frictionlessdata/tableschema-js/blob/37f02247e0ce35f382ad815121bd64877e9718af/src/table.js#L152
Could this be fixed, or an option provided to ignore case, please? A friendlier error message identifying which columns are the problem would also be more useful to users than "The column header names do not match the field names in the schema".
Please preserve this line to notify @roll (lead of this repository)
@paulboony
Thanks for reporting. We will investigate
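A fix along the requested lines might look like the following sketch (Python for illustration only; the actual library is JavaScript, and `check_headers` is a hypothetical name, not tableschema-js API):

```python
def check_headers(headers, field_names):
    """Compare table headers to schema field names case-insensitively,
    and name the offending columns instead of a generic message.
    (Length mismatches are not handled in this sketch.)"""
    mismatches = [
        (h, f) for h, f in zip(headers, field_names)
        if h.lower() != f.lower()
    ]
    if mismatches:
        detail = ", ".join(f"header {h!r} vs field {f!r}" for h, f in mismatches)
        raise ValueError(f"Header/field mismatch: {detail}")

check_headers(["ID", "Name"], ["id", "name"])  # passes: case is ignored
```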
| gharchive/issue | 2022-02-21T03:46:11 | 2025-04-01T06:44:14.691076 | {
"authors": [
"paulboony",
"roll"
],
"repo": "frictionlessdata/tableschema-js",
"url": "https://github.com/frictionlessdata/tableschema-js/issues/185",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
265808617 | Added an option to ignore falsy headers
fixes #181
@mcarans
Please take a look.
WDYT, is the ignore_falsy_headers argument name clear enough?
I think ignore_blank_headers would be clearer
Tested and works ok (but recommend different arg name as in previous comment)
@mcarans
Thanks - great idea. Also, following the renaming, I've narrowed a blank header to mean None or an empty string.
Tested, works. Thanks a lot!
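The behaviour settled on above - a blank header meaning None or an empty string - can be sketched like this (hypothetical helper, not tabulator's actual implementation):

```python
def drop_blank_header_columns(headers, rows):
    """Drop columns whose header is blank, i.e. None or an empty string."""
    keep = [i for i, h in enumerate(headers) if h is not None and h != ""]
    new_headers = [headers[i] for i in keep]
    new_rows = [[row[i] for i in keep] for row in rows]
    return new_headers, new_rows

headers, rows = drop_blank_header_columns(
    ["id", "", "name", None], [[1, "x", "a", "y"], [2, "z", "b", "w"]])
print(headers)  # ['id', 'name']
print(rows)     # [[1, 'a'], [2, 'b']]
```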
| gharchive/pull-request | 2017-10-16T15:20:19 | 2025-04-01T06:44:14.693516 | {
"authors": [
"mcarans",
"roll"
],
"repo": "frictionlessdata/tabulator-py",
"url": "https://github.com/frictionlessdata/tabulator-py/pull/208",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
592551003 | The site is down
Hi @lauragift21, could you please take a look - https://frictionlessdata.io/
@lauragift21
I've fixed it in this commit and in the settings:
https://github.com/frictionlessdata/website-v2/commit/00571380edcf1d422b65d263a2cbe252ebacb516
https://github.com/frictionlessdata/website-v2/settings
Thanks @roll I had the same issue yesterday.
| gharchive/issue | 2020-04-02T11:14:20 | 2025-04-01T06:44:14.696083 | {
"authors": [
"lauragift21",
"roll"
],
"repo": "frictionlessdata/website-v2",
"url": "https://github.com/frictionlessdata/website-v2/issues/62",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1369062711 | First example: work in progress
Important
friendly-pandas now needs to be run with friendly_traceback version 0.5.57.
Starting example
The content of the file example.py is as follows:
import pandas as pd
df = pd.DataFrame([[10, 20, 30], [40, 50, 60]],
index=list("ab"),
columns=list("xyz"))
Here's a sample session PRIOR TO A FEW CHANGES NOTED BELOW:
> python -m friendly
friendly-traceback: 0.5.54
friendly: 0.5.39
Python: 3.10.6
Type 'Friendly' for help on special functions/methods.
[1]: from friendly_pandas import register
[2]: from example import df
[3]: df.loc["y"]
Traceback (most recent call last):
File "LOCAL:\pandas\core\indexes\base.py", line 3629, in get_loc
return self._engine.get_loc(casted_key)
File "pandas\_libs\index.pyx", line 136, in pandas._libs.index.IndexEngine.get_loc
File "pandas\_libs\index.pyx", line 163, in pandas._libs.index.IndexEngine.get_loc
File "pandas\_libs\hashtable_class_helper.pxi", line 5198, in pandas._libs.hashtable.PyObjectHashTable.get_item
File "pandas\_libs\hashtable_class_helper.pxi", line 5206, in pandas._libs.hashtable.PyObjectHashTable.get_item
KeyError: 'y'
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
Code block [3], line 1
df.loc["y"]
KeyError: 'y'
To retrieve a column, just use square brackets: df["y"]
[4]: df.loc["w"]
Traceback (most recent call last):
File "LOCAL:\pandas\core\indexes\base.py", line 3629, in get_loc
return self._engine.get_loc(casted_key)
File "pandas\_libs\index.pyx", line 136, in pandas._libs.index.IndexEngine.get_loc
File "pandas\_libs\index.pyx", line 163, in pandas._libs.index.IndexEngine.get_loc
File "pandas\_libs\hashtable_class_helper.pxi", line 5198, in pandas._libs.hashtable.PyObjectHashTable.get_item
File "pandas\_libs\hashtable_class_helper.pxi", line 5206, in pandas._libs.hashtable.PyObjectHashTable.get_item
KeyError: 'w'
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
Code block [4], line 1
df.loc["w"]
KeyError: 'w'
[5]: why()
You tried to retrieve an unknown row. The valid values are: a, b.
[6]: where()
Exception raised on line 1 of code block [4].
> 1| df.loc["w"]
df: x y z
a 10 20 30
b 40 50 60
Note: since I used Friendly, the highlighting of the problematic code is done using a red background, which is not reproduced in the example above.
Instead of trying only an expression like df.loc[...], one can also try an assignment; something like
column = df.loc["y"]
It will still work.
Chained exception
Note that the traceback includes not only code from the user, but also a notice that another exception was raised, this one entirely inside pandas' code. This information is not useful to someone just learning Python (or to anyone else really, except perhaps pandas developers?) and it would make sense not to include it. This was done in a later version, as noted below.
Note: the existing code (version 0.0.2) prevents the normal handling of KeyError as it completely replaces the existing handling from friendly_traceback. This needs to be modified.
Untested addition: if the name of a row has 2 or more characters and is misspelled, some information about possible correct values should be provided.
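The untested addition above could be sketched with the standard library's difflib (the helper name `suggest_rows` is hypothetical, not friendly_pandas code):

```python
import difflib

def suggest_rows(name, valid_rows):
    """Suggest close matches for a misspelled row label.
    Only attempted for labels of 2+ characters, as proposed above."""
    if len(name) < 2:
        return None
    matches = difflib.get_close_matches(name, valid_rows, n=3, cutoff=0.6)
    return matches or None

print(suggest_rows("alpha", ["alpah", "beta", "gamma"]))  # ['alpah']
print(suggest_rows("w", ["a", "b"]))                      # None
```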
Broken example: using parser.add() effectively appends and default values from friendly_traceback prevent friendly_pandas from registering. I need to define parser.insert instead.
I have added four unit tests to demonstrate three cases handled by friendly_pandas and a fourth one by friendly_traceback. These are meant to be run using pytest.
With friendly_pandas version 0.0.6 and friendly 0.5.56, tracebacks from chained exceptions are suppressed by default.
Variables shown at the end of where() that require more than one line are not indented, which does not look good. This would need to be fixed in friendly_traceback itself, which might not be easy to do, but would be worth it.
Latest version:
| gharchive/issue | 2022-09-11T21:01:57 | 2025-04-01T06:44:14.731249 | {
"authors": [
"aroberge"
],
"repo": "friendly-traceback/friendly-pandas",
"url": "https://github.com/friendly-traceback/friendly-pandas/issues/1",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1401874907 | Fixes for SPS (non-instanced) rendering
Hey! Thank you for making this, I was able to downgrade the project to unity 2019 and get it working within 2019.4.31f1 (and within VRC itself).
But I found a bug (with the help of a friend who tested for me): it wouldn't render in VR, only on desktop. VRChat currently uses SPS (non-instanced) rendering for its VR view, and that mode had incorrect depth sampling. For some reason it requires using SAMPLE_DEPTH_TEXTURE_PROJ instead of just SAMPLE_DEPTH_TEXTURE, so I adjusted the code to do that and it fixes the issue.
Here's how the depth issues looked
https://user-images.githubusercontent.com/3798928/194698581-1722c5ad-9b1c-40f7-a3a4-20c2c0d1a52e.mp4
You can see how the depth values were only visible at the edges, in those weird triangle bits. Not sure what the Unity macros are doing there, but using the _PROJ variant made it work in both SPS and SPS-I!
Also, just wanted to add an extra note here: you can make Unity enable the depth pass without using AO (useful in VRC, where you do not have access to adjusting main camera properties). All you need is a directional light with shadows enabled hitting some random empty layer, and Unity will enable the depth pass for you.
Oh whoops! Nice catch - I actually had a version for a VRChat project that had this fix as well, but it looks like I forgot to include it in the version I was pushing out publicly on GitHub. Thanks for the catch! I'll go ahead and merge it, nicely done!
| gharchive/pull-request | 2022-10-08T08:44:30 | 2025-04-01T06:44:14.892650 | {
"authors": [
"frostbone25",
"orels1"
],
"repo": "frostbone25/Unity-Baked-Volumetrics",
"url": "https://github.com/frostbone25/Unity-Baked-Volumetrics/pull/1",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
158158859 | How can I range all of the beacons
I am using a library for Android where I don't need to set a UUID for ranging, and it ranges all devices.
Is that possible with your library too?
That's not possible with iOS as it's a limitation of the underlying iOS SDK. Please see #31 for a thorough explanation.
| gharchive/issue | 2016-06-02T14:44:10 | 2025-04-01T06:44:14.893822 | {
"authors": [
"RezaRahmati",
"frostney"
],
"repo": "frostney/react-native-ibeacon",
"url": "https://github.com/frostney/react-native-ibeacon/issues/33",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
1536363045 | Recycling + advanced PWI model for merge with master
@fsciortino @odstrcilt In view of the many updates to the master version since I completed my model (which would create a lot of conflicts), I have started manually re-implementing my additions - the recycling model and the advanced plasma-wall interaction model - on top of the current master, so that a clean and painless merge is possible once I have completed it. In doing so, following your suggestions, I will leave out the additions I used for the transport model in my investigation (interpolations of transport profiles and the ELM model), possibly saving them for a later pull request, and I will introduce the PWI model as a new class.
I would leave the previous branch (recycling_model) alive for now, as it is the version which, albeit not ready for public release, has worked so far and includes all the material I used for my personal investigation.
Hi @antonellozito,
do you think that your model can describe the saturation of walls by non-recycling gas injection? A typical example is nitrogen seeding. First, nitrogen behaves as a non-recycling impurity, but once the wall saturates, it behaves like a noble gas.
@odstrcilt Not only can it: I developed it exactly for this aim. He in W behaves in the same way. Since the plasma-wall interaction model as I included it relies on actual Monte Carlo-calculated surface coefficients, the only free parameter describing this effect (which must somehow be inferred from experiments) is the saturation content of an impurity in the wall. N can be modelled in the same way (although some coefficients regarding the interaction of N with wall materials still need to be produced, which can be done in the future).
Yes, I have found the right branch in your diagram. Is this R_N coefficient an input parameter, or is it somehow calculated? Could I just assume that it is zero for nitrogen? Or would it be possible to do the calculation for nitrogen (and perhaps also neon) for both W and C walls? How do you include the specific shape and properties of the AUG wall?
Hi @antonellozito , I spoke to @odstrcilt about the strategy moving forward and we agree that the way you currently implemented things is OK: we can keep a single fortran call in run_aurora. Can you please complete the work by adding pwi.py? Then, we should be ready to test everything :) We should also all collectively try to simplify the code, because at the moment core.py has become a little too verbose/complicated.
Reminder: please make sure that everything is backwards-compatible!
Thanks for all your work!
@fsciortino @odstrcilt The updated version of Aurora containing the extended recycling model and the advanced plasma-wall interaction model is complete and working. As you suggested, compared to the old pull request, I have left out some additions (like the transport model and the ELM model), but I have kept the extended recycling model and the advanced plasma-wall interaction model. Now, as you suggested, when one wants to use the advanced PWI model, the new class pwi.py must be called rather than core.py. So the two things are kept separate, at least on the Python side. The call to the Fortran routines, however, is still shared between the simple and the new model.
Because of my ignorance on how the inheritance of a class works, there might be some redundant stuff in pwi.py... hope not too many. However, it works.
The final commit also contains the updated examples (including the one for the advanced PWI model) and the (hugely) updated documentation. I have also successfully merged the current master into it.
Also, you know it, but it is worth mentioning one more time: there is no backwards incompatibility, except for one: the out object is no longer a tuple but a dictionary. All the example scripts contained in the latest commit are adapted to that, but users need to take this into account in their personal scripts after updating. As we widely agreed, however, this is a necessary sacrifice: having a mutable object as the main output of the simulation makes possible future enhancements (like this one) much easier.
I guess it is ready for your review. Because of the major update, I took the liberty of flagging this tentative version of the code as 3.0.0 (but of course we can revert it or use different numbers, if you prefer).
That looks great @antonellozito ! One quick thing: I see two new directories inside the docs directory. Are they needed for some reason?
Following the latest suggestions by @fsciortino, I have heavily shortened the new class pwi.py, avoiding unnecessary duplication of methods from core.py.
Now, at least from the point of view of the Python interface, the original model and the new extended model including a realistic description of the plasma-wall interactions for determining the recycling are really fully separated.
From the point of view of the Fortran subroutines, there are two differences in this branch wrt the current master:
The (optional) inclusion of new particle reservoirs, i.e. a divertor wall reservoir and a pump reservoir (see the pictures in the docs). This has nothing to do with the full plasma-wall interaction model! They are separate things. All the additions coming from this extension wrt the current master are basically in the edge_model subroutine in main.f90, which is now a bit longer than before. I stress that all my additions in this regard are optional and regulated by if statements.
The activation of the full plasma-wall interaction model. In this case the changes are found in impden.f90. When the PWI model is activated, the recycling fluxes from the main wall towards the plasma, used to assume the 0th charge state profile in the transport modelling routines, are calculated by a separate subroutine, advanced_PWI_model, which is called by both the algorithms impden0 and impden1.
Apart from other suggestions, everything is ready for a review.
That looks great @antonellozito ! One quick thing: I see two new directories inside the docs directory. Are they needed for some reason?
@fsciortino Indeed they are not. They come from my computer, I didn't notice them. I'll fix it.
@antonellozito I will review it soon. I was hoping to do it this week, but this goal is less and less likely. Hopefully next week.
@odstrcilt @antonellozito it would be a pity to leave this PR hanging like this... @odstrcilt would you be willing to do the review, and @antonellozito to address any comments on a short time scale?
@fsciortino @odstrcilt I have not worked on this branch since March; when I run my personal simulations I still use my personal version (because we removed the ELM model from this branch). However, I remember that the work on this was concluded, i.e. the additional reservoirs are included in the exact same way as in my personal version, but more properly :) and with the important addition of a wholly renewed manual ;) In other words, yes, it only needs to be reviewed I guess.
I have not found the time and courage to do the review. It is a considerable change to the original code...
@odstrcilt @fsciortino This will be a long comment, but I'll try to summarize here once again all my changes to the code, in order to make the review process as painless as possible. You might print this and keep it as a quick reference to have in front of you during the review ;) I am sure this would be more practical for you than scrolling through all the past comments. Of course, I am available for any clarification about both physics and technical/coding aspects.
Note that this version of the code is, locally, fully functional, tests aside. For example, the various test_xxx.py scripts in /examples do work, provided a local "fix" of the omfit_classes as I suggested here.
All my changes, both in terms of physics and in terms of new/updated input/output of the code, are also already extensively documented in the docs of this branch. Please read the new docs before starting the review. Just to summarize it as much as possible:
The radial particle flux at the outermost modelled flux surface interacts with the main wall reservoir. This already happened in the current version of Aurora, but before, absorbed particles could only be permanently retained or released with a given time constant. Now the interaction is much more complex/realistic. Here, particles can either be reflected as fast ions, be promptly recycled as thermal ions, or be implanted in the main wall reservoir (which has a limited capacity, emulating the wall saturation effect, and which has to be set by the user). Particles implanted in the main wall can be released through physical sputtering. This sputtering can be caused by the radial flux of the simulated impurity itself, and by the flux(es) of the background species. While the first one is self-consistently simulated, the second one(s) must in some way be set by the user. Particles released by the wall (whether reflected, thermally released or sputtered) constitute a source for the plasma. The radial profile of this source (i.e. how far particles penetrate the plasma before being ionized) is calculated according to the energy at which they are released. Reflection coefficients and reflection energies are available for a large number of impurities (D, He, N, Ne, Ar, ...) and wall materials (Be, C, W), extracted from a citable database. Sputtering yields and sputtering energies for impurities implanted within the wall are currently only available for the He-W combination, but we plan to calculate them for other relevant combinations in the future.
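As a toy illustration only (the real calculation lives in the Fortran routines; every name and the exact partitioning here are my own invention, not Aurora's actual model), the main-wall balance just described can be sketched as:

```python
def wall_step(flux_in, wall_content, wall_capacity,
              r_refl, r_recyc, y_sput, dt):
    """One explicit time step of a simplified main-wall balance:
    the incoming ion flux is split into reflection, prompt recycling
    and implantation; implantation shrinks as the wall saturates,
    and implanted particles can be sputtered back out."""
    saturation = min(wall_content / wall_capacity, 1.0)
    reflected = r_refl * flux_in
    recycled = r_recyc * (1.0 - r_refl) * flux_in
    implant_try = flux_in - reflected - recycled
    implanted = implant_try * (1.0 - saturation)   # saturated wall rejects excess
    sputtered = y_sput * flux_in * saturation      # only implanted atoms sputter
    wall_content += (implanted - sputtered) * dt
    released = reflected + recycled + (implant_try - implanted) + sputtered
    return wall_content, released

wall, released = wall_step(10.0, 0.0, 100.0, 0.5, 0.2, 0.01, 1.0)
assert abs(released + wall / 1.0 - 10.0) < 1e-12   # particles are conserved
```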
Regarding the parallel particle flux (resulting from the parallel loss term in the SOL): in the current version of Aurora, this flux goes directly into the divertor reservoir, from which it can flow back towards the plasma or be pumped. Now, it interacts with a divertor wall reservoir. This interaction is by all means identical to the interaction of particles with the main wall. The main difference is that particles released from this reservoir go towards the divertor reservoir, and only from that moment can they either flow back to the plasma or be pumped.
There are also further additional options regarding the recycling/pumping model:
It is possible to define a screening coefficient, determining how much of the "backflow" from the divertor reservoir really goes back to the plasma, becoming a source for it, as opposed to going back immediately to the divertor wall. This is user-defined.
It is possible to define a recombination coefficient, determining how much of the parallel particle flux really goes towards the divertor wall, undergoing the plasma-wall interaction, as opposed to going directly towards the divertor reservoir. This is also user-defined.
It is optionally possible to add a second neutral reservoir, the pump reservoir, from which pumping is performed (this was particularly useful for simulating, e.g., the AUG vessel geometry).
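A hedged sketch of how the two user-defined coefficients partition the fluxes (all names are hypothetical; I am assuming `f_recomb` and `f_screen` denote the recombined and screened fractions respectively, which the text leaves implicit):

```python
def split_parallel_flux(par_flux, f_recomb):
    """A fraction f_recomb of the parallel SOL flux is assumed to
    recombine and flow directly into the divertor neutral reservoir;
    the remainder strikes the divertor wall (and undergoes PWI)."""
    return f_recomb * par_flux, (1.0 - f_recomb) * par_flux

def split_backflow(backflow, f_screen):
    """A fraction f_screen of the divertor backflow is assumed to be
    screened (returning immediately to the divertor wall); the
    remainder re-enters the plasma as a source."""
    return f_screen * backflow, (1.0 - f_screen) * backflow

to_reservoir, to_div_wall = split_parallel_flux(10.0, 0.3)
screened, to_plasma = split_backflow(4.0, 0.25)
assert to_reservoir + to_div_wall == 10.0
assert screened + to_plasma == 4.0
```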
This was an extreme summary of the changes included in this PR. I attach here again my usual sketch, which the community has learnt quite well in the last year :) You will find many more sketches, as well as practical examples of all these functionalities, in the docs.
Of course (and I really cannot stress it enough), all these changes are purely optional additions to the code, and are not activated by default. If a user doesn't want to use this advanced plasma-wall interaction model and this extended recycling/pumping model (or doesn't even know of their existence), the code behavior is completely unchanged wrt the current Aurora master version. All the thousands of additional lines of code are completely invisible if one doesn't explicitly call them. You can test this against your own Aurora simulations, if you want. Also, although the new/modified fluxes and interactions are coded deep in the main Fortran algorithms (which are always called, also in the case of "basic" simulations), the presence of these new lines in the Fortran files does not at all affect the speed at which simulations without the new models are executed.
There is one single backwards incompatibility, related not to the physics model but to the output of the code. Namely, the main output, i.e. the output of asim.run_aurora, is no longer a tuple but a dictionary. We concluded that this was a necessary change, because a tuple was much less flexible in terms of future developments (and, as in this case, the addition of new output fields). So, for example, the plasma density after a simulation is now accessed as out['nz'] instead of out[0]. All the test/example scripts in this PR are already amended accordingly, and the same needs to be done in user scripts if/when users update to the new Aurora version.
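The single incompatibility described above amounts to a one-line change in user scripts; a sketch with stand-in functions (only the 'nz' key is taken from the text above; the stand-in names and data are made up):

```python
# Old API (hypothetical stand-in): asim.run_aurora(...) returned a tuple,
# so results were accessed by position.
def run_aurora_old():
    return ([1.0e12, 2.0e12],)    # out[0] held the impurity density 'nz'

# New API (stand-in): a dictionary, so results are accessed by name.
def run_aurora_new():
    return {"nz": [1.0e12, 2.0e12]}

nz_old = run_aurora_old()[0]      # what old user scripts did
nz_new = run_aurora_new()["nz"]   # what they must change to
assert nz_old == nz_new           # same quantity, now self-describing
```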
Here follows a brief summary of the main changes to the various files:
core.py: The various additional entries in the input namelist are loaded, and it is automatically checked whether the selected values are allowed. When calling the main Fortran algorithm, the inputs now also include the user-defined initial values for the density in all the various reservoirs (before, if I remember correctly, it was possible to set an initial value only for the plasma, while all the other reservoirs could only start from 0... which does not really make sense if one wants to solve an initial-value problem). Additionally, the Fortran algorithm now accepts many more inputs. The output of the call to the main Fortran algorithm is now a dictionary, no longer a tuple, and there are many additional keys related to the new fluxes/reservoirs. The function check_conservation has been renamed to reservoirs_time_traces (which is more consistent with its content), and in addition to checking particle conservation, it always displays all time traces, including the reservoirs which are not used (in which case they are just always zero). It is important to note that this is the default class which is called for simulations without the wall model, as before (because, if the wall model is desired, one has to call pwi.py, see later). Therefore, in this class all inputs related to the wall model are set to zero (or to values consistent with a deactivated wall model), and are passed as such to the Fortran algorithm.
pwi.py: This is a new file, containing the class which needs to be called (instead of core.py) when one wants to use the wall model. It inherits all the methods from core.py, and adds new ones needed to set up the wall model. In particular, we have the functions:
setup_PWI_model: starting from the inputs present in the namelist, it prepares all the fields to pass as input to the fortran algorithm, related to surface coefficients, impact energies, etc., mainly calling functions from the new preparation file surface.py (see later).
get_background_fluxes: it prepares the fluxes of the background species onto the wall, starting from user inputs, to be passed to the fortran algorithm.
get_impact_energy: it calculates the impact energies of the various fluxes onto the walls.
setup_rec_profs: it prepares the radial profiles of the sources, in the plasma, coming from neutrals released from the main wall (reflected, thermally released, or sputtered), as a function of the energies at which they are released.
And then there is again the call to the main fortran algorithm, which is the same as done in core.py, but with inputs related to the wall model not set to zero.
Also, there is the function PWI_time_traces which, in the same way as reservoirs_time_traces does, automatically plots all the main time traces (fluxes, surface densities, etc.) related to the wall model.
main.f90: Here, most of the added/edited lines are due to the numerous additional inputs to, and outputs from, the Fortran algorithm. In addition, there is the passing of these new inputs to both algorithms impden0 and impden1, and, in the subroutine edge_model, the additional calculations related to the extension of the recycling/pumping model, i.e., as mentioned before:
The fact that the parallel losses can go towards a divertor wall rather than directly into the divertor reservoir.
The possibility to define a recombination efficiency in the divertor.
The possibility to define a screening efficiency for the backflow from divertor towards plasma.
The possibility to define an additional pump chamber.
The possibility to set physical dimensions for the neutral reservoir(s), which allows the pumping to be defined through an engineering pumping speed rather than a pumping time constant.
impden.f90: Only here do we finally find the actual calculations which implement the wall model. The main idea is that, if the wall model is not activated, the fluxes from/into the walls and the wall surface densities are calculated in the old way. If the wall model is activated, instead, for both algorithms impden0 and impden1 these fluxes and surface densities are calculated by calling the same subroutine advanced_PWI_model, which is placed at the end of the file and consists of relatively few lines of code.
surface.py: This is a new file, which contains several functions for reading, interpolating and processing surface data (reflection coefficients/energies, sputtering yields/energies, etc.), starting from the appropriate data files, in order to use them as inputs for the code. Generally speaking, I wrote this taking inspiration from the atomic data counterpart atomic.py.
trim_files.py: This is a new file, which is used to load the appropriate surface data file for any type of interaction and for any impurity/wall material combination. I also wrote this taking inspiration from the atomic data counterpart adas_files.py. The various data files are present in the folder /aurora/trim_files; they are the same ones used by B2.5, within SOLPS-ITER, for modelling the plasma-surface interactions, and are extracted from an IPP report by Eckstein from 2002 (which is of course adequately cited). This is true for reflection, bulk sputtering and implantation depth. The data files for the sputtering yields/energies of impurities implanted within a given wall material, instead, were written on purpose for my PhD work, and, as already said, for now only the combination He-W is present.
All the new possibilities related to the wall model and to the extended recycling/pumping model are also adequately documented in further example scripts, placed in the /examples folder. There are also some example scripts on how to read and plot surface coefficients.
I guess that this is it, for now. I also quickly mention what I already have in the to-do list, once the review will be completed:
Re-introduce, in the online branch, the methods and functions which allow periodic modulation of the transport coefficients set for the simulation, in order to emulate ELMs, together with the various plotting/output functions related to this. Although I used this for my personal investigation, you asked me to leave it out of this PR for now, also in order to alleviate the review job a bit. However, since it was, apparently, the part of my results which caught the most attention from the audience so far, and since I really see this as a fundamental part of future integrated modelling investigations, I really think it will be worthwhile to put this online as well.
Generalize the wall model in order to simulate not only impurities implanted into the wall, but wall impurities as well. Namely, to be able to model the transport, in the plasma, of eroded wall species (C, Be, W, ...) in a self-consistent way, i.e. without the need of specifying an arbitrary edge source to match the experimental data.
I hope you will find this long and detailed summary useful for the review job! :)
| gharchive/pull-request | 2023-01-17T13:15:00 | 2025-04-01T06:44:14.941554 | {
"authors": [
"antonellozito",
"fsciortino",
"odstrcilt"
],
"repo": "fsciortino/Aurora",
"url": "https://github.com/fsciortino/Aurora/pull/75",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
207053088 | Garbled path info in the warning about missing directory
Here is the script that causes the issue
#I "some\missing\folder"
let x = 42
Some ways to see the issue:
(1) Open it in Visual Studio 2015 (F# Power Tools is installed). Hover the
mouse over the expected error or look at the error pane. Note that the tip or
warnings message look like:
Warning The search directory 'some\missing older' could not be found
(2) Open it in VSCode (Ionide-fsharp is installed). In the tips and the problem
pane the message looks like 'some\missingolder'. Copy/paste from the problem
pane gets this text with unexpected "♀" instead of expected "\F":
message: 'The search directory 'some\missingolder' could not be found'
(3) In my application using FCS the formatted message looks like this (same
unexpected "♀" instead of expected "\F"):
warning FS0211: The search directory 'some\missing♀older' could not be found
Not sure if this is a bug. In https://docs.microsoft.com/en-us/dotnet/articles/fsharp/language-reference/strings, \ is defined as an escape character, and \f seems to be:
> "\f".[0];;
val it : char = '\012'
This seems to be undocumented and seems to generate different output depending on the console used. From http://www.asciitable.com/ it seems to be the character that starts a new page (form feed).
I think you depend on undefined behavior here. You should use:
#I @"some\missing\folder"
let x = 42
or \\ instead of \.
However I'd be interested in clarification as well. /cc @dsyme. Maybe this should be posted in visualfsharp as well (Question on using \ on an undefined next character)? Or maybe documentation should be extended.
Just to add some context. C# (https://msdn.microsoft.com/en-us/library/aa691090(v=vs.71).aspx) defines it like this:
A character that follows a backslash character (\) in a regular-string-literal-character must be one of the following characters: ', ", \, 0, a, b, f, n, r, t, u, U, x, v. Otherwise, a compile-time error occurs.
Table is here https://msdn.microsoft.com/en-us/library/aa691087(v=vs.71).aspx
Imho this is a better default to what F# seems to be doing currently.
Thank you. It's my error, indeed. I define garbage, F# just shows it. I am closing this.
The reason for my silly error is that the script uses a missing path deliberately. With a real path, I would have noticed my error at once.
| gharchive/issue | 2017-02-12T14:24:34 | 2025-04-01T06:44:14.967073 | {
"authors": [
"matthid",
"nightroman"
],
"repo": "fsharp/FSharp.Compiler.Service",
"url": "https://github.com/fsharp/FSharp.Compiler.Service/issues/698",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
161043251 | Missing support for BigInteger?
Description
Using BigInteger (bigint) in an F# program does not seem to be supported by Fable.
Repro steps
Create file bigint.fsx with the following contents:
let x = 123I
let y = 35
printfn "%A ^ %d = %A" x y (System.Numerics.BigInteger.Pow(x,y))
Expected behavior
Should build with no warnings or errors. Should produce the following output (as shown using fsharpi):
don@spearmint:~/fable/bigint$ fsharpi bigint.fsx
123 ^ 35 = 14017769046734148148805244022693101859937312998675419766920141412796613507
Actual behavior
don@spearmint:~/fable/bigint$ fable bigint.fsx
Start compilation...
Cannot find replacement for Microsoft.FSharp.Core.NumericLiterals.NumericLiteralI.fromInt32 (L1,8-L1,12) (/home/don/fable/bigint/bigint.fsx)
Known workarounds
(None known.)
Related information
Operating system
don@spearmint:~/fable/trimend$ uname -a
Linux spearmint 3.16.0-4-amd64 #1 SMP Debian 3.16.7-ckt25-2 (2016-04-08) x86_64 GNU/Linux
Fable version 0.3.22
.NET Runtime, CoreCLR or Mono Version
don@spearmint:~/fable/trimend$ mono --version
Mono JIT compiler version 4.2.3 (Stable 4.2.3.4/832de4b Wed Mar 16 13:19:08 UTC 2016)
Copyright (C) 2002-2014 Novell, Inc, Xamarin Inc and Contributors. www.mono-project.com
TLS: __thread
SIGSEGV: altstack
Notifications: epoll
Architecture: amd64
Disabled: none
Misc: softdebug
LLVM: supported, not enabled.
GC: sgen
I would like to volunteer to help implement bigint and decimal support. I have some experience with implementing high-precision math libraries, and I am motivated to use Fable on my own project called Freefall, which requires bigint. Would you be supportive of me creating a pull request for this?
Does it make sense to make bindings for BigInteger.js instead?
@cosinekitty, I'll be more than supportive to anyone willing to contribute to Fable with a PR 😄
@7sharp9, we had a similar discussion when deciding how to add decimal to FunScript (which never happened in the end 😅). Probably, it would save work to use existing JS libraries for bigint and decimal. The main problem here is how to include the libraries with the generated code. I can think of three possibilities with their own pros and cons:
Leave the responsibility to the developer: I then foresee many bug reports because they forget to do it 😉 Maybe we can detect when a lib will be needed and throw a reminder with every compilation.
Add the library directly to the generated code: the problem then will be duplication. If you create two packages using bigint, each one will include its own copy of the library.
Try to add the reference automatically: this is complicated because Fable can be used in many environments, as you know very well, loading dependencies in different ways: in the web with require.js, in node with commonjs, in Fuse...
What do you think? I'm open to any ideas.
I know this question wasn't addressed to me, but here's my opinion anyway! All (supportable) classes from the standard .NET library should come included with Fable. New users should not have to struggle with the tools to get existing F# building. This will minimize any adoption barriers for new developers. Dependencies on external projects create opportunities for things to break at unpredictable times, so I would favor bringing all core functionality under the umbrella of this one project. Trying to reduce the core library size dynamically would be at the very bottom of my list of priorities: first get stuff working, then optimize if/when it is ever needed.
I'm willing to take on implementing bigint and decimal types and contribute the code to this project under its existing license.
I think that Fable should only support a subset, just like F# does with Android and iOS, both of which don't have reflection or bigint.
FSharp.Core is, after all, target-based, and Fable just needs to support a particular footprint; perhaps an FSharp.Core for Fable is what should be added. I believe this is the correct course of action as it matches existing logic.
@cosinekitty, the question was not meant for anybody in particular, I'm more than happy to hear everybody's opinion :)
I agree that we should try to support as many classes as possible from .NET, but it's also a matter of resources and what we should prioritise.
In any case, my question wasn't exactly about whether to implement bigint but how to implement it. I agree with @7sharp9 that using an existing JS library would save work; I was just wondering how we should add the dependency. I'm considering now that maybe we can add BigInteger.js as a dependency to fable-core so npm will always download them together (if I'm not mistaken, it would be a peer dependency in that case), and then the developer can use a bundler to actually add the library to the app or not, depending on whether it's indeed required.
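As a later footnote on feasibility: modern JavaScript engines ship a native BigInt primitive. This postdates the discussion above (which is why BigInteger.js was the candidate at the time), but it reproduces the example from the original report directly:

```javascript
// Native JavaScript BigInt (ES2020) reproduces the F# example 123I ** 35.
// This primitive did not exist when the issue was filed, hence the
// discussion of BigInteger.js as a backing library.
const x = 123n;
const y = 35;
console.log(`${x} ^ ${y} = ${x ** BigInt(y)}`);
// 123 ^ 35 = 14017769046734148148805244022693101859937312998675419766920141412796613507
```

The expected value matches the fsharpi output quoted in the issue.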
Sorry, I'm closing this as there are no plans to provide support for BigInteger and Decimal at the moment. If you want to give it a shot, please send a WIP PR and we can work together there. Cheers!
| gharchive/issue | 2016-06-18T21:02:58 | 2025-04-01T06:44:14.992483 | {
"authors": [
"7sharp9",
"alfonsogarciacaro",
"cosinekitty"
],
"repo": "fsprojects/Fable",
"url": "https://github.com/fsprojects/Fable/issues/205",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
42750865 | Handle NuGet Build .props .targets
Unless I'm mistaken, Paket does not yet hook in .props and .targets files as NuGet has done since 2.5; I believe it should.
The primary reason is that, more often than not, it presents an appropriate way for packages to manage adjustments to the build process such as this
Key differences between such .props (imported at top of .*proj) / .targets (imported at bottom of .*proj) and the .ps1 are:
no reliance on VS
can deal with TargetPlatform etc. without silent problems and/or need to reinstall packages
doesn't induce diffs when switching build output type
Note that .props is wired into VS with an editor for editing MSBuild Properties entries.
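For concreteness, the NuGet convention in question is that a package ships build/&lt;packageId&gt;.props and build/&lt;packageId&gt;.targets files, which NuGet 2.5+ imports near the top and bottom of the consuming project file respectively. A minimal illustrative sketch (the package and property names here are hypothetical, not from Paket):

```xml
<!-- build/MyPackage.props: shipped inside the package; NuGet adds an
     <Import Project="...\MyPackage.props" /> near the top of the .*proj.
     "MyPackage" and the property below are illustrative only. -->
<Project xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <PropertyGroup>
    <MyPackageEnabled Condition="'$(MyPackageEnabled)' == ''">true</MyPackageEnabled>
  </PropertyGroup>
</Project>
```

The corresponding .targets file follows the same pattern and is imported at the bottom of the project, which is the natural place to hook build steps.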
conversation is happening at #516
| gharchive/issue | 2014-09-15T08:34:58 | 2025-04-01T06:44:14.996697 | {
"authors": [
"bartelink",
"forki"
],
"repo": "fsprojects/Paket",
"url": "https://github.com/fsprojects/Paket/issues/101",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
911210389 | General question about project interaction with fsspec?
Hi,
I am discovering this project and would like to know about its status?
It is presented as a fsspec implementation.
Is it connected to fsspec and can files be managed on Google Drive by using fsspec?
With fsspec, can we pip install fsspec[gdrive] ?
I worked with Google Drive API v3 a while ago, relying on a service account to provide a connector to Google Drive within the cryptostore project (you can see the documentation page here).
Is gdrivefs able to manage connections via a service account? (Wondering whether this connector I made could be replaced with fsspec / gdrivefs?)
Thanks in advance for any feedbacks!
Bests
This repo was indeed designed to be the gdrive implementation for fsspec. It is incomplete and untested, but it seems to work for at least some uses. Indeed, we don't really know how to set up any kind of tests, except for vcrpy (as gcsfs uses), which is a painful process. gdrive is not really mass storage.
With fsspec, can we pip install fsspec[gdrive]
No, this is not in the options list for fsspec, because of its very alpha status (although it is listed in the fsspec docs).
Is gdrivefs able to manage connection by service account?
I don't know! Certainly, if you have code that can authenticate using service accounts, it would be a useful addition here.
Thanks for your feedback @martindurant !
Certainly, if you have code that can authenticate using service accounts, it would be a useful addition here.
Ok, I am keeping this on my radar! I won't promise it anytime soon, but at some point in my own project, I will have to deal again with connecting to cloud storage. I feel doing it with gdrive in this project might benefit others as well.
Thanks again!
gdrive is not really mass storage
10 GB for free is quite decent I feel :) ah ah, but OK, it's not in the TB scale...
10 GB for free is quite decent
I meant more in the sense of latency, concurrent connections and throughput. It being free for people not willing to pay the big cloud vendors is a big plus. Be sure to also check out the fsspec dropbox implementation, which is more complete.
Hi @martindurant. Could you give a short update on the maturity and status of this project? Should it still be considered experimental? Or stable and mature? Or somewhere in between? Thank you.
Thanks! I don't suppose there is a list of supported workflows etc. for various filesystems etc.?
There are 24 built-in implementations. Are those fully supported in terms of fsspec.gui? Or does the same apply as for the gdrive external package?
24 Built-in: API Reference — fsspec 2024.2.0+1.g510ca5d.dirty documentation
15 Through external packages:
https://filesystem-spec.readthedocs.io/en/latest/api.html#external-implementations
Do people use other GUI-based utilities with fsspec, that are perhaps more popular / de facto standards? Or does every one just use code?
To me it seems like
Related discussion reference for others: https://github.com/fsspec/gdrivefs/pull/42
| gharchive/issue | 2021-06-04T07:27:05 | 2025-04-01T06:44:15.022807 | {
"authors": [
"Coderambling",
"martindurant",
"yohplala"
],
"repo": "fsspec/gdrivefs",
"url": "https://github.com/fsspec/gdrivefs/issues/23",
"license": "BSD-2-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
799733102 | Type 'Optional' does not conform to protocol 'AnyArrayOrSet'
PredicateKit fails to build in Xcode 12.5 beta, with the following error in Predicate.swift at line 622:
Type 'Optional' does not conform to protocol 'AnyArrayOrSet'
Thanks for reporting this. I'll look into it.
Fixed in 1.3.0.
| gharchive/issue | 2021-02-02T21:56:35 | 2025-04-01T06:44:15.024869 | {
"authors": [
"ftchirou",
"joel-perry"
],
"repo": "ftchirou/PredicateKit",
"url": "https://github.com/ftchirou/PredicateKit/issues/4",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
260422007 | Failure message does not update.
The Jelly color changes to Red when jamf.log reports "Installation failed. The installer reported: installer: Package name is..."
Then "(AppName) failed to install. Support has been notified." is displayed in Red at the bottom of the window.
If the pkg is reinstalled successfully while splash buddy is running, the Jelly turns Green once the install succeeds. However, the Error Message in red at the bottom is not cleared. This is not accurate or desired.
Hi Matt, it is solved in master but not yet in 1.1
| gharchive/issue | 2017-09-25T21:32:45 | 2025-04-01T06:44:15.026774 | {
"authors": [
"ftiff",
"matthewsphillips"
],
"repo": "ftiff/SplashBuddy",
"url": "https://github.com/ftiff/SplashBuddy/issues/42",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
2710209512 | Support Request for T8531 Smart door camera lock
Support request for T8531 Smart door lock with Camera
A clear and concise description of what you expected to happen.
On enabling the add-on and the HACS integration I get only 2 active sensors: Debug (Device) and Debug (Station). I expect to see a camera stream and Lock/Unlock options. I tried this on HA Supervised and Core versions (with the Eufy-Security-Ws docker) with the same output.
Additional information
Go to Settings -> System -> Repairs -> Click on 3 dots (...) -> System Information and get Version and Installation Type to below fields;
Home Assistant Installation Type (OS - Supervised - Core): 13.2
Home Assistant Core Version: 2024.11.3
Eufy Security Add-on Version: 1.9.1
Eufy Security Integration Version: 8.1.0
Using go2rtc Add-on or webrtc custom integration with integrated go2rtc: Webrtc 3.6.0
Hardware Information;
Camera Model: E330 T8531
Live Streaming Protocol (RTSP/P2P): Not working. Do not see the lock or camera
Debug Logs from Home Assistant (https://github.com/fuatakgun/eufy_security/discussions/624) - Without this, I will automatically resolve this issue
Logs from Eufy Security Add-on (either from Add-ons page of Home Assistant or Docker output):
'66b7a5a8c0aa22b5184de79b96f78172d7e31ef8d52d': {
house_id: '66b7a5a8c0aa22b5184de79b96f78172d7e31ef8d52d',
user_id: '66b7a5e56c383d3adb4c7d3f7ab3e909d8f8d52d',
admin_user_id: '66b7a5e56c383d3adb4c7d3f7ab3e909d8f8d52d',
role_type: 2,
house_name: "kee***'s Home",
is_default: 1,
geofence_id: 0,
address: '',
latitude: 0,
longitude: 0,
radius_range: 0,
location_msg: 0,
create_time: 1733102964,
away_mode: 0,
home_mode: 0
}
}
}
2024-12-02 01:31:04.335 DEBUG eufy-security-ws:eufy-security-client [http] [HTTPApi.request] Api request {
method: 'post',
endpoint: 'v2/house/station_list',
responseType: undefined,
token: '5885f7dc6590f7728a133c1b897389ba635709c232f3802a',
data: {
device_sn: '',
num: 1000,
orderby: '',
page: 0,
station_sn: '',
time_zone: -18000000,
transaction: '1733103064334'
}
}
2024-12-02 01:31:04.424 DEBUG eufy-security-ws:eufy-security-client [http] [HTTPApi.request] Api request - Response {
token: '5885f7dc6590f7728a133c1b897389ba635709c232f3802a',
request: {
method: 'post',
endpoint: 'v2/house/station_list',
data: {
device_sn: '',
num: 1000,
orderby: '',
page: 0,
station_sn: '',
time_zone: -18000000,
transaction: '1733103064334'
}
},
response: {
code: 0,
msg: 'Succeed.',
data: 'OdCHHRDhY6x95lFsSXDlezreDTc89WKXQG5gcJYkrK+WshL1s1ybItoJPKiXvAg6uxVeteFNzmUqFLtWtu3GzFrbraDVAnLTXMEZbcJJhNqS6crmYROFDqk+a3nc7HYifvdeQqws0Nnhvx3388AmHuooGiEMtG/pYezskKS3xgRcuuq646FTQYDFFRL09vKnDtv5hqLFmpklnGLq7vZQG+VsrhcYAGCNR9yZ07zZhkNFxrIyBFnfeWDFNZXpUhMGh592D4Q6yoNNqY6X+H7PiZbSi8QlOKzCgndViu4rrvYFwan+jqLLk+hq0P1OuapElwiEmO9bpQeMDRP4z32iEAAe0ppQvndpST/0ogxZRPdBH9RzpX5OZEMIYu3u3Eel7dG8xzGqyQDGBDsO6SBEQaQl7L1GFavFPMPUzv6VZjuEF5CGAv5RyZ58JEJgBmHPaQbLsLpYtA0pnTLVqsB/CbbQ7Uk4Bj6kVGHVBPx4sDHa90zIeDHR064yu4eTAHM4ZbyVAdiadNW6JvOY7aMPLbQMRnGMfebJOTccqQt/yBVwcKfqQckxjWuYuU+r6b1Mc7RQPxROSLT1XWjcGOPCh6xaApgK9IdQY/rCZcZNamzqlp1LoMbZoRb9sb5uEaD+RgU5F01A77/4f3muM3s+At6P9jieA0woHTEuohLG1tRSY66mJmt5ZT21Dk/4pNFoNmC+JXX9K4AKbmEsmSkuMDDOqJLlRMCIZVeklqcAiKkUIcmgv0nYObh8qlGppURN62xoOtivXXKHl0Yx0Hpk5/++EmEjniIS1D5ZWqUMIvau5Bm8JG1Le0K5bNv08gxEFyPp6HOdItfaT5QdzCG/SixxOo7zD2lk1oVpQognHvZ3lpa8SvRXQi96F3RTgAGpHbS3Xj95HPYpVW3v64k0jDD3nqsTqyMFNNt1eWTUe3xSMb+KTEkJVBfewxOb4PcIo8XeiwF/bM3hxUVSvFiP/V2ZMj5BlIceiTnCtuj5MN1gVvfReXoMS3p9M8+XAigFwHCj0hTCMeBgS/CWIXx97ST1XcSh9NLZ0+Od1/9BDmqueHKPu+FRPEq4DlTgTFEq91G6o6JCMSZN9oeyM6MorTpcZhnmxmpXwmusmIPug4qmnR5F/aSlBmQsm2Xfsr2rpb9Cnmvxi9cd6D8HaaXgWcfeBq5fj8ZppK2uIpeLvOQ5e/xsKqApw9NzTKqpk880sXMvP3lzshUfboB63KJoXMGC2zHOAzc+89DxJTAAkLIxSkZ+HkEa/SKJqRE0LrMiRPmgpaR0faoJtoWB1zFMjXohbgWvR3XtaGY47U9LzbL9E2Sit8caiqV4A7FeLBJPQzgTrWwbfbbWmD+cfQvMg4MeqC+BZ3aDU5+NDwPbAeJ1SI+IWHRuj+0LkMLY24VKnpIC/LexMPoApPdx027VdS3o+7giy5OTSf4P6a8SduJqrRSvKZNLHk7JRO9tM1r9c087DEXyraIeMCWWOpst2QVDkoPp0CnPmXrvtNkMzcC4QVx+eEpr+Ff2LXOt4cp+2uE8Mjpatv2F0vWS1amQSedPqL+pmvMJH9Gw7r09PZcrEutf1L2j0fNXFoGQItXzK84nWezIoKJ9cEh26XQRIH7IRF8ENqikFwwBe2h0tUPaGJ9DCzhtuLoCloLYKAMcKtqpGewanvNnthvCNqqM5TyG+UbzXwEbGBDmpFUiAa/+lzXD1WIqlO9XEYBJh3H5KEEPOff1Vx3J9pWIq4L0sDQ5K6zvaRj+SMJrhaYcp/LdPDHLJAzgx6+FklMy2EDtqKUN1mGhr2nVI9QaRxjvMJi5YFc3a25/V0fWp1iV6y0F3HrEO3FVZXcD9lYefR/ABPSJCtZkPP0L7nv/XYYYs4pdy4OoGix9pRSUkT3g31IUPx6wgFTiTAeCTGPFp7MMmQJX5krDkdRzX7L95tjs1RBnsVwnJi7yxwpzln5PCqIicEMdjznw8SSfMpwEI72h9b8D3ohP4
y4An3HnTqSz8SzP7xQEsv3AYDCO/2aaIDJX5UQ13qyaJylycAHa21JCj/+gbhMJHf0YU/3UP4vEu+fMqoJ5SRb2AbG1ONYYxVAEkTKn4Q5rNFwkgGXoUhblybRbZ3Mvdw1cdb8w/0KexNhVr0Pf2py6J+4XljsIZRwOieRb6WJ/ZdUtdV5mpmiKEUFBDjPoZyBk3GH6BA4FojkMKdtYkrOKpjnsebWzGAeEBWDZyjaVBNjzOLqRaX/SCHCZ2Lban6CFCjXOYQqe7aiuqtVxTDttFRDkzgalgU7NM4YSsk4fODwbrSRYics/uD2+NFBxI1mgO4xQPKiaT7AmdGjyb5UTtQs/OQjyTt3JpzAYF282J98PYCzq1jfViRwUGEKyNxJR/7e/xLVhg62/sJreiE1P06FMfBABuFBXXm1exkoAoV5lCBtzKV5MxZ9LeIyvjEPGA07MGMOypZJhBHhozmaDvxG0DbfVCguu++2YYMvUYP1LOFex1umgP0H9D/Cm/qtLxQV9UwRP0b9B926jMXC0nqbR/BPZw4OYN9zT+L45S7DRvEU2wektDt68B8+HDBzX+GPyD2BQNfk2HRdeo+muZA2OwM5RJsv8KzlFJHdmm2vtIvYNNf3g8X6ZPDlbPBnrmGfRX37rR34qOaBPhpFhUNAc9jUhgkinB7HF/mf4CucF6BN0Y7YS3SMYHWlhoa+3VJa5civgmQ9DvOl0qMyeBQGw52LT0R1rmfzSrObSzWnbMa5CbN+ZL6rLHT2wAExIEEVfcOe0vo5RZW+pl3pypUTay/WjlFt8kwCXDmN/rBidFzrv1UfHtTgBJbOFdJzR/ki+oYjSwiXudSAa2aWn/oIQvntEVnCtZ0nKIGLqRD8hs/KoucO6nSdAwGGQPef9EoqLoxnEVvRJHXKG8oZwHUnh7ORCln7IoW7M83UE2Rr+9Bh6fy3fPJoVGUF1jHU10pk26BQE0hckx8qHCcKsQBdYkfi77KHaoAl95LgzERPYxfodfAO3p+DjKgDCPCU+9l0fgVVguHZgEWO19g+pQvKybUQsg690XrYF/rhYaG1k+QvqPGap/7WSRAHG7zcs6VAx8fHf3rjpTre6Kr2I6ZsAXktelMTqG7Rj52lXrxfJ6oMRg7JdfwG76JU7L1heny39JZSVBDd/pHe86RVgi8FHw5hW3dORltrANPEcsnRC0x8R4Y8AH91yiA0AjWbEshnGw1ZjUIUvRXSMXdn0edOHfNImf2C8ge9V//nH5gr9HzBSx1619+UecQbayRy0bXsGJGYSrCqojs2FySPSE/FRyyusgHpW5rsb9Z42tg3XJurV2iRMp6tsRwPToE7a8A2pG2/c5nEsQ3/m+2r5iyx8C0Pg68LdlChIiroJ49iRca4ObyE8PQQbRKpFl+1D3BvzHW3yThtZ7rvQ6c5S0lo1CRUWtO2onmuNqs8h8uSJMIecyXzMAMjQ2sSwYMV1RbKH4mw8r4QMI3BOjkb32ItJSWmTtjw33pfkRqr3l3K+FIAieJwcRwdHDkytCd7lEc+/qN9ipvpkcrMWnqzmIOyhVjGC8Fqmk8b9p4kpE/+TvaIN8xBaoJCjdbKtT3UgStQnDO3fZkxux5e7qQsWqpIhPiSWS4GJvPrhuN9VNBcSFs+sz93xpZVBbtEsDzCvfuq867jsu8od3vYr5eFjWYAgUiUQZVqYlqge6R7/otgGZo5RGIfKVHhxGTACJI+ItlxPG3Sp+N7pbu4t2x1s3DTZVXqJEgK3PlIrxGQ1NMtG4j+/0wNXOzZpk9pfiNzl4dDjNit6NkcSz4Jbq8oz1Ujl4TODu64JCiDN7TJ3EUNIjhb/k7Oho7WbeZ0T3sr7bfZcHjpI0dLmIPkdo41MGLqp2iYV3cEiwvQx+s62jBwxzcIuA3XC0VUpASsuog6o5Np0fOGVbIyxTBHWY5aPw8wEbJfsvdHqNWpnMSoGZfbmUKPN3WWv'
}
}
2024-12-02 01:31:04.426 DEBUG eufy-security-ws:eufy-security-client [http] [HTTPApi.getStationList] Decrypted station list data [
{
station_id: 1130668997,
station_sn: 'T8531K1024290ABA',
station_name: 'Front Door',
station_model: 'T8531',
time_zone: 'EST5EDT,M3.2.0,M11.1.0|1.1159',
wifi_ssid: '',
ip_addr: '192.168.30.12',
wifi_mac: '102CB150F1B3',
main_sw_version: '0.1.3.9',
main_hw_version: '0.0.0.3',
sec_sw_version: '0.4.4.5',
sec_hw_version: 'AT+ANKER_GET_HW_VER V2',
ai_kernel_version: '1.1.2',
ai_rootfs_version: '1.1.9',
ai_algor_version: '1.1.2',
sec_sw_time: 1733090346,
ai_kernel_time: 1733085149,
ai_rootfs_time: 1733085149,
ai_algor_time: 1733085149,
bt_mac: '102CB1A6444D',
device_type: 189,
event_num: 12,
sku_number: '',
lot_number: '',
cpuid: '62e8d76eca856b3c',
create_time: 1722406636,
update_time: 1733103025,
status: 1,
station_status: 0,
status_change_time: 0,
p2p_did: 'USPRBMA-187385-VKNVV',
push_did: 'USPRBMA-187385-VKNVV',
p2p_license: 'XZWNJJ',
push_license: '',
ndt_did: 'USPRBMA-187385-VKNVV',
ndt_license: '',
wakeup_flag: 7,
p2p_conn: 'EFGBFFBKKEJKGGJJEJGIFKEGHBMPHBJJCFENBNHDBDNMOPOBGNAIHMODCMLOINOGAONEPFDALPINAJDJMNJIJNFKMIOHAHCNEGCKHLBCJKLLEEDKAOBDPEJDGN',
app_conn: 'EBGCFGBNKDIHGMIMFJHMFCFAGHMPGPMEGEEOADDDBFIAKOLICPBPCLOOHCKGJGKFBHNGLKCCPDNFAN',
query_server_did: '',
prefix: 'USPRAMD',
wakeup_key: 'N2VlNTQ2OTAwNzY5',
signaling_servers: [
'https://webrtc-signal-us.eufylife.com',
'https://3.140.98.24'
],
member: {
family_id: 0,
station_sn: 'T8531K1024290ABA',
admin_user_id: 'f6c82f38400b2e8a6a97aee07d2e3b59764ee24f',
member_user_id: '66b7a5e56c383d3adb4c7d3f7ab3e909d8f8d52d',
short_user_id: '0005',
member_type: 1,
permissions: 0,
member_nick: '',
action_user_id: '',
fence_state: 0,
extra: '',
member_avatar: '',
house_id: '',
create_time: 1733102966,
update_time: 0,
status: 0,
email: '',
nick_name: '',
avatar: '',
action_user_email: 'ravilesh@yahoo.com',
action_user_name: 'ravilesh'
},
params: [
{
param_id: 0,
station_sn: 'T8531K1024290ABA',
param_type: 1140,
param_value: '1',
create_time: 1733090307,
update_time: 0,
status: 1
}
],
devices: [
{
device_id: 1141292399,
is_init_complete: true,
device_sn: 'T8531K1024290ABA',
device_name: 'Front Door',
device_model: 'T8531',
time_zone: 'EST5EDT,M3.2.0,M11.1.0|1.1159',
device_type: 189,
device_channel: 0,
station_sn: 'T8531K1024290ABA',
schedule: '',
schedulex: '',
wifi_mac: '102CB150F1B3',
main_sw_version: '0.1.3.9',
main_hw_version: '0.0.0.3',
sec_sw_version: '0.4.4.5',
sec_hw_version: 'AT+ANKER_GET_HW_VER V2',
sector_id: 0,
event_num: 0,
wifi_ssid: 'DDMM',
ip_addr: '192.168.30.12',
ai_kernel_version: '1.1.2',
ai_rootfs_version: '1.1.9',
ai_algor_version: '1.1.2',
sec_sw_time: 1733090346,
ai_kernel_time: 1733085149,
ai_rootfs_time: 1733085149,
ai_algor_time: 1733085149,
bind_time: 1733090341,
bt_mac: '102CB1A6444D',
local_ip: '',
language: 'en',
sku_number: '',
lot_number: '',
cpuid: '62e8d76eca856b3c',
create_time: 1722406636,
update_time: 1733103025,
status: 1
}
],
sensor_info: null,
is_init_complete: true,
virtual_version: '',
house_id: 'f6c82f6ae2481127db40cb832f99f44c7b53654ee24f'
}
]
Would you share your device with me in case further debugging required? (Yes/No): Yes
Log from HA
This error originated from a custom integration.
Logger: homeassistant
Source: custom_components/eufy_security/eufy_security_api/web_socket_client.py:63
integration: Eufy Security (documentation, issues)
First occurred: 5:30:46 PM (12 occurrences)
Last logged: 9:00:39 PM
Error doing job: Exception in callback WebSocketClient._on_close(>) (None)
Traceback (most recent call last):
File "/usr/local/lib/python3.12/asyncio/events.py", line 88, in _run
self._context.run(self._callback, *self._args)
File "/config/custom_components/eufy_security/eufy_security_api/web_socket_client.py", line 81, in _on_close
self.close_callback(future)
File "/config/custom_components/eufy_security/eufy_security_api/api_client.py", line 317, in _on_close
_LOGGER.debug(f"on_close - executed - {future} = {future.exception()}")
^^^^^^^^^^^^^^^^^^
File "/config/custom_components/eufy_security/eufy_security_api/web_socket_client.py", line 63, in _process_messages
async for msg in self.socket:
File "/usr/local/lib/python3.12/site-packages/aiohttp/client_ws.py", line 384, in anext
msg = await self.receive()
^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/aiohttp/client_ws.py", line 315, in receive
msg = await self._reader.read()
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/aiohttp/streams.py", line 685, in read
return await super().read()
^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/aiohttp/streams.py", line 644, in read
await self._waiter
asyncio.exceptions.CancelledError
| gharchive/issue | 2024-12-02T01:56:36 | 2025-04-01T06:44:15.070390 | {
"authors": [
"asan2020"
],
"repo": "fuatakgun/eufy_security",
"url": "https://github.com/fuatakgun/eufy_security/issues/1263",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1096569100 | Eufy Doorbell
BTW, thanks for your help earlier, much appreciated.
Now installed and I get a pic of the last event (I think).
However there is no live stream.
The camera I am using is the Eufy Wireless Doorbell, and looking at the list of supported devices this isn't listed; is this correct? Also, I have found out that the wireless doorbell does not support RTSP, so am I correct in saying that it will not work?
Thanks again for your help.
Have got slightly further after going thru issues, and have installed WebRTC - however, when I try and create the cards it says 'No card type found' as below:
Any suggestions? Many thanks.
Closed - as it now works after I installed WebRTC and created the conditional cards. Groovy!
| gharchive/issue | 2022-01-07T18:36:21 | 2025-04-01T06:44:15.073485 | {
"authors": [
"geidorei"
],
"repo": "fuatakgun/eufy_security",
"url": "https://github.com/fuatakgun/eufy_security/issues/207",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2393157916 | Navigation go and replace actions do not work reliably
So when using go and replace actions while navigating, two things happen:
extent changes but with delay
extent does not change at all
Versions:
smooth_sheets: 0.8.1
go_router: 14.2.0
(Video demonstrating this; not included.)
Blue screen has 1 proportional extent - 0.5
Red screen has 1 proportional extent - 0.9
Code
import 'package:flutter/material.dart';
import 'package:go_router/go_router.dart';
import 'package:smooth_sheets/smooth_sheets.dart';
void main() {
runApp(MyApp());
}
final transitionObserver = NavigationSheetTransitionObserver();
class MyApp extends StatelessWidget {
MyApp({super.key});
final router = GoRouter(
initialLocation: "/blue",
routes: [
ShellRoute(
observers: [transitionObserver],
builder: (context, state, child) {
return NavigationSheet(
transitionObserver: transitionObserver,
child: Material(
color: Theme.of(context).colorScheme.primary,
borderRadius: BorderRadius.circular(16),
clipBehavior: Clip.antiAlias,
child: child,
),
);
},
routes: [
GoRoute(
path: "/blue",
pageBuilder: (context, state) {
return ScrollableNavigationSheetPage(
initialExtent: Extent.proportional(0.5),
minExtent: Extent.proportional(0.5),
maxExtent: Extent.proportional(0.5),
child: PageBlue(),
);
},
),
GoRoute(
path: "/red",
pageBuilder: (context, state) {
return ScrollableNavigationSheetPage(
initialExtent: Extent.proportional(0.9),
minExtent: Extent.proportional(0.9),
maxExtent: Extent.proportional(0.9),
child: PageRed(),
);
},
),
])
],
);
@override
Widget build(BuildContext context) {
return MaterialApp.router(
routerConfig: router,
title: 'Flutter Demo',
theme: ThemeData(
colorScheme: ColorScheme.fromSeed(seedColor: Colors.deepPurple),
useMaterial3: true,
),
);
}
}
class PageBlue extends StatelessWidget {
final List<String> items = List<String>.generate(20, (index) => "Item ${index + 1}");
@override
Widget build(BuildContext context) {
return Scaffold(
appBar: AppBar(
title: Text('Blue (0.5)'),
backgroundColor: Colors.blue,
),
body: ListView.builder(
itemCount: items.length,
itemBuilder: (context, index) {
return InkWell(
onTap: () {
context.go("/red");
},
child: ListTile(
title: Text(items[index]),
),
);
},
),
);
}
}
class PageRed extends StatelessWidget {
final List<String> items = List<String>.generate(20, (index) => "Item ${index + 1}");
@override
Widget build(BuildContext context) {
return Scaffold(
appBar: AppBar(
title: Text('Red (0.9)'),
backgroundColor: Colors.red,
),
body: ListView.builder(
itemCount: items.length,
itemBuilder: (context, index) {
return InkWell(
onTap: () {
context.go("/blue");
},
child: ListTile(
title: Text(items[index]),
),
);
},
),
);
}
}
Hi @samuelkubinsky,
I recommend specifying a key for each *Page in the go_router configuration, as the Navigator may reuse an existing Route object if possible, resulting in the transition animation not running.
pageBuilder: (context, state) {
return ScrollableNavigationSheetPage(
key: state.pageKey, // <-- this
Thanks, that helped. I don't think this question belongs here, but I didn't find an answer anywhere. How can I get rid of the "push" animation when replacing a route?
"push" animation when replacing a route?
I'm not sure what the push animation means. Can you post a screenshot or something?
You can specify a custom transition builder to *NavigationSheetPage.transitionsBuilder in the constructor. This page could be helpful to implement the transition builder.
Ok, my bad. With the default APIs, context.go is treated with an instant animation, but with this library I need to use context.replace, which is not a big deal; however, if I use replace, the extent is not animated.
Code
import 'package:flutter/material.dart';
import 'package:go_router/go_router.dart';
import 'package:smooth_sheets/smooth_sheets.dart';
void main() {
runApp(MyApp());
}
final transitionObserver = NavigationSheetTransitionObserver();
class MyApp extends StatelessWidget {
MyApp({super.key});
final router = GoRouter(
initialLocation: "/blue",
routes: [
ShellRoute(
observers: [transitionObserver],
builder: (context, state, child) {
return NavigationSheet(
transitionObserver: transitionObserver,
child: Material(
color: Theme.of(context).colorScheme.primary,
borderRadius: BorderRadius.circular(16),
clipBehavior: Clip.antiAlias,
child: child,
),
);
},
routes: [
GoRoute(
path: "/blue",
pageBuilder: (context, state) {
return ScrollableNavigationSheetPage(
key: state.pageKey,
initialExtent: const Extent.proportional(0.5),
minExtent: const Extent.proportional(0.5),
maxExtent: const Extent.proportional(0.5),
child: PageBlue(),
);
},
),
GoRoute(
path: "/red",
pageBuilder: (context, state) {
return ScrollableNavigationSheetPage(
key: state.pageKey,
initialExtent: const Extent.proportional(0.9),
minExtent: const Extent.proportional(0.9),
maxExtent: const Extent.proportional(0.9),
child: PageRed(),
);
},
),
])
],
);
@override
Widget build(BuildContext context) {
return MaterialApp.router(
routerConfig: router,
title: 'Flutter Demo',
theme: ThemeData(
colorScheme: ColorScheme.fromSeed(seedColor: Colors.deepPurple),
useMaterial3: true,
),
);
}
}
class PageBlue extends StatelessWidget {
final List<String> items = List<String>.generate(20, (index) => "Item ${index + 1}");
PageBlue({super.key});
@override
Widget build(BuildContext context) {
return Scaffold(
appBar: AppBar(
title: const Text('SmoothSheets Replace'),
backgroundColor: Colors.blue,
),
body: ListView.builder(
itemCount: items.length,
itemBuilder: (context, index) {
return InkWell(
onTap: () {
context.replace("/red");
},
child: ListTile(
title: Text(items[index]),
),
);
},
),
);
}
}
class PageRed extends StatelessWidget {
final List<String> items = List<String>.generate(20, (index) => "Item ${index + 1}");
PageRed({super.key});
@override
Widget build(BuildContext context) {
return Scaffold(
appBar: AppBar(
title: const Text('SmoothSheets Replace'),
backgroundColor: Colors.red,
),
body: ListView.builder(
itemCount: items.length,
itemBuilder: (context, index) {
return InkWell(
onTap: () {
context.replace("/blue");
},
child: ListTile(
title: Text(items[index]),
),
);
},
),
);
}
}
You can create custom transition animations by specifying a custom transition builder to *NavigationSheetPage.transitionsBuilder in the constructor. This page could be helpful to implement the transition builder.
Thanks for the tip. I tried some animations and I got them to work but I ultimately lost pop by user gesture. Wouldn't it be better if this library used default animations?
(Video; not included.)
(Ignore extents not animating in default implementation, I had to use DraggableScrollableSheet just for demonstration)
Can you tell me the expected behavior? Also, I want to know the code of your transitionsBuilder.
I'm not sure which comment you are referring to, so I'm going to describe both issues.
1.) On a context.replace invocation, the expected behaviour would be the new screen replacing the old one with no animation, but with the extent animating to the initialExtent of the new screen.
2.) Here I did not use any custom transitions. In Default Push and Default Pop I used DraggableScrollableSheet with MaterialPage to demonstrate expected screen transitions. In SmoothSheets Push and SmoothSheets Pop I used NavigationSheet with ScrollableNavigationSheetPage to demonstrate current screen transitions.
I tried some animations and I got them to work but I ultimately lost pop by user gesture.
Oh, sorry. I replied to the above comment.
Here I did not use any custom transitions. In Default Push and Default Pop I used DraggableScrollableSheet with MaterialPage to demonstrate expected (native) screen transitions. In SmoothSheets Push and SmoothSheets Pop I used NavigationSheet with ScrollableNavigationSheetPage to demonstrate current screen transitions.
You mean the expected behavior is that the sheet uses the native (material or cupertino) transition animation, while keeping the sheet height animation enabled? If so, the following custom transition builder could be a solution (not tested yet):
final theme = Theme.of(context).pageTransitionsTheme;
return theme.buildTransitions<T>(ModalRoute.of(context) as PageRoute<dynamic>, context, animation, secondaryAnimation, child);
wouldn't it be better if this library used default animations? So out of the box it behaves as you would expect and if you want something custom, then you can tweak it to your liking.
I agree with this idea. I can't remember why I didn't do so :(
Provided code works excellently, thanks! And I think this should be a default transition. So the only issue I'm having with this library is this one:
1.) On context.replace invoke, expected behaviour would be new screen replacing old one with no animation, but extent animating to initialExtent of new screen.
To be fair, extent is animated but only sometimes - after some manual scrolling in a screen.
![Video]
So the only issue I'm having with this library is this one:
https://github.com/fujidaiti/smooth_sheets/issues/188#issuecomment-2211964927 On context.replace invoke, expected behaviour would be new screen replacing old one with no animation, but extent animating to initialExtent of new screen.
Can't we simply use context.go? Or is there a reason it has to be replace?
go uses transition between pages whereas replace does not. In my app, I have bottom navigation bar from which I wanted to switch between sheet screens without transition, as this is default behaviour on iOS (UITabBarController).
I still think this could be fixed but I ultimately gave up on Cupertino transitions and started using ZoomPage on all platforms as it looks better in a sheet. Thanks for your wonderful work!
I will keep this issue open as a bug report, not a question.
| gharchive/issue | 2024-07-05T21:23:58 | 2025-04-01T06:44:15.116164 | {
"authors": [
"fujidaiti",
"samuelkubinsky"
],
"repo": "fujidaiti/smooth_sheets",
"url": "https://github.com/fujidaiti/smooth_sheets/issues/188",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
197621653 | _clearer is not a function.
Hi Guys, Currently I am using the latest version of firestack in my project.
Error:
_clearer is not a function. (In '_clearer(id)', '_clearer' is undefined)
2016-12-26 13:39:48.954 [fatal]
The following is the screencap of the error when I click on the facebook sign in button.
https://cl.ly/2E2h2l1X1T2V
Any help here is greatly appreciated.
Thank You!
Use master now. The error is fixed there but a new release would be appreciated.
@arunshan @alexkendall check out the v3 branch, that's the cutting edge release, now mirrors firebase web api, much more stable and error free as well as lots of missing features added.
Hope to have a release for it soon ™️
If you'd like a release for it now, like an alpha or something rather than using git then let @auser know =] he can release an alpha on npm.
Closing as issue fixed in upcoming version.
👍 I'll pull this tomorrow (almost tomorrow already?) and update FirestackApp to go along with the new version.
| gharchive/issue | 2016-12-26T21:45:30 | 2025-04-01T06:44:15.162382 | {
"authors": [
"Salakar",
"alexkendall",
"arunshan",
"auser"
],
"repo": "fullstackreact/react-native-firestack",
"url": "https://github.com/fullstackreact/react-native-firestack/issues/199",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
198265953 | bug with Node compression module and require
When I import the compression module, or if there is any module that requires it, this is the error I get:
var debug = require('debug')('compression')
^
TypeError: require(...) is not a function
Please, provide more details, can't reproduce
I don't know how to offer more details. I am using Fuse with the TypeScript compiler, could that be the reason?
Also, I went back to Fuse version 1.3.23 and the problem does not exist.
Same as in #54, this error goes away up till version v1.3.33.
I hope it's fixed in 1.3.38.
not yet :)
with 1.3.39 it is solved :)
| gharchive/issue | 2017-01-01T12:34:37 | 2025-04-01T06:44:15.223854 | {
"authors": [
"devmondo",
"nchanged"
],
"repo": "fuse-box/fuse-box",
"url": "https://github.com/fuse-box/fuse-box/issues/55",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
307226529 | Example template reports: A grid is using incompatible layout parameters[...]
This diagnostic message here https://github.com/fusetools/fuselibs-public/blob/27c7adf5bef14f909e154cdba12b88931fbd5cab/Source/Fuse.Controls.Panels/Layouts/GridLayout.uno#L1014 is reported when running https://github.com/fusetools/fuselibs-public/blob/master/ProjectTemplates/Projects/Example/
Also I'm not sure this should be reported as an error, since it's a common thing to do and was working fine before. In my opinion this should be counted as a warning instead.
Steps to reproduce
Open Fuse Studio, and click create new example application.
Navigate to the last page of the example project
Notice that a problem is reported under the problems tab
There were some small issues fixed in layout recently... has this been checked against master? The issues weren't with Grid itself, but with StackPanel and DockPanel, requesting an invalid sizing mode of the Grid, and thus generating this message.
It's an error because the GridLayout has actually detected an error (this isn't a heuristic and/or property check). It tracks certain layout conditions to detect that you've entered a mode it is known to produce the wrong and inconsistent sizes for. The grid in this example should work though, which means it's likely the aforementioned problem.
This was the issue where I fixed part of the problem: https://github.com/fusetools/fuselibs-public/issues/1082
I tested the code on master and it doesn't produce any errors (looking at it, I'm sure there's another recent issue I fixed related to it as well).
| gharchive/issue | 2018-03-21T12:34:44 | 2025-04-01T06:44:15.237327 | {
"authors": [
"Tapped",
"mortoray"
],
"repo": "fusetools/fuselibs-public",
"url": "https://github.com/fusetools/fuselibs-public/issues/1127",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
1055143398 | Updated Dockerimage to fix CI
This PR attempts to get the CI up and running again and passing the tests.
Changes the Embree compile so that it works on AMD CPUs without AVX; the base Docker image now includes nuclear data.
In general this PR improves the main branch
These are necessary updates to work with the latest Python packages.
The Docker image has been simplified.
The CI is still not working, but the tests pass locally, and the CI is currently failing on the main branch anyway.
So I'm thinking it is best if this is merged and the CI fix is carried out in a subsequent PR.
| gharchive/pull-request | 2021-11-16T17:32:54 | 2025-04-01T06:44:15.239218 | {
"authors": [
"shimwell"
],
"repo": "fusion-energy/fusion_neutronics_workflow",
"url": "https://github.com/fusion-energy/fusion_neutronics_workflow/pull/17",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
77822342 | Need to support S3 region
Right now the app only supports us-east region
Closing in favor of #26. Because S3 doesn't really have region and I'm going to refactor the code to return the generated url of the object instead of forming the URL itself.
| gharchive/issue | 2015-05-18T22:04:24 | 2025-04-01T06:44:15.242599 | {
"authors": [
"noppanit"
],
"repo": "fusioneng/gif2html5-app",
"url": "https://github.com/fusioneng/gif2html5-app/issues/21",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
500997941 | workload: Allow my_vars dir location and minor SSH ansible fix
This PR does two things :
Allow the optional setting (-m) of my_vars.yml directory location, the default continues to be ../<OCP_VERSION>.x
This helps the Jenkins use case when there are multiple cluster deployments coming from the same mig-agnosticd cloned repo , i.e "4.1", "nightly"
Set default ansible SSH options to turn off strict checking. This is needed in order to fully automate these workloads without interactive prompts for host checks during playbook runs.
@pranavgaikwad @jwmatthews
LGTM
| gharchive/pull-request | 2019-10-01T16:07:48 | 2025-04-01T06:44:15.246147 | {
"authors": [
"fbladilo",
"pranavgaikwad"
],
"repo": "fusor/mig-agnosticd",
"url": "https://github.com/fusor/mig-agnosticd/pull/88",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
470872677 | Add CREDITS file by using gocredits
using gocredits: https://github.com/Songmu/gocredits
(japanese) https://songmu.jp/riji/entry/2019-04-16-gocredits.html
Done in by https://github.com/future-architect/gcp-instance-scheduler/pull/7
| gharchive/issue | 2019-07-22T03:18:07 | 2025-04-01T06:44:15.276473 | {
"authors": [
"laqiiz"
],
"repo": "future-architect/gcp-instance-scheduler",
"url": "https://github.com/future-architect/gcp-instance-scheduler/issues/6",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
60735872 | Integration with Sonar (proper pull request)
I messed pull request https://github.com/futurice/mocha-jenkins-reporter/pull/17 so I had to create new one. Added some documentation and changed the way of replacing .js file extension.
I'll merge this first and make a release, thank you.
Thank you as well :)
Please make sure version 0.1.5 works for you.
I've just tested it in my project, works great. Thank you both :)
| gharchive/pull-request | 2015-03-11T21:50:29 | 2025-04-01T06:44:15.282161 | {
"authors": [
"bsodzik",
"jvah",
"mendlik"
],
"repo": "futurice/mocha-jenkins-reporter",
"url": "https://github.com/futurice/mocha-jenkins-reporter/pull/18",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
806969763 | Issue Debugging
/comment example
/close
| gharchive/issue | 2021-02-12T05:57:12 | 2025-04-01T06:44:15.304370 | {
"authors": [
"fuxingloh"
],
"repo": "fuxingloh/oss-governance",
"url": "https://github.com/fuxingloh/oss-governance/issues/21",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
467827492 | Fix an issue when exception in KotlinTextDocumentService#doLint
KotlinTextDocumentService#linting is set to true at the start of KotlinTextDocumentService#doLint and then set back to false at the end.
But if an exception (e.g. #42) occurs, it is never set back to false.
As a result, KotlinTextDocumentService#lintLater will not call KotlinTextDocumentService#doLint after that.
Looks good, thanks!
| gharchive/pull-request | 2019-07-14T11:55:13 | 2025-04-01T06:44:15.320181 | {
"authors": [
"fwcd",
"kuro46"
],
"repo": "fwcd/KotlinLanguageServer",
"url": "https://github.com/fwcd/KotlinLanguageServer/pull/121",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
Synchronization of Electrum Win64
I tried the first private key, with only one transaction before the fork, and got BCH very quickly.
Then I added a second private key with 80k+ transactions, and it is taking a long time. The app has been synchronizing for more than five days and is still not done. I don't understand which block it is synchronizing now or how many hours are left. I just see slow network activity from the software in the control panel.
Where can I see progress? Or how can I tell when the sync will finish?
Why did the first wallet sync very fast? That one transaction was in the latest block.
Thanks
I don't know
| gharchive/issue | 2017-08-07T04:25:55 | 2025-04-01T06:44:15.422628 | {
"authors": [
"Boyarov",
"fyookball"
],
"repo": "fyookball/electrum",
"url": "https://github.com/fyookball/electrum/issues/94",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
1144818988 | Make mirroring work for more use cases
Add support for tuple structs, enum variants, wrapping in Vec and Optional.
There are still cases where it's not working, especially with Box (and sending back Vec).
It would also be nice to have it working for StreamSink but I'm not sure how to go about it.
Good job! Looking forward to merging the PR
I won't be working on this in the next few days, if you want to merge it to add what's already been done you can go ahead.
Either way, I'll work on supporting wrapping for more use cases in the near future since it should still be relevant whenever external crates can be resolved (since it would enable to read struct fields without mirroring but would still face the same issue against orphan rule).
Sure, I will merge it. Looking forward to your further PRs!
| gharchive/pull-request | 2022-02-19T18:51:27 | 2025-04-01T06:44:15.430335 | {
"authors": [
"Unoqwy",
"fzyzcjy"
],
"repo": "fzyzcjy/flutter_rust_bridge",
"url": "https://github.com/fzyzcjy/flutter_rust_bridge/pull/359",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2484203508 | Formatting is not stable
I'm getting this message on this following file:
https://github.com/horo-fox/horo.services/blob/main/src/pages/journal/[slug].astro
With this following config:
{
"excludes": ["**/node_modules", "**/*-lock.json"],
"plugins": [
"https://plugins.dprint.dev/typescript-0.91.6.wasm",
"https://plugins.dprint.dev/json-0.19.3.wasm",
"https://plugins.dprint.dev/markdown-0.17.6.wasm",
"https://plugins.dprint.dev/g-plane/malva-v0.9.1.wasm",
"https://plugins.dprint.dev/g-plane/markup_fmt-v0.12.0.wasm"
],
"indentWidth": 4
}
(I haven't minimized this cause there's a more important failure mode I'm encountering; I'll make a bug report for that first)
I suspect this is due to malva because when I remove that from the plugins list, this stops outputting like this. Let's see if I can reproduce with just that.
| gharchive/issue | 2024-08-24T04:40:34 | 2025-04-01T06:44:15.433391 | {
"authors": [
"horo-fox"
],
"repo": "g-plane/markup_fmt",
"url": "https://github.com/g-plane/markup_fmt/issues/47",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
72153399 | infinite loop
probably don't run this code.
Git master
| gharchive/pull-request | 2015-04-30T13:11:50 | 2025-04-01T06:44:15.482445 | {
"authors": [
"MaxBlaushild",
"jamesstaub"
],
"repo": "ga-wdi-boston/wdi_1_git_quiz_script",
"url": "https://github.com/ga-wdi-boston/wdi_1_git_quiz_script/pull/6",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
137922201 | Delete user option in Dockstore UI
Users should be able to delete themselves from dockstore if they wish. This will have to remove all database information related to them.
This is a thing now
DockstoreAPI - 1.5.0-beta.2
DockstoreUI - 2.2.0-beta.0
| gharchive/issue | 2016-03-02T16:24:58 | 2025-04-01T06:44:15.492665 | {
"authors": [
"agduncan94",
"denis-yuen"
],
"repo": "ga4gh/dockstore",
"url": "https://github.com/ga4gh/dockstore/issues/148",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
266182459 | Use Models and types
Feature Request
Desired behaviour
All functions should have a return type
There should be minimal any types
Use swagger generated models when possible
After the first two are complete, is there a lint config or something that can be used to enforce these?
Enforce the first one using:
"typedef": [
true,
"call-signature",
"property-declaration"
]
Currently 461 lint errors with it...fun
Linking to be done during #1731
| gharchive/issue | 2017-10-17T15:55:03 | 2025-04-01T06:44:15.494975 | {
"authors": [
"denis-yuen",
"garyluu"
],
"repo": "ga4gh/dockstore",
"url": "https://github.com/ga4gh/dockstore/issues/965",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
174615742 | Allow use of yaml parameter files
Minor feature, #348, allows users (especially those coming from http://www.commonwl.org/v1.0/UserGuide.html#Wrapping_Command_Line_Tools ) to use yaml parameter files instead of json.
[x] Check that you pass the basic style checks and unit tests by running mvn clean install
Coverage decreased (-0.09%) to 73.754% when pulling 82d8641bec3e2a0152a6e1f79b802e61676b45f3 on feature/yaml_params into 4d052dd5d30ec2663da433e57130e03118af8788 on develop.
| gharchive/pull-request | 2016-09-01T20:17:18 | 2025-04-01T06:44:15.497915 | {
"authors": [
"coveralls",
"denis-yuen"
],
"repo": "ga4gh/dockstore",
"url": "https://github.com/ga4gh/dockstore/pull/393",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
190126832 | Feature/verify for entry versions
Alters verification so that it is based on a version and not an entry overall.
[x] Check that you pass the basic style checks and unit tests by running mvn clean install
Coverage decreased (-0.4%) to 72.693% when pulling cfee45b9c46b268e7eb7a83c49b422b042190848 on feature/verify_for_entry_versions into 2567ff213926a75163daef607a967376c7a273b9 on develop.
| gharchive/pull-request | 2016-11-17T18:31:46 | 2025-04-01T06:44:15.499946 | {
"authors": [
"agduncan94",
"coveralls"
],
"repo": "ga4gh/dockstore",
"url": "https://github.com/ga4gh/dockstore/pull/496",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
1444233510 | 🛑 Revista is down
In 5159f55, Revista (https://pensamientopenal.com.ar/) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Revista is back up in 8436b39.
| gharchive/issue | 2022-11-10T16:47:22 | 2025-04-01T06:44:15.502282 | {
"authors": [
"gabe50"
],
"repo": "gabe50/uptime",
"url": "https://github.com/gabe50/uptime/issues/525",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1445290927 | Test dynamic listener test for intermittent failures
Purpose
Describe the problems, issues, or needs driving this feature/fix and include links to related issues.
Fixes #
Approach
Describe how you are implementing the solutions along with the design details.
Samples
Provide high-level details about the samples related to this feature.
Remarks
List any other known issues, related PRs, TODO items, or any other notes related to the PR.
Check List
[ ] Read the Contributing Guide
[ ] Updated Change Log
[ ] Checked Tooling Support (#)
[ ] Added necessary tests
[ ] Unit Tests
[ ] Spec Conformance Tests
[ ] Integration Tests
[ ] Ballerina By Example Tests
[ ] Increased Test Coverage
[ ] Added necessary documentation
[ ] API documentation
[ ] Module documentation in Module.md files
[ ] Ballerina By Examples
Codecov Report
:exclamation: No coverage uploaded for pull request base (master@3114f0f). Click here to learn what that means.
The diff coverage is n/a.
@@ Coverage Diff @@
## master #4 +/- ##
=========================================
Coverage ? 76.31%
Complexity ? 53204
=========================================
Files ? 3404
Lines ? 200583
Branches ? 25844
=========================================
Hits ? 153072
Misses ? 38940
Partials ? 8571
| gharchive/pull-request | 2022-11-11T10:46:33 | 2025-04-01T06:44:15.509889 | {
"authors": [
"codecov-commenter",
"gabilang"
],
"repo": "gabilang/ballerina-lang",
"url": "https://github.com/gabilang/ballerina-lang/pull/4",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
114486582 | Update gamecontrollerdb.txt
Add support for the XEOX Pro gamepad.
Wii U Pro controller A-B-X-Y button order fixed.
@joaorb64 what's your take on the proposed Wii U Pro changes?
Game controllers have all kinds of patterns, numbers or letters on their buttons. But I personally think a game developer designs a game thinking of their actual layout, their position.
The Xbox controller buttons have the same letters as Nintendo's, but inverted. So it's a choice of whether you want the right layout or the right on-screen button letter.
Added the XEOX Pro gamepad.
| gharchive/pull-request | 2015-11-01T19:18:03 | 2025-04-01T06:44:15.524972 | {
"authors": [
"Somnium42",
"gabomdq",
"joaorb64"
],
"repo": "gabomdq/SDL_GameControllerDB",
"url": "https://github.com/gabomdq/SDL_GameControllerDB/pull/55",
"license": "Zlib",
"license_type": "permissive",
"license_source": "github-api"
} |
184897652 | adds coverage scripts
add coveralls.io badge
fixes #18
Changes Unknown when pulling 587aefd5c79354e1b295a8fccab34f7bd6e5a28d on fixes-#18 into ** on master**.
| gharchive/pull-request | 2016-10-24T16:59:07 | 2025-04-01T06:44:15.540944 | {
"authors": [
"coveralls",
"gabrielcsapo"
],
"repo": "gabrielcsapo/saywhat",
"url": "https://github.com/gabrielcsapo/saywhat/pull/19",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
1274870898 | Check consistency of symmetries
Create an automatic check to answer the question
$$ \nabla_p S = 0 \Rightarrow \nabla_c S = 0 $$
cf lemma 4.2 p 88 of G Minton's thesis.
Some keywords: Palais' principle of symmetric criticality.
Rephrase constraints as a fixed point $T(x) = x$ where $T$ leaves the action invariant: $S(T(x)) = S(x)$ for all $x$. If this is possible, then Newton checks for consistency of symmetry are useless.
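A brief sketch of why the fixed-point rephrasing helps (notation is mine, not from Minton's thesis): differentiating the invariance $S \circ T = S$ and evaluating at a fixed point gives

```latex
% Invariance: S(T(x)) = S(x) for all x.  Chain rule:
\nabla S(x) \;=\; \nabla\bigl(S \circ T\bigr)(x) \;=\; DT(x)^{\top}\,\nabla S\bigl(T(x)\bigr).
% At a fixed point x^* = T(x^*):
DT(x^*)^{\top}\,\nabla S(x^*) \;=\; \nabla S(x^*),
% i.e. the gradient at a symmetric point is itself invariant under DT(x^*)^{\top}.
% Palais-type conditions then let criticality on the symmetric (fixed) set
% propagate to the full space, i.e. \nabla_p S = 0 \Rightarrow \nabla_c S = 0.
```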
Completed with diverse checks and #122
| gharchive/issue | 2022-06-17T10:38:21 | 2025-04-01T06:44:15.542886 | {
"authors": [
"gabrielfougeron"
],
"repo": "gabrielfougeron/choreo",
"url": "https://github.com/gabrielfougeron/choreo/issues/3",
"license": "BSD-2-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
111523059 | Riot-router does not work in Chrome browser
It works in Firefox browser.
It does not work in Chrome browser.
Open example in both browsers http://plnkr.co/edit/OAnEIBi9SwYgTSQDeGp8?p=preview
@AlexRadch, raw.githubusercontent.com is serving files with content-type text/plain.
just change from
<script src="https://raw.githubusercontent.com/gabrielmoreira/riot-router/master/lib/router.min.js"></script>
to
<script src="http://gabrielmoreira.github.io/riot-router/examples/node_modules/riot-router/lib/router.min.js"></script>
@gabrielmoreira, thank you!
I found https://rawgit.com/ and I just replaced https://raw.githubusercontent.com/gabrielmoreira/riot-router/master/lib/router.min.js with https://rawgit.com/gabrielmoreira/riot-router/master/lib/router.min.js
| gharchive/issue | 2015-10-15T00:52:07 | 2025-04-01T06:44:15.545712 | {
"authors": [
"AlexRadch",
"gabrielmoreira"
],
"repo": "gabrielmoreira/riot-router",
"url": "https://github.com/gabrielmoreira/riot-router/issues/20",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
2329903511 | [nix] Add Githooks to nixpkgs
Right now the Githooks installation requires a custom flakes input. It'd be nice to have it in the default channel.
It is in the default channel; it's in Nixpkgs already.
https://search.nixos.org/packages?channel=24.05&show=githooks&from=0&size=50&sort=relevance&type=packages&query=githooks
| gharchive/issue | 2024-06-02T23:00:50 | 2025-04-01T06:44:15.549146 | {
"authors": [
"gabyx",
"giggio"
],
"repo": "gabyx/Githooks",
"url": "https://github.com/gabyx/Githooks/issues/173",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
92909200 | how about redux support react 0.14 ?
I tried, but got some errors from InvariantJS.
Can you please paste the error? I have not tested it with 0.14 beta yet.
I tried and fixed some of the code:
redux/examples/todomvc/components/TodoTextInput.js
import ReactDOM , { findDOMNode } from 'react-dom';
...
handleChange(e) {
var input = this.refs.inputTodo;
this.setState({ text: input.value });
//this.setState({ text: e.target.value });
}
handleBlur(e) {
if (!this.props.newTodo) {
var input = this.refs.inputTodo;
this.props.onSave( input.value );
// this.props.onSave(e.target.value);
}
}
....
and
redux/examples/todomvc/index.js
import React from 'react';
import ReactDOM from 'react-dom';
import App from './containers/App';
import 'todomvc-app-css/index.css';
ReactDOM.render(
<App />,
document.getElementById('root')
);
redux 1.0.0-alpha works fine with react 0.14.0-beta1 / react-dom 0.14.0-beta1
So, closing this issue.
| gharchive/issue | 2015-07-03T16:53:42 | 2025-04-01T06:44:15.557718 | {
"authors": [
"gaearon",
"tsingson"
],
"repo": "gaearon/redux",
"url": "https://github.com/gaearon/redux/issues/210",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
2273182111 | Anseongjin
[안성진] #2 Problems 1, 2, 3, 4
Problem links
[Linked List Cycle] https://leetcode.com/problems/linked-list-cycle/description/
[Repeated DNA Sequences] https://leetcode.com/problems/repeated-dna-sequences/description/
[Fraction to Recurring Decimal] https://leetcode.com/problems/fraction-to-recurring-decimal/description/
[LRU Cache] https://leetcode.com/problems/lru-cache/description/
Time complexity
Expected time complexity: [e.g. O(n)]
Explanation:
Problem 1: O(N)
Problem 2: O(N^2)
Problem 3: O(N)
Problem 4: O(N)
Idea
Explanation: [idea]
Problem 1: use each node instance's reference address as the hash-table key, and check the next node's key against the table to decide whether a cycle occurs
Problem 2: use each 10-character substring as the hash-table key, and compare it with the next 10-character window to detect duplicates
Problem 3: while doing long division, use each remainder as the hash-table key, and compare it with the remainder at the next digit to find the repeating part
Problem 4: wrote an LRU-cache node myself and used a hash table; this still needs improvement..
Problem 5: haven't found a solution yet.
Data structures
Data structure used: HashTable
Notes
Problems 1, 2, and 3 involved no complicated concepts. But for problem 4, the LRU cache, I had to design the node myself and write the algorithm, which was not easy.
I need to think more about problem 5.
Designing the LRU node object is not easy 👎
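For reference, the hard part above, hand-rolling the LRU node and its list bookkeeping, can be sketched in Python. This is a minimal sketch of the standard hash-map plus doubly-linked-list design; all names are mine, not from the original submission.

```python
class _Node:
    """Doubly-linked-list node; hand-rolled, as in the submission above."""
    __slots__ = ("key", "value", "prev", "next")

    def __init__(self, key=None, value=None):
        self.key, self.value = key, value
        self.prev = self.next = None


class LRUCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.map = {}                    # key -> node
        # Sentinel nodes: head side = most recently used, tail side = least.
        self.head, self.tail = _Node(), _Node()
        self.head.next, self.tail.prev = self.tail, self.head

    def _unlink(self, node):
        node.prev.next, node.next.prev = node.next, node.prev

    def _push_front(self, node):
        # Insert right after head, marking the node as most recently used.
        node.prev, node.next = self.head, self.head.next
        self.head.next.prev = node
        self.head.next = node

    def get(self, key):
        node = self.map.get(key)
        if node is None:
            return -1                    # LeetCode convention for a miss
        self._unlink(node)
        self._push_front(node)
        return node.value

    def put(self, key, value):
        node = self.map.get(key)
        if node is not None:             # update existing key in place
            node.value = value
            self._unlink(node)
            self._push_front(node)
            return
        if len(self.map) >= self.capacity:
            lru = self.tail.prev         # evict least recently used
            self._unlink(lru)
            del self.map[lru.key]
        node = _Node(key, value)
        self.map[key] = node
        self._push_front(node)
```

Both `get` and `put` run in O(1). In practice Python's `collections.OrderedDict` or `functools.lru_cache` gives the same behavior without writing the node class by hand.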
| gharchive/pull-request | 2024-05-01T08:59:24 | 2025-04-01T06:44:15.581006 | {
"authors": [
"seongjin-an"
],
"repo": "gajago-algorithm-study/algorithm",
"url": "https://github.com/gajago-algorithm-study/algorithm/pull/4",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1649371115 | chore: use ethers 2 by default when generating bindings
Motivation
no that we have auto-releases, we can switch to using crates dep by default for generated bindings.
we can also depend on ethers from crates in foundry from now
wdyt @DaniPopes (harcoding the current major release is probably the easiest solution)
cc @leonardoalt
Solution
PR Checklist
[ ] Added Tests
[ ] Added Documentation
[ ] Breaking changes
Yeah I think that's fine if we switch foundry off of git as well
| gharchive/pull-request | 2023-03-31T13:36:32 | 2025-04-01T06:44:15.586914 | {
"authors": [
"DaniPopes",
"mattsse"
],
"repo": "gakonst/ethers-rs",
"url": "https://github.com/gakonst/ethers-rs/pull/2317",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
181850124 | Strange behavior <div style="position: absolute; z-index: -10000; [...]> in Chrome.
Strange behavior <div style="position: absolute; z-index: -10000; [...]> in Chrome (v.53.0.2785.143 m).
If I collapse the Bootstrap panel "Profile" on this page:
http://ukrainian-wife.com/en/user_profile/profile-f-419-en/
I see space after the footer because this <div> does not update its height: http://prntscr.com/crh5gs
If I disable SmoothScroll, the <div> disappears and all is OK (v1.4.0 & v1.4.4).
How can I fix it?
But FF & Edge are OK. And why is the <div> not present in FF & Edge?
it'll be fixed in the next version thanks
| gharchive/issue | 2016-10-08T21:35:52 | 2025-04-01T06:44:15.590211 | {
"authors": [
"artfulcat",
"galambalazs"
],
"repo": "galambalazs/smoothscroll-for-websites",
"url": "https://github.com/galambalazs/smoothscroll-for-websites/issues/49",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
1593748403 | Cell cycle feedback
Here are some changes
But we mainly need to look at the step from "Filter the cell cycle genes" onwards
@MarisaJL
| gharchive/pull-request | 2023-02-21T16:12:57 | 2025-04-01T06:44:15.644630 | {
"authors": [
"nomadscientist"
],
"repo": "galaxyproject/training-material",
"url": "https://github.com/galaxyproject/training-material/pull/3907",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1155268744 | SparseTensor support
As in this Python model example: how do I append the SparseTensor?
Input as (indices, values, dense_shape)
Can you please clarify your question? If you have some code to share it would be better
| gharchive/issue | 2022-03-01T12:14:27 | 2025-04-01T06:44:15.645723 | {
"authors": [
"cavities",
"galeone"
],
"repo": "galeone/tfgo",
"url": "https://github.com/galeone/tfgo/issues/67",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
605802091 | Add some examples please
Hey.
Could you please provide some examples or at least one, to know how to use it with our sout command?
Thank you
I have added some examples to the README.md file, see https://github.com/galexrt/docker-vlc/blob/master/README.md#vlc-examples.
What do you think?
Awesome, Thanks a lot
| gharchive/issue | 2020-04-23T19:25:36 | 2025-04-01T06:44:15.647818 | {
"authors": [
"coder4x",
"galexrt"
],
"repo": "galexrt/docker-vlc",
"url": "https://github.com/galexrt/docker-vlc/issues/1",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1518394871 | 🛑 Inventory API Endpoint is down
In 3fd8555, Inventory API Endpoint (https://inventory.roblox.com/v1/users/82738847/assets/collectibles?limit=10&sortOrder=Asc) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Inventory API Endpoint is back up in 7f5c048.
| gharchive/issue | 2023-01-04T06:23:32 | 2025-04-01T06:44:15.701274 | {
"authors": [
"gamer167"
],
"repo": "gamer167/StatusPlusUp",
"url": "https://github.com/gamer167/StatusPlusUp/issues/244",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2141034438 | 🛑 Roblox Devforum is down
In 4f2ef0c, Roblox Devforum (https://devforum.roblox.com) was down:
HTTP code: 403
Response time: 185 ms
Resolved: Roblox Devforum is back up in 4acf0d1 after 12 minutes.
| gharchive/issue | 2024-02-18T15:15:26 | 2025-04-01T06:44:15.703907 | {
"authors": [
"gamer167"
],
"repo": "gamer167/StatusPlusUp",
"url": "https://github.com/gamer167/StatusPlusUp/issues/798",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
564684238 | need help
Please help me.
I have fixed the bug.
| gharchive/issue | 2020-02-13T13:31:18 | 2025-04-01T06:44:15.710350 | {
"authors": [
"gamingwarrior94"
],
"repo": "gamingwarrior94/gaming-guys",
"url": "https://github.com/gamingwarrior94/gaming-guys/issues/4",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2670267601 | Created basic unit tests using Vitest
Currently there are 6 testing suites with 22 total tests that test whether basic components exist in each of the project pages.
To run the testing suite, run the following commands:
cd front
npm test
The test suite should also run after every push/pull request
Nice!! thanks :)
Maybe it would be better if this branch was rebased off of my vitest one? I based that one off a few commits back so it doesn't have any of the extra babel/jest dependencies, I think this branch still has all that so our node_modules will be larger than needed
Also this is a nitpick, but in the future if you need to fix up some things you can also just git commit -a --amend --no-edit which just edits the most recent commit, so you don't have to add a lot of commits for each tiny fixup. I usually alias this to just git amend
Hey @gang21 I have created #66 just to set up vitests (which is rebased from a previous commit without babel/jest etc and thus doesn't have all of the bloat). I think it is easiest to merge that and then maybe you can cherry pick your tests onto that commit? Does that sound ok with you or do you think there is a better way to go about things?
@gang21 I have pushed up to this branch removing old dependencies (check diffstat! :tada: ) and run formatting / fixed the tester CI, all that is left now is to fix the tests. I was going to leave this to you since you know more about how to do this than me :) Looks like all tests pass except for two on resourcepage, then we can merge. Thanks for your patience!!
@gang21 How do you feel about commenting out the failing resource page tests just to get CI green, then we can fix them in a separate PR?
I agree, I commented out the failing tests for the resource page for now and will look into why that isn't working. Sorry for the inconvenience all!
I'm wondering if the tests are still failing because the fixed test case hasn't been merged in yet. Anyone have thoughts on this?
Right now it is failing because all package-lock.json files were removed. The commit squashes I applied yesterday basically fixed up everything but they seem to have been undone now :thinking: Did you have to force push these changes? If not I am not sure how my other commit was overridden. I can take a look soon I have a quiz at 4 sadly lol.
ack yea i might've done something when trying to merge changes sorry!
Nah no worries! Are you free to hop in a short call later tn maybe? Maybe some time after 7 thats when I get home
yes!
| gharchive/pull-request | 2024-11-18T23:28:34 | 2025-04-01T06:44:15.778492 | {
"authors": [
"gang21",
"ribru17"
],
"repo": "gang21/CS130-Capstone-Project",
"url": "https://github.com/gang21/CS130-Capstone-Project/pull/63",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
373055552 | Buggy fixedWidth scrolling
Issue description: I believe there is a bug in the scrolling carousel on the last page. It doesn't show correctly every time I reload the page. It either doesn't scroll the last item fully into view (so it is cropped), or it sometimes adds one more page when I click "next" on the last page (resulting in no movement and 5 pages in the pager).
I guess it might be linked to padding on container as well - with padding, cropped last item can be seen. With no padding, pager updates to 5 on click. Sometimes the cache bugs on codepen, so be patient when changing the padding :)
Also the active "dot" in the navigation is not always visible; sometimes it is hidden.
Demo link/slider setting: https://codepen.io/zipper/pen/YJJPQE
Tiny-slider version: 2.8.7, I don't have the issue on my project, when 2.6 is used
Browser name && version: Latest FF, Chrome
OS name && version: Win 10
The slider pager fails, e.g., when no padding is used, 10 items are present and only 8 are visible. Version 2.7.4 was OK.
I think we should check these issues one by one.
I just fixed an issue which caused the last item not to be fully visible in some cases.
| gharchive/issue | 2018-10-23T15:15:48 | 2025-04-01T06:44:15.791565 | {
"authors": [
"ganlanyuan",
"zipper"
],
"repo": "ganlanyuan/tiny-slider",
"url": "https://github.com/ganlanyuan/tiny-slider/issues/310",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
2205763632 | 🛑 Gaplogic is down
In dc076c8, Gaplogic (https://www.gaplogic.com) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Gaplogic is back up in e27ed36 after 8 minutes.
| gharchive/issue | 2024-03-25T13:25:48 | 2025-04-01T06:44:15.826576 | {
"authors": [
"gaplogic"
],
"repo": "gaplogic/gaplogic.com",
"url": "https://github.com/gaplogic/gaplogic.com/issues/267",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2591724336 | bump golangci-lint
bring it on par with gardener/gardener
What this PR does / why we need it:
unblock build, see verify task in head update job.
Special notes for your reviewer:
My assumption is that the version is controlled by renovate, but it seems it has not bumped it yet. So until that's fixed I've opened this PR to unblock the build.
Release note:
NONE
/lgtm
| gharchive/pull-request | 2024-10-16T12:21:28 | 2025-04-01T06:44:15.838120 | {
"authors": [
"axel7born",
"domdom82"
],
"repo": "gardener/egress-filter-refresher",
"url": "https://github.com/gardener/egress-filter-refresher/pull/52",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
652292330 | unable to delete shoot when shoot failed to create
When a shoot cluster fails to create, I cannot delete the shoot cluster manually. Please look into this.
This is a very generic statement, and since a lot of steps are involved in creating a shoot cluster, which are also infrastructure/cloud-provider dependent, the reasons for an unsuccessful deletion can vary.
I cannot delete ths shoot cluster mannually
Did you really try to clean up the shoot and dependent resources manually yourself, or did you instruct Gardener to delete the shoot again? What's the concrete issue?
I kindly ask you to provide more details about this issue, so that we can reproduce it.
For example:
At which step did the creation fail
At which step does the deletion fail
Infrastructure type
Versions of Gardener and involved extensions
Any details about your Gardener and Shoot like enabled feature gates, OS, K8s versions, ...
/status author-action
| gharchive/issue | 2020-07-07T12:44:19 | 2025-04-01T06:44:15.849247 | {
"authors": [
"ialidzhikov",
"knightXun",
"timuthy"
],
"repo": "gardener/gardener",
"url": "https://github.com/gardener/gardener/issues/2565",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1834611192 | Use eslint rule to detect unused classed with imports
We'd like to use https://docs.tss-react.dev/detecting-unused-classes in our project where we use tss-react.
When adding the eslint rule, all our classes are marked as unused by the plugin because we import them like this:
import { useStyles } from './Component.style';
const Component = () => {
const { classes } = useStyles();
return (<div className={classes.container}>styled div </div>)
}
Is there currently a way to support this usage?
Hello @michielmetcake,
I'm sorry but, no the plugin isn't smart enough for this 😕
Hello @michielmetcake,
I'm sorry but, no the plugin isn't smart enough for this 😕
I appreciate your reply. Thank you!
| gharchive/issue | 2023-08-03T09:04:31 | 2025-04-01T06:44:15.873738 | {
"authors": [
"garronej",
"michielmetcake"
],
"repo": "garronej/tss-react",
"url": "https://github.com/garronej/tss-react/issues/187",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
789459953 | New design
Changes made in this version.
Added dark mode support.
Redesign of all views.
Added icon.
Small improvements in kotlin code.
Use of xml resources.
Thanks for your contribution ! @AlejandroFortt
| gharchive/pull-request | 2021-01-19T23:14:21 | 2025-04-01T06:44:15.884771 | {
"authors": [
"AlejandroFortt",
"gastsail"
],
"repo": "gastsail/CocktailApp",
"url": "https://github.com/gastsail/CocktailApp/pull/7",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
281871970 | Restricting VPN profile creation to certain users
Hi! This may be a noob question. Is it possible to restrict creation of VPN profiles to only certain users when using Google Authentication? I would like approved users to be able to automatically have profiles created, but only approved users. Thanks!
This is currently possible for IPSec IKEv2 VPNs but not for OpenVPN.
I am closing this issue; if you want this to be done with OpenVPN, we will prioritise it in that case.
| gharchive/issue | 2017-12-13T19:43:00 | 2025-04-01T06:44:15.886103 | {
"authors": [
"ajeygore",
"kingsly",
"nathanhinkle"
],
"repo": "gate-sso/gate",
"url": "https://github.com/gate-sso/gate/issues/49",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
1096091476 | OAuth with username/password
As a user, I want to be able to sign in to Gatekeeper with a username and password
So that I don't need a Google account to access Gatekeeper
So that I can make a Gatekeeper account to track my passwords/subscriptions
setup commit - manual update
| gharchive/issue | 2022-01-07T08:21:08 | 2025-04-01T06:44:15.894504 | {
"authors": [
"rebeccamcfadden"
],
"repo": "gatekeeper-tamu/gatekeeper",
"url": "https://github.com/gatekeeper-tamu/gatekeeper/issues/9",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1090463064 | I can't play game with ImmersivePortalsMod
Bug Description
With Alto Clef and the ImmersivePortalsMod installed, activating the portal works. But when I try to rejoin my save, the game crashes.
Steps to Reproduce (as best as you can)
Install the Alto Clef Mod with the Baritone Mod and ImmersivePortalsMod, then activate the portal. You can see the portal is activated and it works. But when you rejoin or restart the save, the game crashes.
Expected Behavior
Actual Behavior
Crashlogs and Screenshots (if applicable)
(you may drag + drop text logs and images here, thanks GitHub)
crash-2021-12-28_19.31.01-client.txt
What does the immersive portals mod do?
Add a Breaks
#943
He just say:"I'm added breaks, next ver Fabric can display not compatible."
Holy ****
He just say:"I'm added breaks, next ver Fabric can display not compatible."
Holy ****
He now replied to me saying: "Hope it's smaller. Closed due to duplication with #933."
So can Alto Clef fix this bug or not?
Immersive portals make massive changes to how nether portals work and I wasn't able to recognize an altoclef class from the bug report, so I don't think I can fix this one. It also could be a mod installation bug.
I will close this, if new information is found feel free to reopen if you can/make another report.
| gharchive/issue | 2021-12-29T11:41:53 | 2025-04-01T06:44:16.001308 | {
"authors": [
"JamesGreen31",
"adrisj7",
"ffffffffawd"
],
"repo": "gaucho-matrero/altoclef",
"url": "https://github.com/gaucho-matrero/altoclef/issues/154",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
202526501 | Nothing to see here.
Edit: Disregard everything, I suck cocks.
Don't worry, I have no problem with LGBT people and I'll never vote for a sexist president like the US just did. So you're safe here!
| gharchive/issue | 2017-01-23T12:55:20 | 2025-04-01T06:44:16.015835 | {
"authors": [
"TheReverend403",
"gawel"
],
"repo": "gawel/irc3",
"url": "https://github.com/gawel/irc3/issues/126",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
123446293 | Ruby's Time to Javascript's Date
Is it supposed to be converted? For me it's not working -- Time is converted to js string.
Is it supposed to be converted?
No, it is not. Converting sting is current behavior.
| gharchive/issue | 2015-12-22T09:27:15 | 2025-04-01T06:44:16.018283 | {
"authors": [
"takiy33",
"thehappycoder"
],
"repo": "gazay/gon",
"url": "https://github.com/gazay/gon/issues/200",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
1095707011 | A section on LCD artifacts may be useful
This would cover things such as blur/smearing/etc, including simple 2-frame blending effects, but also more complex behaviors. One of the more complex behaviors is as follows.
ISSOtm noticed an unexpected artifact in The Forest Hero, as described here:
Go into the hole shown at the beginning, then go up and down the stairs
My DMG shows a bunch of vertical lines for some reason
Based on the belief that it was due to the brick tiles effectively toggling pixels between black/white on alternating frames, I created the following test ROM. If you press A when this ROM is running, the striped tiles will essentially be xor'd every other frame, leading to a similar effect, which gets worse as there's more white in a given column.
It's not quite the same as the game because this is 50:50 white/black, but I believe it's the same effect.
(FLASHING WARNING: You probably don't want to press A with this ROM without frame blending enabled in an emulator, as it will flash very quickly.):
vertical_smear.zip
LIJI produced mockups of the resulting effect on their GB Light with normal contrast:
as well as high contrast:
I'm unaware of an emulator which attempts to emulate this behavior at this time.
144p Test Suite's "Motion blur" test likewise shows a fadeout on DMG and MGB if you turn on stripes.
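As a rough illustration of why a 50:50 alternating pattern reads as a gray smear rather than flicker, here is a toy model (not emulator code; the persistence constant is made up) of a slow LCD pixel chasing a new target value each frame:

```typescript
// Toy model of LCD pixel persistence: each frame the displayed value
// moves only part of the way toward the new target (0 = black, 255 = white).
// The persistence constant here is arbitrary, purely for illustration.
function lcdResponse(targets: number[], persistence: number): number[] {
  const shown: number[] = []
  let current = targets[0]
  for (const target of targets) {
    current = current * persistence + target * (1 - persistence)
    shown.push(current)
  }
  return shown
}

// A pixel xor'd between black and white on alternating frames:
const toggled = lcdResponse([0, 255, 0, 255, 0, 255, 0, 255], 0.7)
// The displayed value never reaches full black or white; it hovers
// around mid-gray, which shows up as the vertical smear seen on hardware.
```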
| gharchive/issue | 2022-01-06T21:23:36 | 2025-04-01T06:44:16.029486 | {
"authors": [
"pinobatch",
"tbsp"
],
"repo": "gbdev/pandocs",
"url": "https://github.com/gbdev/pandocs/issues/387",
"license": "CC0-1.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2242975035 | Update medaka models to include 4.3.0 models
Hi,
The medaka models used for hybracter only seem to pull v4.2.0 models, when my basecalling has been run using v4.3.0 models
I know that I likely don't need medaka (I turn it off) due to my quality scores, but the v4.3.0 models can't seem to be specified via medakaModel
Additionally, updating minimap2 to use the lr:hq flag for Q20 nanopore reads would be nice (but not a massive improvement)
Thanks
Hi @samuelmontgomery ,
Thanks for these comments. With Medaka I have made the decision to deprecate it for newer models as a choice (as polishing is not recommended on v4.3.0 SUP data or later). Therefore, I do not plan to update it past Medaka v1.8.0 inside Hybracter - this is the latest version that hasn't caused major grief with install. Therefore, I don't think you will be able to download and specify these models - if you really want to, I would recommend updating the specific Medaka environment to a newer version of Medaka inside hybracter and modifying the Hybracter code here to accept the model you want https://github.com/gbouras13/hybracter/blob/0acfb01454545116bac91e53db2b162d537f07f6/hybracter/util.py#L217 .
With minimap2, thanks for that - I would guess this is most useful for Plassembler (the step that uses minimap2 I think?). Maybe inside Flye too.
George
I would think that someone with v4.3.0 FAST or HAC data could just use v4.2.0 models reasonably well (though of course I'd just say to rebasecall with SUP!)
Closing in favour of #84
| gharchive/issue | 2024-04-15T07:56:40 | 2025-04-01T06:44:16.072371 | {
"authors": [
"gbouras13",
"samuelmontgomery"
],
"repo": "gbouras13/hybracter",
"url": "https://github.com/gbouras13/hybracter/issues/71",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
TypeError: body.filter is not a function since last Thursday
Hi everybody,
Since last Thursday evening I have been getting the following errors from the ewelink plugin:
Nothing has changed on the setup. The errors came out of nothing.
Any idea?
[2019-11-20 7:42:50] TypeError: body.filter is not a function
at /usr/local/lib/node_modules/homebridge-ewelink/index.js:364:37
at Object.parseBody (/usr/local/lib/node_modules/homebridge-ewelink/node_modules/request-json/main.js:74:12)
at Request._callback (/usr/local/lib/node_modules/homebridge-ewelink/node_modules/request-json/main.js:148:26)
at Request.self.callback (/usr/local/lib/node_modules/homebridge-ewelink/node_modules/request/request.js:185:22)
at Request.emit (events.js:210:5)
at Request. (/usr/local/lib/node_modules/homebridge-ewelink/node_modules/request/request.js:1161:10)
at Request.emit (events.js:210:5)
at IncomingMessage. (/usr/local/lib/node_modules/homebridge-ewelink/node_modules/request/request.js:1083:12)
at Object.onceWrapper (events.js:299:28)
at IncomingMessage.emit (events.js:215:7)
at endReadableNT (_stream_readable.js:1183:12)
at processTicksAndRejections (internal/process/task_queues.js:80:21)
I have the same issue
This is causing the homebridge to crash
Nov 28 20:42:50 raspberrypi homebridge[12796]: [11/28/2019, 8:42:50 PM] [eWeLink] An error was encountered while requesting a list of devices while interrogating power status. Verify your configuration options. Response was [{"error":400,"msg":"params incomplete"}]
Nov 28 20:42:50 raspberrypi homebridge[12796]: [11/28/2019, 8:42:50 PM] TypeError: body.filter is not a function
Nov 28 20:42:50 raspberrypi homebridge[12796]: at /usr/lib/node_modules/homebridge-ewelink/index.js:364:37
Nov 28 20:42:50 raspberrypi homebridge[12796]: at Object.parseBody (/usr/lib/node_modules/homebridge-ewelink/node_modules/request-json/main.js:74:12)
Nov 28 20:42:50 raspberrypi homebridge[12796]: at Request._callback (/usr/lib/node_modules/homebridge-ewelink/node_modules/request-json/main.js:148:26)
Nov 28 20:42:50 raspberrypi homebridge[12796]: at Request.self.callback (/usr/lib/node_modules/homebridge-ewelink/node_modules/request/request.js:187:22)
Nov 28 20:42:50 raspberrypi homebridge[12796]: at Request.emit (events.js:193:13)
Nov 28 20:42:50 raspberrypi homebridge[12796]: at Request. (/usr/lib/node_modules/homebridge-ewelink/node_modules/request/request.js:1044:10)
Nov 28 20:42:50 raspberrypi homebridge[12796]: at Request.emit (events.js:193:13)
Nov 28 20:42:50 raspberrypi homebridge[12796]: at IncomingMessage. (/usr/lib/node_modules/homebridge-ewelink/node_modules/request/request.js:965:12)
Nov 28 20:42:50 raspberrypi homebridge[12796]: at IncomingMessage.emit (events.js:198:15)
Nov 28 20:42:50 raspberrypi homebridge[12796]: at endReadableNT (_stream_readable.js:1139:12)
Nov 28 20:42:50 raspberrypi homebridge[12796]: at processTicksAndRejections (internal/process/task_queues.js:81:17)
Nov 28 20:42:50 raspberrypi homebridge[12796]: [11/28/2019, 8:42:50 PM] Got SIGTERM, shutting down Homebridge...
Nov 28 20:42:55 raspberrypi systemd[1]: homebridge.service: main process exited, code=exited, status=143/n/a
It seems that the protocol was changed on the server side
| gharchive/issue | 2019-11-20T06:46:34 | 2025-04-01T06:44:16.085567 | {
"authors": [
"imsturmdernacht",
"migabc"
],
"repo": "gbro115/homebridge-ewelink",
"url": "https://github.com/gbro115/homebridge-ewelink/issues/55",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
229591392 | A type inference pattern
The problem
The current implementation of foldMap has some type inference issues
import * as option from 'fp-ts/lib/Option'
import * as array from 'fp-ts/lib/Array'
import { monoidSum } from 'fp-ts/lib/Monoid'
import { foldMap } from 'fp-ts/lib/Foldable'
const monoid = option.getStaticMonoid(monoidSum)
const as = [1]
// x1 :: Option<{}> // <= {} instead of number
const x1 = foldMap(array, monoid, a => option.some(a), as)
Here TypeScript infers {} instead of number!
The solution
If I curry foldMap, the result is way better
export function foldMap$<F extends HKTS, M>(foldable: StaticFoldable<F>, monoid: StaticMonoid<M>): <A>(f: (a: A) => M, fa: HKT<A>[F]) => M {
return <A>(f: (a: A) => M, fa: HKT<A>[F]) => foldable.reduce((acc, x: A) => monoid.concat(f(x), acc), monoid.empty(), fa)
}
// x2 :: Option<number>
const x2 = foldMap$(array, monoid)(a => option.some(a), as)
The pattern
After some trial and error, a clean pattern seems to emerge: whenever in PureScript there is a type constraint, in fp-ts we need a curried function accepting the static dictionaries as first arguments
PureScript
foldMap :: forall f a m. Foldable f, Monoid m => (a -> m) -> f a -> m
TypeScript
function foldMap<F extends HKTS, M>(foldable: StaticFoldable<F>, monoid: StaticMonoid<M>): <A>(f: (a: A) => M, fa: HKT<A>[F]) => M
This is good news: in general porting PureScript libraries to fp-ts is pretty straightforward (with the possible exception of highly polymorphic signatures)
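To see the shape of the pattern without the HKT machinery, here is a self-contained sketch specialised to arrays (foldMapArray and this Monoid interface are illustrative, not fp-ts's actual exports): passing the dictionary first lets TypeScript fix M before it has to infer A.

```typescript
// Minimal Monoid dictionary, mirroring the idea of fp-ts's StaticMonoid.
interface Monoid<M> {
  empty(): M
  concat(x: M, y: M): M
}

// Dictionary-first, curried: M is pinned down by the first call,
// so A is inferred cleanly at the second call site.
function foldMapArray<M>(monoid: Monoid<M>): <A>(f: (a: A) => M, fa: A[]) => M {
  return <A>(f: (a: A) => M, fa: A[]): M =>
    fa.reduce((acc, a) => monoid.concat(f(a), acc), monoid.empty())
}

const monoidSum: Monoid<number> = { empty: () => 0, concat: (x, y) => x + y }

// total :: number -- A is correctly inferred as number
const total = foldMapArray(monoidSum)((n: number) => n * 2, [1, 2, 3])
```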
Alas there are many functions in fp-ts which I wrote when I was still learning TypeScript in February/March and before I was aware of this pattern, for example
Functor/lift
Foldable/toArray
Apply/liftA2
Apply/liftA3
Apply/liftA4
etc..
I'd like to change the current implementation but it's obviously a breaking change. I see 2 options
(1) brutal breaking changes releasing v0.3
(2) migration path from 0.2 to 0.3 via deprecations
An example of (2) would be to define a curried foldMap$ as above and just deprecate foldMap. New code should use foldMap$ instead of foldMap so it will be almost ready for the upgrade to v0.3 (should be a simple replace). Not sure if it's worth it though
A Con of (2) is a more confusing code base, a Pro is that it leaves some time to figure out if there will be other breaking changes affecting other parts of fp-ts.
@OliverJAsh @sledorze @danielepolencic @leemhenson sorry to tag you all but being "real world" users I'd love to hear from you what you think
I'm fine with (1). It's still early days and I don't mind the breaking changes.
Also, I'll take the opportunity to ask a question.
If we decide for:
(1) brutal breaking changes releasing v0.3
Is there anything like https://github.com/facebook/jscodeshift for Typescript?
Breaking changes is ok with me.
... leaves some time to figure out if there will be other breaking changes affecting other parts of fp-ts.
Maybe leave 0.3 in pre for a while until this is bottomed out though?
Is there anything like https://github.com/facebook/jscodeshift for Typescript?
That would be great; AFAIK there's no such tool. Perhaps it will be available after the work on prettier and typescript-eslint-parser has settled down
PR here https://github.com/gcanti/fp-ts/pull/93
Maybe leave 0.3 in pre for a while until this is bottomed out though?
I will publish a version in the next channel (npm install fp-ts@next) so we can check that everything is ok before actually releasing
@gcanti breaking changes fine with me too!
Relevant https://github.com/gcanti/fp-ts/pull/93#issuecomment-303172449
Hi all, I put up a branch (dev) with the lib folder committed in, please give it a spin on your codebase / lib and let me know if there are unexpected issues
npm i gcanti/fp-ts#dev
Pre-released v0.3 as fp-ts@next
FYI here's the change log so far
# 0.3.0
- **New Feature**
- add `StateT` monad transformer, closes #104 (@gcanti)
- add `Store` comonad, closes #100 (@rilut)
- add `Last` monoid, closes #99 (@gcanti)
- add `Id` monad (@gcanti)
- Array: add extend instance (@gcanti)
- NonEmptyArray: add comonad instance (@gcanti)
- `examples` folder
- `exercises` folder
- **Polish**
- Tuple: remove StaticFunctor checking (@rilut)
- **Breaking Change** (@gcanti)
- required typescript version: **2.3.3**
- drop `Static` prefix in type classes
- Change contramap signature, closes #32
- Validation: remove deprecated functions
- Foldable/toArray
- Dictionary/fromFoldable
- Dictionary/toUnfoldable
- Profunctor/lmap
- Profunctor/rmap
- Unfoldable/replicate
- compositions: renaming and signature changes
- `getFunctorComposition` -> `getCompositionFunctor`
- `getApplicativeComposition` -> `getCompositionApplicative`
- `getFoldableComposition` -> `getCompositionFoldable`
- `getTraversableComposition` -> `getCompositionTraversable`
- `OptionT`, `EitherT`, `ReaderT` refactoring
- drop `IxMonadT`, move `IxIO` to the `examples` folder
- drop `Trans` module
- `Free` refactoring
- drop `rxjs` dependency
- drop `lib-jsnext` folder
- make `None` constructor private
- remove `Pointed` and `Copointed` type classes
| gharchive/issue | 2017-05-18T08:24:11 | 2025-04-01T06:44:16.116298 | {
"authors": [
"danielepolencic",
"gcanti",
"leemhenson",
"sledorze"
],
"repo": "gcanti/fp-ts",
"url": "https://github.com/gcanti/fp-ts/issues/91",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
118549737 | Fix some bugs
Added test cases for the bugs found:
1. Incorrect Item.step if the step value is more than numeric_limits
TEST_F(SequenceParserTest, bigStep)
2. Incorrect restoring of the file name from a sequence (file_16 was restored as file_6)
TEST_F(SequenceParserTest, test8_10_16)
3. Incorrect sequence created for the case file_2, file_3, file_4, file_100, file_101, file_102
TEST_F(SequenceParserTest, disconnectedSequence2)
Added fixes for those test cases.
@gchatelet I added fixes according to the comments
So first of all, thx a lot for the pull request. In general I'd like pull request to be limited in scope.
Can you split it ? To me, the most important part is the one that adds the unit tests.
If you don't have time I can do it and integrate your change in master.
It would be very nice if you could finish this task.
A new version is available in master which should be more robust and faster.
I'm closing this pull request but please open bugs against me if you find weird behaviors.
| gharchive/pull-request | 2015-11-24T07:31:02 | 2025-04-01T06:44:16.140256 | {
"authors": [
"ViacheslavYartsev",
"gchatelet"
],
"repo": "gchatelet/light_sequence_parser",
"url": "https://github.com/gchatelet/light_sequence_parser/pull/9",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
152988769 | Tidy up module names
There are now a lot of top level modules. It would be clearer to move some of these into parent modules such as:
core
rest-api
store-implementation
accumulo-store
array-list-store
integration-test
library
function-library
operation-library
serialiser-library
example
example-graph
example-rest
this should be done after gh-172.
This should also fix gh-172.
Merged into develop.
In the merge, some files from "example" (now "example-graph") and gaffer-integration-tests (now "integration-test") did not get moved. Issue reopened to resolve this.
Merged into develop
Migration steps
The example module has been renamed to example-graph
| gharchive/issue | 2016-05-04T11:25:25 | 2025-04-01T06:44:16.144953 | {
"authors": [
"ak8532110",
"p013570",
"t616178"
],
"repo": "gchq/Gaffer",
"url": "https://github.com/gchq/Gaffer/issues/182",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
2000485861 | 🛑 Numaterra is down
In 3231a09, Numaterra ($NUMATERRA) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Numaterra is back up in afc91c9 after 8 minutes.
| gharchive/issue | 2023-11-18T16:28:34 | 2025-04-01T06:44:16.162454 | {
"authors": [
"louisgcom"
],
"repo": "gcommeuneidee/upptime",
"url": "https://github.com/gcommeuneidee/upptime/issues/698",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
Can someone help me please? I got this error on Netlify -> Command failed with exit code 255: hugo
Welcome to Academic's GitHub repo 👋
We use GitHub only for bug reports and feature requests - it's our project management tool.
For help and questions, please join our community chat or use the forum 🚑.
Also, you can search and browse the extensive Academic and Hugo documentation.
For questions on Blogdown, please reach out to the Blogdown community.
Describe the bug
A clear and concise description of what the bug is.
To Reproduce
Steps to reproduce the behavior:
Go to '...'
Click on '....'
See error '...'
Expected behavior
A clear and concise description of what you expected to happen.
Technical details:
Academic Version:
Hugo Version:
Browser/OS:
If applicable, add screenshots to help explain the issue.
For support, see https://github.com/gcushen/hugo-academic/blob/master/.github/support.md
Once you're in the support channel, it would be helpful if you can link to your Github repo and describe all of the steps you went through to reproduce the issue.
| gharchive/issue | 2020-07-08T21:22:02 | 2025-04-01T06:44:16.189940 | {
"authors": [
"Bortogit",
"gcushen"
],
"repo": "gcushen/hugo-academic",
"url": "https://github.com/gcushen/hugo-academic/issues/1775",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
346686753 | Improve this page / Edit this page
Is there a possibility to add a button with a link to the corresponding GitHub file? People could PR typos etc. This would make cooperation a lot easier!
This exists in the customized version of Academic which powers the documentation site.
If we move it into the core Academic, should it appear in the same place - next to the title?
Thank you for considering this enhancement! Yes, I have already seen the "Edit"-button in your customized version of Academic.
Where and how should the button appear? Hmmm, I would prefer that it would not so obtrusive. (In your page it looks not too obtrusive as you have still on the right sight the table of content.)
Maybe a small icon like in the bookdown/gitbook books e.g. here? Preferred places are top right or bottom right of the site (e.g. here or here). The last one (bottom) only if one could always see this button on the screen.
You are using in your customized version a link in the menu to your GitHub repo. Would it be feasible to use this same link in the menu, but not to main page of the repo but to the specific page (whenever you are on a specific page)?
But concerning the design you are the expert! I would be happy to have this functionality wherever this button appears.
Another issue: instead of 'Edit' I would prefer other text, e.g. "Improve this page". I am pretty new to this community of programmers and designers, and 'Edit this page' was not really speaking to me, as I thought I had to contribute arguments and sophisticated content. "Improve this page" has a much lower threshold, as reporting typos is also included. Maybe you could make the text and icon customizable?
But again: for me, the functionality is what counts mainly. I am not a UX expert.
| gharchive/issue | 2018-08-01T17:02:36 | 2025-04-01T06:44:16.193856 | {
"authors": [
"gcushen",
"petzi53"
],
"repo": "gcushen/hugo-academic",
"url": "https://github.com/gcushen/hugo-academic/issues/617",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
273647960 | Making Abstracts Optional
The simple idea behind this PR is that there is no point in showing the abstract header if there is no actual abstract. It therefore makes the abstract optional. Particularly new users who already have some publications might benefit from this. It simplifies the setup of a nice looking site while allowing abstracts to be added later on.
@gcushen Please take a look.
@igilitschenski thanks for your contribution! This commit also makes publication abstract behave consistently with talk abstract.
| gharchive/pull-request | 2017-11-14T02:37:08 | 2025-04-01T06:44:16.195334 | {
"authors": [
"gcushen",
"igilitschenski"
],
"repo": "gcushen/hugo-academic",
"url": "https://github.com/gcushen/hugo-academic/pull/382",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1305021127 | Tweaks for 5
Some tweaks requested by @landreev - near the finishing touches to the 5 release!
Closes #56
Closes #57
This looks great! Are you planning to address #61 here, or in a different PR?
Yeah, this is a small thing only. I think it's ok to include here, the place isn't crowded yet 🤗
Yay! 🎉
| gharchive/pull-request | 2022-07-14T16:27:03 | 2025-04-01T06:44:16.196954 | {
"authors": [
"landreev",
"poikilotherm"
],
"repo": "gdcc/xoai",
"url": "https://github.com/gdcc/xoai/pull/60",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
76406017 | Add GDG Bingham University to Zeppelin List
We just completed our GDG website and we are happy to announce our adoption of Project Zeppelin.
It is really interesting to see that you are using Zeppelin for a chapter website. Really cool @gdgbhu
| gharchive/pull-request | 2015-05-14T16:04:38 | 2025-04-01T06:44:16.199649 | {
"authors": [
"gdgbhu",
"tasomaniac"
],
"repo": "gdg-x/zeppelin",
"url": "https://github.com/gdg-x/zeppelin/pull/71",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
2479692140 | 🛑 Rustpad is down
In 042eba5, Rustpad ($SITE_RUSTPAD) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Rustpad is back up in 43bf3d1 after 14 minutes.
| gharchive/issue | 2024-08-22T03:37:00 | 2025-04-01T06:44:16.202742 | {
"authors": [
"gdm257"
],
"repo": "gdm257/upptime",
"url": "https://github.com/gdm257/upptime/issues/274",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
713221451 | Move some modules (vdm, verdict-bundle-parent)
Copy com.ge.research.osate.verdict.vdm from tools/verdict to
tools/verdict-data-model.
Change com.ge.research.osate.verdict.vdm (we still need it) to
simply bundle vdm and its transitive dependencies into an Eclipse
bundle that the other verdict bundles can use.
Change tools/verdict-back-ends/verdict-bundle-parent projects to
use verdict-data-model's new Maven coordinates instead of
com.ge.research.osate.verdict.vdm's Maven coordinates.
Move tools/verdict-back-ends/verdict-bundle-parent projects up one
directory to tools/verdict-back-ends and remove empty
verdict-bundle-parent directory.
Update build instructions in tools/verdict/back-ends/README.md.
If you have an Eclipse workspace, you will need to remove and reimport
all the Verdict projects in order to update your workspace. Don't
forget to run mvn clean install -Dtycho.localArtifacts=ignore in the
tools directory before reimporting the projects.
With this VDM move, if we create a completely new project that depends on VDM, what does user need to do?
With this VDM move, if we create a completely new project that depends on VDM, what does user need to do?
Assuming the user is creating a Maven project and he wants to use VDM.
Also, I noticed that you list all the modules (originally in the verdict-bundle-parent) in the verdict-back-end repo. I misunderstood what you said on Slack. I still think it is better to group original modules in the bundle project inside one Maven project since we use the entire project as a standalone jar.
After this VDM move, if you want to create a Maven project and use VDM in the project, you will put a single dependency in your pom.xml:
<dependency>
<groupId>com.ge.verdict</groupId>
<artifactId>verdict-data-model</artifactId>
</dependency>
The instructions will be different if you are creating an Eclipse plugin project; then you need to put a Require-Bundle: com.ge.research.osate.verdict.vdm in your MANIFEST.MF.
Some advantages to moving the projects are:
The verdict-bundle-parent directory wasn't a very good name. I like verdict-back-ends better.
Since Maven ignores the STEM and OCaml projects, we benefit nothing from having the verdict-bundle-parent directory.
We won't have to cd one level deeper in the terminal or see two nested "project of projects" projects in our Eclipse project (both verdict-back-ends and verdict-bundle-parent).
Logically and in truth, the verdict-bundle app invokes all the back-end programs, not just the Java programs. It just runs the non-Java programs in subprocesses while it calls the Java programs directly.
There's only one README to read (verdict-back-ends/README.md) for back-end build instructions now, not two READMEs.
Both Vidhya and Daniel liked the idea and said okay already.
Some advantages to moving the projects are:
The verdict-bundle-parent directory wasn't a very good name. I like verdict-back-ends better.
Since Maven ignores the STEM and OCaml projects, we benefit nothing from having the verdict-bundle-parent directory.
We won't have to cd one level deeper in the terminal or see two nested "project of projects" projects in our Eclipse project (both verdict-back-ends and verdict-bundle-parent).
Logically and in truth, the verdict-bundle app invokes all the back-end programs, not just the Java programs. It just runs the non-Java programs in subprocesses while it calls the Java programs directly.
There's only one README to read (verdict-back-ends/README.md) for back-end build instructions now, not two READMEs.
Both Vidhya and Daniel liked the idea and said okay already.
I have no objection against the naming change from verdict-bundle-parent to verdict-back-ends.
The way I see it, the modules inside the original verdict-bundle-parent are all written in Java and should belong to the same Java Maven project. Now it seems like they are mixed with the STEM and OCaml projects, which makes the repo look really unclean.
To me the entire bundle project is the middle layer of the VERDICT project. The real back-ends are Soteria++ and Kind2.
Conceptually, the entire bundle project is a standalone Java Maven project. Each of other projects (aadl2iml, Soteria++) is a standalone OCaml project. STEM is just one of the inputs to the bundle project. If we import the "original bundle project" from verdict-back-ends repo into Eclipse, would the OCaml project be imported into Eclipse as well? If yes, we should avoid doing that since those OCaml projects are not part of bundle project.
I agree that one single README is cleaner. We can still keep that one at that level, but the wording needs to change a bit to say that all the Java Maven modules are under the bundle project (or you can give it a more informative and meaningful name), and show the build instructions for the bundle at that level as you already did.
I am sorry about this confusion. I know it would cost you extra effort to make the changes, but I think logically the Java Maven project modules should reside separately from the OCaml projects.
Do you want to consider moving the STEM and OCaml projects instead of the Java Maven projects? That would minimize the rework time since I won't have to change as many pom files. However, I have a hard time thinking of logical names for their new locations that would be an improvement over either 1) keeping all the back-end projects in the same directory as I propose, or 2) making verdict-bundle the middle layer of the VERDICT project as you want.
To answer your question about Eclipse, you were importing the entire tools directory into Eclipse already. If you expand your tools project in your Eclipse workspace, you will see verdict and verdict-back-ends directories even though they also exist as their own projects in the workspace. Likewise, if you expand your verdict-back-ends project, you will see STEM, aadl2iml, and soteria_pp directories as well as verdict-bundle-parent even though your workspace doesn't have separate projects for them, only for verdict-bundle-parent. That's what still would happen if all the back-end projects were in the same directory (verdict-back-ends). You would see verdict-back-ends as a project and the rest of the Java Maven projects as projects but you would never see STEM, aadl2iml, or soteria_pp unless you expanded your verdict-back-ends project.
If you still want me to go back to the structure we had before, I will make the following two changes:
Change tools/verdict-back-ends/verdict-bundle-parent to tools/verdict-back-ends/verdict-bundle
Change tools/verdict-back-ends/verdict-bundle-parent/verdict-bundle to tools/verdict-back-ends/verdict-bundle/verdict-bundle-app
You will see tools, verdict, verdict-back-ends, and verdict-bundle as module projects in your workspace (they will contain only directories, not source files) and the rest of the Java Maven projects as source projects (they will have Java files).
I prefer the following suggestion to group all the bundle modules inside one Java Maven project as you proposed below. The rest remains unchanged. You can still keep the README inside the verdict-back-ends repo.
If you still want me to go back to the structure we had before, I will make the following two changes:
Change tools/verdict-back-ends/verdict-bundle-parent to tools/verdict-back-ends/verdict-bundle
Change tools/verdict-back-ends/verdict-bundle-parent/verdict-bundle to tools/verdict-back-ends/verdict-bundle/verdict-bundle-app
FYI, I haven't been able to fully test the changes because I've been having some DNS problems inside my WSL build environment after switching from WSL1 to WSL2. I'll have to merge the pull request after the checks pass, let CI update the docker image and update site, and then use them to test the changes. Please help test the changes if you can too.
Actually I also have a Cygwin build environment so I've built the plugin and tested it in my OSATE now. soteria_pp crashes for some reason but otherwise the translators seem to work. I'll merge the PR and let others test too.
| gharchive/pull-request | 2020-10-01T22:20:27 | 2025-04-01T06:44:16.223017 | {
"authors": [
"baoluomeng",
"tuxji"
],
"repo": "ge-high-assurance/VERDICT",
"url": "https://github.com/ge-high-assurance/VERDICT/pull/88",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
1174451982 | credential_publickey DER and PEM now empty
I think this change: https://github.com/gebogebogebo/ctap-hid-fido2/commit/3fb2de925df4ae0bdb3570dad91b22f8027762eb#diff-e8afd4d5ffde3d3d24c8c765790dc6bec1108ab9aaf4fd53df8c9a4be0bbe3f7L83
broke the availability of the credential public key data. I think it's still possible to work around it, because the key should be in the attestation statement, but this parsing seems to be failing or doing something slightly wrong.
I might have some time to look into it over the next few days but maybe it's obvious to @gebogebogebo :)
I know there is a problem where the generated attestation public key is empty when registering with a key type of Ed25519.
There is an unknown part of the logic for converting COSE format public keys to DERs that has not been implemented.
There is an unknown part of the logic for converting COSE format public keys to DERs that has not been implemented.
If the logic of the following link can be filled in, it can be addressed.
https://github.com/gebogebogebo/ctap-hid-fido2/blob/8310d49cfd0b36de441ab1a6dd4a5477fc29d817/src/cose.rs#L128-L133
If the logic of the following link can be filled in, it can be addressed !
Understood, I can look at it in the coming days :)
In the mean time you can see how I get around the issue and extract the public key from the attestation data here: https://github.com/obelisk/sshcerts/blob/new_fido2/src/fido/parsing.rs#L107-L163
Thanks again for writing this library, I want to help anyway I can!
Thank you as always !!!
Resolved in 3.5.0
| gharchive/issue | 2022-03-20T06:20:30 | 2025-04-01T06:44:16.272164 | {
"authors": [
"gebogebogebo",
"obelisk"
],
"repo": "gebogebogebo/ctap-hid-fido2",
"url": "https://github.com/gebogebogebo/ctap-hid-fido2/issues/36",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1923450299 | 🛑 COM GURU is down
In a6a60eb, COM GURU ($SITE_COM_GURU) was down:
HTTP code: 0
Response time: 0 ms
Resolved: COM GURU is back up in d34b41e after 13 minutes.
| gharchive/issue | 2023-10-03T06:57:58 | 2025-04-01T06:44:16.287285 | {
"authors": [
"gedesco-git"
],
"repo": "gedesco-git/uptime",
"url": "https://github.com/gedesco-git/uptime/issues/472",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1977404830 | 🛑 COM GURU is down
In c7d6f62, COM GURU ($SITE_COM_GURU) was down:
HTTP code: 0
Response time: 0 ms
Resolved: COM GURU is back up in f342c1d after 8 minutes.
| gharchive/issue | 2023-11-04T15:47:50 | 2025-04-01T06:44:16.289446 | {
"authors": [
"gedesco-git"
],
"repo": "gedesco-git/uptime",
"url": "https://github.com/gedesco-git/uptime/issues/566",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2236370617 | 🛑 COM GURU is down
In 2636f85, COM GURU ($SITE_COM_GURU) was down:
HTTP code: 0
Response time: 0 ms
Resolved: COM GURU is back up in 9cd4c6d after 9 minutes.
| gharchive/issue | 2024-04-10T20:25:22 | 2025-04-01T06:44:16.291793 | {
"authors": [
"gedesco-git"
],
"repo": "gedesco-git/uptime",
"url": "https://github.com/gedesco-git/uptime/issues/952",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1172175852 | Error report
I want a system that messages me whenever an error occurs anywhere.
Right now, when we use the telegram helper to send messages, we don't notice at all if there's an error.
There should be a system for storing errors and also for reporting them.
I'll send every error that occurs in the system to my Telegram (though I'm not sure about all the errors).
As for logging the errors, I'll leave that to nginx.
| gharchive/issue | 2022-03-17T10:09:20 | 2025-04-01T06:44:16.339330 | {
"authors": [
"geeksesi"
],
"repo": "geeksesi/guess_emoji_telegram_bot",
"url": "https://github.com/geeksesi/guess_emoji_telegram_bot/issues/11",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
258157082 | fix multiple disk issue
added support for multiple disks to fix #35 :^)
the original line actually fails normally for me, this is a good fix!
I generally get this output:
$ ./prepare-iso.sh
xargs: illegal option -- -
usage: xargs [-0opt] [-E eofstr] [-I replstr [-R replacements]] [-J replstr]
[-L number] [-n number [-x]] [-P maxprocs] [-s size]
[utility [argument ...]]
| gharchive/pull-request | 2017-09-15T20:35:01 | 2025-04-01T06:44:16.373808 | {
"authors": [
"dotCipher",
"yslgirl"
],
"repo": "geerlingguy/macos-virtualbox-vm",
"url": "https://github.com/geerlingguy/macos-virtualbox-vm/pull/36",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1880593887 | Soulreaper axe not working
Soulreaper axe can be visually replaced by other items, but the attack animation stays the same. Stand/Move animations work perfectly fine. I'm guessing it's because of the new attack animation, like for fang in https://github.com/geheur/weapon-animation-replacer/issues/29.
Love your work!
Do you happen to know what animations it uses? Both for attack/defend and stand/move.
I don't have membership and I don't have the axe.
https://github.com/runelite/plugin-hub/pull/4847
| gharchive/issue | 2023-09-04T16:21:50 | 2025-04-01T06:44:16.393265 | {
"authors": [
"MaxouOS",
"geheur"
],
"repo": "geheur/weapon-animation-replacer",
"url": "https://github.com/geheur/weapon-animation-replacer/issues/41",
"license": "BSD-2-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
306015174 | C89: lfs.c:638:24: error: taking address of temporary array and other instances
To resolve this problem for my project, I simply moved the declaration of the temporary struct array out of the function call and then used it like so:
struct lfs_region region[] = {
{entry->off, sizeof(entry->d), &entry->d, sizeof(entry->d)},
{entry->off+sizeof(entry->d), entry->d.nlen, data, entry->d.nlen}
};
int err = lfs_dir_commit(lfs, dir, region, data ? 2 : 1);
This happens in several other places
lfs.c:654:21: error: taking address of temporary array
lfs.c:674:21: error: taking address of temporary array
lfs.c:714:13: error: taking address of temporary array
lfs.c:1119:63: warning: taking address of temporary
What's interesting is that sometimes I just get a warning for it and other times it's an error, even though everything gets the same flags.
I think this is related to https://github.com/geky/littlefs/issues/39 (C89) but correct me if I'm wrong.
The disparity between warnings and errors may be because some of them are const?
Yeah it's definitely a C89 vs C99 issue. I managed to rework LittleFS to work in C89 though not sure if that is a goal for your project.
I always prefer writing against C89 because sometimes you run into that one toolchain for a project that doesn't support C99, but I would understand if your project makes C99 a minimum requirement; there aren't many embedded SDKs that don't do C99 nowadays.
| gharchive/issue | 2018-03-16T17:30:57 | 2025-04-01T06:44:16.400477 | {
"authors": [
"geky",
"kgillespieatmosphere"
],
"repo": "geky/littlefs",
"url": "https://github.com/geky/littlefs/issues/40",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
149799954 | Default view to mask out redundant automatic annotations
"Strip out redundant TAS/ISS/IEA etc annotations that are covered by experimental annotations"
From the Geneva meeting, minutes are sketchy, but here's the link:
https://docs.google.com/document/d/12CDf4s8YsylPMNHM3UuPis9NyVQ9WFpBpNYTFYAH7kc/edit#heading=h.t9fwrmp9dd6o
Basically reduce the noise level. These computed annotations offer no additional information about the gene product.
I believe this is a dupe of #43, or should at least be merged over.
And/or #294.
| gharchive/issue | 2016-04-20T15:15:48 | 2025-04-01T06:44:16.444373 | {
"authors": [
"kltm",
"selewis"
],
"repo": "geneontology/amigo",
"url": "https://github.com/geneontology/amigo/issues/347",
"license": "bsd-3-clause",
"license_type": "permissive",
"license_source": "bigquery"
} |
264354161 | NTR: negative regulation of autoproteolysis
Potential GO term name: negative regulation of autoproteolysis
Definition: Any process that stops, prevents or reduces the frequency, rate or extent of the hydrolysis of a peptide bond or bonds within a protein catalyzed by the protein itself.
Aspect: Biological Process
-Relationships:
(Parent Term) GO:0045861 negative regulation of proteolysis
-References: PMID 23303392
-Synonyms: negative regulation of self-proteolysis; negative regulation of autocleavage
This is for CACAO.
@sfmcguinness and @jimhu-tamu
I'm looking at this part of the ontology, and seeing two different possible parent terms, neither of which are 'autoproteolysis':
GO:0019538 protein metabolic process
-GO:0006508 proteolysis
--GO:0097264 self proteolysis
---GO:1990092 calcium-dependent self proteolysis
---GO:1990091 sodium-dependent self proteolysis
GO:0019538 protein metabolic process
-GO:0051604 protein maturation
-GO:0006508 proteolysis
--GO:0016485 protein processing
--GO:0016540 protein autoprocessing
The definitions are:
GO:0097264 self-proteolysis
The hydrolysis of proteins into smaller polypeptides and/or amino acids by cleavage of their own peptide bonds. Source: GOC:yaf, PMID:18676612, PMID:19144634
GO:0016540 protein autoprocessing
Processing which a protein carries out itself. This involves actions such as the autolytic removal of residues to generate the mature form of the protein. Source: GOC:ai, PMID:9335337
@ukemi - would it make sense to move GO:0097264 to be an is_a child of GO:0016540?
Also, @jimhu-tamu - Do the CACAO students use annotation extensions? I am wondering here about the correct usage and interpretation of the proposed term 'negative regulation of autoproteolysis'.
This paper sounds like a good candidate for a GO-CAM model.
would it make sense to move GO:0097264 to be an is_a child of GO:0016540?
It would only make sense if self-proteolysis always leads to the mature form of a protein. In any other context, we would break the true path rule.
PMID:24779472 suggests that it would break the true path rule.
Okay, thanks @ukemi
@jimhu-tamu and @sfmcguinness since we don't have 'autoproteolysis' but instead have 'self proteolysis' here's my proposal for term name and placement in the ontology:
GO:0045861 negative regulation of proteolysis
-GO:new negative regulation of self proteolysis
Synonyms: negative regulation of self-proteolysis; negative regulation of autocleavage, negative regulation of autoproteolysis, negative regulation of autolysis
I would also propose to add synonyms to self proteolysis (GO:0097264):
Existing synonyms: self-proteolysis, autolysis
New synonyms: autocleavage, autoproteolysis
Does that look okay to you?
yes but I would change the synonyms to remove autolysis. Autolysis is not a synonym for autoproteolysis, since autolysis is usually refers to a process at the cellular level.
https://en.wikipedia.org/wiki/Autolysis_(biology)
Thanks, @jimhu-tamu @sfmcguinness
In looking at the parentage for the new term, I see that it would map up to the 'cellular protein metabolic process' branch of the BP, but wouldn't have any relation to the 'viral processes' branch of the BP. Should the new term instead be under the 'viral processes' branch?
If so, then perhaps the new term should instead be a child of something like GO:0046726 positive regulation by virus of viral protein levels in host cell:
Although there are only four experimentally based annotations to GO:0046726 and they are all to human proteins, so I'm a bit uncertain about how this term was intended to be used.
@pgaudet - would annotating a viral protein to a term like GO:0046726 (or a more specific child term like 'negative regulation by virus of viral protein self proteolysis') be consistent with how the viral group would interpret its meaning?
In summary, I think the biology here is:
Hex (phage protein) -| RecA (bacterial protein) -> 434 (phage protein) self proteolysis
@jimhu-tamu @sfmcguinness - is that right?
Sorry this has gotten complicated, but I just want to make sure we get the biology and ontology parentage correct.
It always gets complicated! Your diagram looks right to me. But note that the
RecA stimulation of autocleavage is itself regulated by RecA activities/SOS response to DNA damage
The RecA stimulated autocleavage happens for both viral and host proteins. RecA stimulates cleavage of LexA, the repressor of the SOS response, and several other proteins, including subunit of an error-prone translesion DNA polymerase.
Hi @vanaukenk
Most annotations to the terms:
regulation by virus of viral protein levels in host cell
positive regulation by virus of viral protein levels in host cell
negative regulation by virus of viral protein levels in host cell
are indeed to host proteins and therefore wrong; I will challenge them.
Hi @pgaudet
What annotations to host proteins are you referring to? Not the original annotation we want to make, I hope.
Hi @jimhu-tamu
Most annotations are to host proteins:
TAIR:locus:2184362 AT5G04430 GO:0046719 TAIR NCBITaxon:3702 IMP TAIR:Publication:501727403|PMID:18762309 20080929
UniProtKB:Q96BZ9 TBC1 domain family member 20 GO:0046726 AgBase NCBITaxon:9606 IMP PMID:17686842 20141112
UniProtKB:Q16531 DNA damage-binding protein 1 GO:0046726 AgBase NCBITaxon:9606 IMP PMID:23137809 20141218
UniProtKB:P42224 Signal transducer and activator of transcription 1-alpha/beta GO:0046725 AgBase NCBITaxon:9606 IMP PMID:15825084 20141112
UniProtKB:P63172 Dynein light chain Tctex-type 1 GO:0019060 UniProt NCBITaxon:9606 IMP PMID:18647839 20100507
UniProtKB:P09914 Interferon-induced protein with tetratricopeptide repeats 1 GO:0019060 BHF-UCL NCBITaxon:9606 IDA PMID:19008854 20101230
UniProtKB:Q9P035 Very-long-chain (3R)-3-hydroxyacyl-CoA dehydratase 3 GO:0046726 AgBase NCBITaxon:9606 IMP PMID:18160438 20150115
UniProtKB:P67868 Casein kinase II subunit beta GO:0046719 AgBase NCBITaxon:9913 IMP PMID:19692545 20160126
UniProtKB:O95793 Double-stranded RNA-binding protein Staufen homolog 1 GO:0046726 AgBase NCBITaxon:9606 IMP PMID:23907398 20141112
@pgaudet
Initially that makes sense, and a quick scan suggests to me that many are wrong. But it may hard to tell just by looking at the gaf for the following reason:
if we have a process that is a chain of regulatory steps, wouldn't we annotate both host and viral proteins involved?
In the paper being curated by @sfmcguinness, hex is a viral protein, which regulates RecA, a host protein, which regulates cI, a viral protein, which regulates a program of viral gene expression. From a process perspective, shouldn't RecA be part of regulation by virus of protein levels in host cell? (Or whatever we use for regulation of viral transcription).
If yes, then some or all of those might be correct.
Hi @jimhu-tamu
I don't know why we came on the topic of viral process for this term in fact - why was this suggested already ?
Thanks, Pascale
I'm confused too.
;) @vanaukenk ? Help please !
Hi @pgaudet and @jimhu-tamu
The reason the viral processes issue came up is that with the first suggested placement of the requested term in the ontology, there would have been no indication, by following the parentage, that what was being captured here was a viral process (if, indeed it is - I thought it was).
See my comment above with the screenshots of the ontology in AmiGO.
Thanks @vanaukenk !
So looking at the original reference (PMID:23303392): let me see if I understand the biology:
Bacterial RecA normally triggers phage proteins to perform autoproteolysis, so this is some sort of "defense response to virus", and it should also be OK to annotate to "positive regulation of autoproteolysis" (which in this case is not a viral process). You could even use the term 'negative regulation by host of viral transcription' which is the phage target of RecA.
Phage Hex negatively regulates RecA. This could captured under 'modulation by symbiont of host defense response'; perhaps under that term you want to create 'inhibition by virus of negative regulation by host of viral transcription' (but that's a mouthful! and a Noctua model probably).
Another way to capture the function of phage Hex could be under 'suppression by virus of host molecular function', under which we could have something like 'suppression by virus of host protease activator activity'.
If that makes sense, then the only term you need to create would be 'suppression by virus of host protease activator activity'.
Thanks, Pascale
I like this suggestion:
Another way to capture the function of phage Hex could be under 'suppression by virus of host molecular function', under which we could have something like 'suppression by virus of host protease activator activity'.
@jimhu-tamu @sfmcguinness What do you think?
| gharchive/issue | 2017-10-10T19:50:45 | 2025-04-01T06:44:16.478617 | {
"authors": [
"jimhu-tamu",
"pgaudet",
"sfmcguinness",
"ukemi",
"vanaukenk"
],
"repo": "geneontology/go-ontology",
"url": "https://github.com/geneontology/go-ontology/issues/14341",
"license": "CC-BY-4.0",
"license_type": "permissive",
"license_source": "github-api"
} |
652082626 | modify definitions of GO:0043886 and GO:0031470
Hello,
GO:0043886 (structural constituent of carboxysome, an MF term) has the following definition:
"The action of a molecule that contributes to the structural integrity of a carboxysome, an organelle found in the Cyanobacteria consisting of a proteinaceous coat and enzymes for the fixation of carbon dioxide. "
--carboxysomes also exist in some chemoautotrophs, (see PMID 28934381 for a review) so the definition is too narrow. I suggest the following:
"The action of a molecule that contributes to the structural integrity of a carboxysome, an organelle found in all cyanobacteria and some chemoautotrophs, consisting of a proteinaceous coat and enzymes for the fixation of CO(2). "
There is also an improvement possible for the carboxysome term in GO (GO:0031470). It says:
"An organelle consisting of a proteinaceous coat and enzymes for the fixation of carbon dioxide including mechanisms for the concentration of carbonate to increase the efficiency of fixation under low-carbon dioxide conditions."
--which isn't quite true, they are required under atmospheric conditions, and to increase CO(2) levels (the RuBisCO substrate), not carbonate. So:
"An organelle consisting of a proteinaceous coat and enzymes for the fixation of CO(2). It augments the concentration of CO(2) in the vicinity of RuBisCO to increase the efficiency of CO(2) fixation under atmospheric conditions."
Thanks, Andrea
Hi Andrea,
(We discussed this a few days ago): following that discussion, are there proteins whose function is to maintain the structure of the carboxysome? Or should that term be obsoleted? (I see there are no annotations.)
Thanks, Pascale
Hi Pascale,
There are a number of proteins that form the carboxysome; the shells are formed by CcmK/CsoS1 paralogues, the vertices by CcmL/EutN/CsoS4/PduL proteins, probable minor shell proteins such as CcmO and CcmP, with other proteins that also help structurally and with assembly, CcmM, CcmN (note I didn't go looking for all their different names in different compartments).
Thanks, Andrea
| gharchive/issue | 2020-07-07T07:39:53 | 2025-04-01T06:44:16.483833 | {
"authors": [
"AndreaAuchincloss",
"pgaudet"
],
"repo": "geneontology/go-ontology",
"url": "https://github.com/geneontology/go-ontology/issues/19743",
"license": "CC-BY-4.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2284837163 | Renamed ESPNetwork auth property to security
Renamed ESPNetwork auth property to security
Added ESPWifiAuthMode enum
Closing this PR since #15 has been fixed.
| gharchive/pull-request | 2024-05-08T07:00:24 | 2025-04-01T06:44:16.485008 | {
"authors": [
"jolson474",
"otiv33"
],
"repo": "general-galactic/capacitor-esp-idf-provisioning",
"url": "https://github.com/general-galactic/capacitor-esp-idf-provisioning/pull/16",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2255174879 | Create documentation about "GitHub Pages and Jekyll"
Create the repository
Required installations
2.1. https://github.com/rbenv/rbenv#installing-ruby-versions
2.2. https://jekyllrb.com/
Edit the page content (.md)
Implement the theme with remote_theme
| gharchive/issue | 2024-04-21T17:02:40 | 2025-04-01T06:44:16.487153 | {
"authors": [
"genericoltda"
],
"repo": "generico-LTDA/generico",
"url": "https://github.com/generico-LTDA/generico/issues/8",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2610977606 | flash-attn install error
The torch version is 2.5.0.
When installing flash-attn, I got this error:
File "<string>", line 11, in <module>
File "/root/wsy/models/.venv/lib/python3.10/site-packages/setuptools/build_meta.py", line 431, in build_wheel
return _build(['bdist_wheel'])
File "/root/wsy/models/.venv/lib/python3.10/site-packages/setuptools/build_meta.py", line 422, in _build
return self._build_with_temp_dir(
File "/root/wsy/models/.venv/lib/python3.10/site-packages/setuptools/build_meta.py", line 403, in _build_with_temp_dir
self.run_setup()
File "/root/wsy/models/.venv/lib/python3.10/site-packages/setuptools/build_meta.py", line 516, in run_setup
super().run_setup(setup_script=setup_script)
File "/root/wsy/models/.venv/lib/python3.10/site-packages/setuptools/build_meta.py", line 318, in run_setup
exec(code, locals())
File "<string>", line 21, in <module>
File "/root/wsy/models/.venv/lib/python3.10/site-packages/torch/__init__.py", line 368, in <module>
from torch._C import * # noqa: F403
ImportError: /root/wsy/models/.venv/lib/python3.10/site-packages/torch/lib/../../nvidia/cusparse/lib/libcusparse.so.12: undefined symbol: __nvJitLinkComplete_12_4, version libnvJitLink.so.12
I solved this issue by installing torch==2.6.0.dev20241023+cu121,
but when running the inference code, I got this error:
Traceback (most recent call last):
File "/root/wsy/models/src/mochi_preview/infer.py", line 172, in <module>
generate_cli()
File "/root/anaconda3/envs/mo1/lib/python3.10/site-packages/click/core.py", line 1157, in __call__
return self.main(*args, **kwargs)
File "/root/anaconda3/envs/mo1/lib/python3.10/site-packages/click/core.py", line 1078, in main
rv = self.invoke(ctx)
File "/root/anaconda3/envs/mo1/lib/python3.10/site-packages/click/core.py", line 1434, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "/root/anaconda3/envs/mo1/lib/python3.10/site-packages/click/core.py", line 783, in invoke
return __callback(*args, **kwargs)
File "/root/wsy/models/src/mochi_preview/infer.py", line 159, in generate_cli
output = generate_video(
File "/root/wsy/models/src/mochi_preview/infer.py", line 73, in generate_video
load_model()
File "/root/wsy/models/src/mochi_preview/infer.py", line 38, in load_model
model = MochiWrapper(
File "/root/wsy/models/src/mochi_preview/handler.py", line 25, in __init__
ray.get(worker.__ray_ready__.remote())
File "/root/anaconda3/envs/mo1/lib/python3.10/site-packages/ray/_private/auto_init_hook.py", line 21, in auto_init_wrapper
return fn(*args, **kwargs)
File "/root/anaconda3/envs/mo1/lib/python3.10/site-packages/ray/_private/client_mode_hook.py", line 103, in wrapper
return func(*args, **kwargs)
File "/root/anaconda3/envs/mo1/lib/python3.10/site-packages/ray/_private/worker.py", line 2745, in get
values, debugger_breakpoint = worker.get_objects(object_refs, timeout=timeout)
File "/root/anaconda3/envs/mo1/lib/python3.10/site-packages/ray/_private/worker.py", line 903, in get_objects
raise value
ray.exceptions.ActorDiedError: The actor died because of an error raised in its creation task, ray::T2VSynthMochiModel.__init__() (pid=2006211, ip=172.18.0.42, actor_id=cfb1eb3321c66dc48d223a9601000000, repr=<mochi_preview.t2v_synth_mochi.T2VSynthMochiModel object at 0x7f25f20461d0>)
File "/root/wsy/models/src/mochi_preview/t2v_synth_mochi.py", line 251, in __init__
from mochi_preview.dit.joint_model.asymm_models_joint import (
File "/root/wsy/models/src/mochi_preview/dit/joint_model/asymm_models_joint.py", line 8, in <module>
from flash_attn import flash_attn_varlen_qkvpacked_func
File "/root/anaconda3/envs/mo1/lib/python3.10/site-packages/flash_attn/__init__.py", line 3, in <module>
from flash_attn.flash_attn_interface import (
File "/root/anaconda3/envs/mo1/lib/python3.10/site-packages/flash_attn/flash_attn_interface.py", line 10, in <module>
import flash_attn_2_cuda as flash_attn_cuda
ImportError: libcudart.so.11.0: cannot open shared object file: No such file or directory
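The `libcudart.so.11.0` error above suggests the installed flash-attn wheel was compiled against the CUDA 11 runtime, while the `+cu121` torch nightly ships the CUDA 12.1 runtime. A minimal probe (assuming Linux; `has_lib` is just an illustrative helper) to check which runtimes the dynamic loader can actually resolve:

```python
import ctypes

def has_lib(soname: str) -> bool:
    """Return True if the dynamic loader can resolve `soname`."""
    try:
        ctypes.CDLL(soname)
        return True
    except OSError:
        return False

# The traceback shows flash_attn_2_cuda wants the CUDA 11 runtime:
print("libcudart.so.11.0 present:", has_lib("libcudart.so.11.0"))
# ...while a +cu121 torch wheel ships the CUDA 12 runtime instead:
print("libcudart.so.12 present:", has_lib("libcudart.so.12"))
```

If the CUDA 11 runtime is indeed missing, rebuilding flash-attn against the active torch (e.g. `pip uninstall -y flash-attn && pip install flash-attn --no-build-isolation`) usually resolves the mismatch.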
Wow... quite a large (and undecipherable) install error output, which appears to be flash_attn related...
(.venv) ➜ models git:(main) uv pip install -e . --no-build-isolation
Resolved 91 packages in 33ms
error: Failed to prepare distributions
Caused by: Failed to fetch wheel: flash-attn==2.6.3
Caused by: Build backend failed to build wheel through build_wheel() with exit status: 1
--- stdout:
torch.__version__ = 2.5.0+cu124
running bdist_wheel
Guessing wheel URL: https://github.com/Dao-AILab/flash-attention/releases/download/v2.6.3/flash_attn-2.6.3+cu123torch2.5cxx11abiFALSE-cp310-cp310-linux_x86_64.whl
Precompiled wheel not found. Building from source...
running build
running build_py
copying hopper/benchmark_attn.py -> build/lib.linux-x86_64-cpython-310/hopper
copying hopper/__init__.py -> build/lib.linux-x86_64-cpython-310/hopper
copying hopper/test_flash_attn.py -> build/lib.linux-x86_64-cpython-310/hopper
copying hopper/setup.py -> build/lib.linux-x86_64-cpython-310/hopper
copying hopper/flash_attn_interface.py -> build/lib.linux-x86_64-cpython-310/hopper
copying flash_attn/flash_blocksparse_attn_interface.py -> build/lib.linux-x86_64-cpython-310/flash_attn
copying flash_attn/flash_blocksparse_attention.py -> build/lib.linux-x86_64-cpython-310/flash_attn
copying flash_attn/__init__.py -> build/lib.linux-x86_64-cpython-310/flash_attn
copying flash_attn/bert_padding.py -> build/lib.linux-x86_64-cpython-310/flash_attn
copying flash_attn/flash_attn_interface.py -> build/lib.linux-x86_64-cpython-310/flash_attn
copying flash_attn/flash_attn_triton_og.py -> build/lib.linux-x86_64-cpython-310/flash_attn
copying flash_attn/flash_attn_triton.py -> build/lib.linux-x86_64-cpython-310/flash_attn
copying flash_attn/fused_softmax.py -> build/lib.linux-x86_64-cpython-310/flash_attn
copying flash_attn/models/gptj.py -> build/lib.linux-x86_64-cpython-310/flash_attn/models
copying flash_attn/models/__init__.py -> build/lib.linux-x86_64-cpython-310/flash_attn/models
copying flash_attn/models/gpt.py -> build/lib.linux-x86_64-cpython-310/flash_attn/models
copying flash_attn/models/vit.py -> build/lib.linux-x86_64-cpython-310/flash_attn/models
copying flash_attn/models/gpt_neox.py -> build/lib.linux-x86_64-cpython-310/flash_attn/models
copying flash_attn/models/falcon.py -> build/lib.linux-x86_64-cpython-310/flash_attn/models
copying flash_attn/models/llama.py -> build/lib.linux-x86_64-cpython-310/flash_attn/models
copying flash_attn/models/bert.py -> build/lib.linux-x86_64-cpython-310/flash_attn/models
copying flash_attn/models/opt.py -> build/lib.linux-x86_64-cpython-310/flash_attn/models
copying flash_attn/models/baichuan.py -> build/lib.linux-x86_64-cpython-310/flash_attn/models
copying flash_attn/models/btlm.py -> build/lib.linux-x86_64-cpython-310/flash_attn/models
copying flash_attn/models/bigcode.py -> build/lib.linux-x86_64-cpython-310/flash_attn/models
copying flash_attn/ops/__init__.py -> build/lib.linux-x86_64-cpython-310/flash_attn/ops
copying flash_attn/ops/activations.py -> build/lib.linux-x86_64-cpython-310/flash_attn/ops
copying flash_attn/ops/rms_norm.py -> build/lib.linux-x86_64-cpython-310/flash_attn/ops
copying flash_attn/ops/layer_norm.py -> build/lib.linux-x86_64-cpython-310/flash_attn/ops
copying flash_attn/ops/fused_dense.py -> build/lib.linux-x86_64-cpython-310/flash_attn/ops
copying flash_attn/layers/__init__.py -> build/lib.linux-x86_64-cpython-310/flash_attn/layers
copying flash_attn/layers/patch_embed.py -> build/lib.linux-x86_64-cpython-310/flash_attn/layers
copying flash_attn/layers/rotary.py -> build/lib.linux-x86_64-cpython-310/flash_attn/layers
copying flash_attn/losses/__init__.py -> build/lib.linux-x86_64-cpython-310/flash_attn/losses
copying flash_attn/losses/cross_entropy.py -> build/lib.linux-x86_64-cpython-310/flash_attn/losses
copying flash_attn/utils/__init__.py -> build/lib.linux-x86_64-cpython-310/flash_attn/utils
copying flash_attn/utils/benchmark.py -> build/lib.linux-x86_64-cpython-310/flash_attn/utils
copying flash_attn/utils/generation.py -> build/lib.linux-x86_64-cpython-310/flash_attn/utils
copying flash_attn/utils/distributed.py -> build/lib.linux-x86_64-cpython-310/flash_attn/utils
copying flash_attn/utils/pretrained.py -> build/lib.linux-x86_64-cpython-310/flash_attn/utils
copying flash_attn/modules/__init__.py -> build/lib.linux-x86_64-cpython-310/flash_attn/modules
copying flash_attn/modules/block.py -> build/lib.linux-x86_64-cpython-310/flash_attn/modules
copying flash_attn/modules/mlp.py -> build/lib.linux-x86_64-cpython-310/flash_attn/modules
copying flash_attn/modules/mha.py -> build/lib.linux-x86_64-cpython-310/flash_attn/modules
copying flash_attn/modules/embedding.py -> build/lib.linux-x86_64-cpython-310/flash_attn/modules
copying flash_attn/ops/triton/__init__.py -> build/lib.linux-x86_64-cpython-310/flash_attn/ops/triton
copying flash_attn/ops/triton/mlp.py -> build/lib.linux-x86_64-cpython-310/flash_attn/ops/triton
copying flash_attn/ops/triton/cross_entropy.py -> build/lib.linux-x86_64-cpython-310/flash_attn/ops/triton
copying flash_attn/ops/triton/layer_norm.py -> build/lib.linux-x86_64-cpython-310/flash_attn/ops/triton
copying flash_attn/ops/triton/rotary.py -> build/lib.linux-x86_64-cpython-310/flash_attn/ops/triton
copying flash_attn/ops/triton/k_activations.py -> build/lib.linux-x86_64-cpython-310/flash_attn/ops/triton
copying flash_attn/ops/triton/linear.py -> build/lib.linux-x86_64-cpython-310/flash_attn/ops/triton
running build_ext
building 'flash_attn_2_cuda' extension
[1/84] /opt/cuda/bin/nvcc --generate-dependencies-with-compile --dependency-output /home/jw/store/.cache/uv/built-wheels-v3/pypi/flash-attn/2.6.3/jADvc_RdkbmMJ_JjuhiLf/flash_attn-2.6.3.tar.gz/build/temp.linux-x86_64-cpython-310/csrc/flash_attn/src/flash_bwd_hdim128_bf16_sm80.o.d -I/home/jw/store/.cache/uv/built-wheels-v3/pypi/flash-attn/2.6.3/jADvc_RdkbmMJ_JjuhiLf/flash_attn-2.6.3.tar.gz/csrc/flash_attn -I/home/jw/store/.cache/uv/built-wheels-v3/pypi/flash-attn/2.6.3/jADvc_RdkbmMJ_JjuhiLf/flash_attn-2.6.3.tar.gz/csrc/flash_attn/src -I/home/jw/store/.cache/uv/built-wheels-v3/pypi/flash-attn/2.6.3/jADvc_RdkbmMJ_JjuhiLf/flash_attn-2.6.3.tar.gz/csrc/cutlass/include -I/home/jw/store/src/models/.venv/lib/python3.10/site-packages/torch/include -I/home/jw/store/src/models/.venv/lib/python3.10/site-packages/torch/include/torch/csrc/api/include -I/home/jw/store/src/models/.venv/lib/python3.10/site-packages/torch/include/TH -I/home/jw/store/src/models/.venv/lib/python3.10/site-packages/torch/include/THC -I/opt/cuda/include -I/home/jw/store/src/models/.venv/include -I/home/jw/store/.local/share/uv/python/cpython-3.10.15-linux-x86_64-gnu/include/python3.10 -c -c /home/jw/store/.cache/uv/built-wheels-v3/pypi/flash-attn/2.6.3/jADvc_RdkbmMJ_JjuhiLf/flash_attn-2.6.3.tar.gz/csrc/flash_attn/src/flash_bwd_hdim128_bf16_sm80.cu -o /home/jw/store/.cache/uv/built-wheels-v3/pypi/flash-attn/2.6.3/jADvc_RdkbmMJ_JjuhiLf/flash_attn-2.6.3.tar.gz/build/temp.linux-x86_64-cpython-310/csrc/flash_attn/src/flash_bwd_hdim128_bf16_sm80.o -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_BFLOAT16_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr --compiler-options ''"'"'-fPIC'"'"'' -O3 -std=c++17 -U__CUDA_NO_HALF_OPERATORS__ -U__CUDA_NO_HALF_CONVERSIONS__ -U__CUDA_NO_HALF2_OPERATORS__ -U__CUDA_NO_BFLOAT16_CONVERSIONS__ --expt-relaxed-constexpr --expt-extended-lambda --use_fast_math -gencode arch=compute_80,code=sm_80 -gencode arch=compute_90,code=sm_90 --threads 
4 -DTORCH_API_INCLUDE_EXTENSION_H '-DPYBIND11_COMPILER_TYPE="_gcc"' '-DPYBIND11_STDLIB="_libstdcpp"' '-DPYBIND11_BUILD_ABI="_cxxabi1011"' -DTORCH_EXTENSION_NAME=flash_attn_2_cuda -D_GLIBCXX_USE_CXX11_ABI=0
FAILED: /home/jw/store/.cache/uv/built-wheels-v3/pypi/flash-attn/2.6.3/jADvc_RdkbmMJ_JjuhiLf/flash_attn-2.6.3.tar.gz/build/temp.linux-x86_64-cpython-310/csrc/flash_attn/src/flash_bwd_hdim128_bf16_sm80.o
/opt/cuda/bin/nvcc --generate-dependencies-with-compile --dependency-output /home/jw/store/.cache/uv/built-wheels-v3/pypi/flash-attn/2.6.3/jADvc_RdkbmMJ_JjuhiLf/flash_attn-2.6.3.tar.gz/build/temp.linux-x86_64-cpython-310/csrc/flash_attn/src/flash_bwd_hdim128_bf16_sm80.o.d -I/home/jw/store/.cache/uv/built-wheels-v3/pypi/flash-attn/2.6.3/jADvc_RdkbmMJ_JjuhiLf/flash_attn-2.6.3.tar.gz/csrc/flash_attn -I/home/jw/store/.cache/uv/built-wheels-v3/pypi/flash-attn/2.6.3/jADvc_RdkbmMJ_JjuhiLf/flash_attn-2.6.3.tar.gz/csrc/flash_attn/src -I/home/jw/store/.cache/uv/built-wheels-v3/pypi/flash-attn/2.6.3/jADvc_RdkbmMJ_JjuhiLf/flash_attn-2.6.3.tar.gz/csrc/cutlass/include -I/home/jw/store/src/models/.venv/lib/python3.10/site-packages/torch/include -I/home/jw/store/src/models/.venv/lib/python3.10/site-packages/torch/include/torch/csrc/api/include -I/home/jw/store/src/models/.venv/lib/python3.10/site-packages/torch/include/TH -I/home/jw/store/src/models/.venv/lib/python3.10/site-packages/torch/include/THC -I/opt/cuda/include -I/home/jw/store/src/models/.venv/include -I/home/jw/store/.local/share/uv/python/cpython-3.10.15-linux-x86_64-gnu/include/python3.10 -c -c /home/jw/store/.cache/uv/built-wheels-v3/pypi/flash-attn/2.6.3/jADvc_RdkbmMJ_JjuhiLf/flash_attn-2.6.3.tar.gz/csrc/flash_attn/src/flash_bwd_hdim128_bf16_sm80.cu -o /home/jw/store/.cache/uv/built-wheels-v3/pypi/flash-attn/2.6.3/jADvc_RdkbmMJ_JjuhiLf/flash_attn-2.6.3.tar.gz/build/temp.linux-x86_64-cpython-310/csrc/flash_attn/src/flash_bwd_hdim128_bf16_sm80.o -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_BFLOAT16_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr --compiler-options ''"'"'-fPIC'"'"'' -O3 -std=c++17 -U__CUDA_NO_HALF_OPERATORS__ -U__CUDA_NO_HALF_CONVERSIONS__ -U__CUDA_NO_HALF2_OPERATORS__ -U__CUDA_NO_BFLOAT16_CONVERSIONS__ --expt-relaxed-constexpr --expt-extended-lambda --use_fast_math -gencode arch=compute_80,code=sm_80 -gencode arch=compute_90,code=sm_90 --threads 4 
-DTORCH_API_INCLUDE_EXTENSION_H '-DPYBIND11_COMPILER_TYPE="_gcc"' '-DPYBIND11_STDLIB="_libstdcpp"' '-DPYBIND11_BUILD_ABI="_cxxabi1011"' -DTORCH_EXTENSION_NAME=flash_attn_2_cuda -D_GLIBCXX_USE_CXX11_ABI=0
/usr/include/c++/14.2.1/x86_64-pc-linux-gnu/bits/c++config.h(827): error: user-defined literal operator not found
typedef __decltype(0.0bf16) __bfloat16_t;
^
/usr/include/c++/14.2.1/type_traits(529): error: type name is not allowed
: public __bool_constant<__is_array(_Tp)>
^
/usr/include/c++/14.2.1/type_traits(529): error: identifier "__is_array" is undefined
: public __bool_constant<__is_array(_Tp)>
^
/usr/include/c++/14.2.1/type_traits(581): error: type name is not allowed
: public __bool_constant<__is_member_object_pointer(_Tp)>
^
/usr/include/c++/14.2.1/type_traits(581): error: identifier "__is_member_object_pointer" is undefined
: public __bool_constant<__is_member_object_pointer(_Tp)>
^
/usr/include/c++/14.2.1/type_traits(603): error: type name is not allowed
: public __bool_constant<__is_member_function_pointer(_Tp)>
^
/usr/include/c++/14.2.1/type_traits(603): error: identifier "__is_member_function_pointer" is undefined
: public __bool_constant<__is_member_function_pointer(_Tp)>
^
/usr/include/c++/14.2.1/type_traits(695): error: type name is not allowed
: public __bool_constant<__is_reference(_Tp)>
^
/usr/include/c++/14.2.1/type_traits(695): error: identifier "__is_reference" is undefined
: public __bool_constant<__is_reference(_Tp)>
^
/usr/include/c++/14.2.1/type_traits(731): error: type name is not allowed
: public __bool_constant<__is_object(_Tp)>
^
/usr/include/c++/14.2.1/type_traits(731): error: identifier "__is_object" is undefined
: public __bool_constant<__is_object(_Tp)>
^
/usr/include/c++/14.2.1/type_traits(760): error: type name is not allowed
: public __bool_constant<__is_member_pointer(_Tp)>
^
/usr/include/c++/14.2.1/type_traits(760): error: identifier "__is_member_pointer" is undefined
: public __bool_constant<__is_member_pointer(_Tp)>
^
/usr/include/c++/14.2.1/type_traits(3247): error: type name is not allowed
inline constexpr bool is_array_v = __is_array(_Tp);
^
/usr/include/c++/14.2.1/type_traits(3271): error: type name is not allowed
__is_member_object_pointer(_Tp);
^
/usr/include/c++/14.2.1/type_traits(3281): error: type name is not allowed
__is_member_function_pointer(_Tp);
^
/usr/include/c++/14.2.1/type_traits(3298): error: type name is not allowed
inline constexpr bool is_reference_v = __is_reference(_Tp);
^
/usr/include/c++/14.2.1/type_traits(3315): error: type name is not allowed
inline constexpr bool is_object_v = __is_object(_Tp);
^
/usr/include/c++/14.2.1/type_traits(3328): error: type name is not allowed
inline constexpr bool is_member_pointer_v = __is_member_pointer(_Tp);
^
/usr/include/c++/14.2.1/bits/utility.h(237): error: __type_pack_element is not a template
{ using type = __type_pack_element<_Np, _Types...>; };
^
/usr/include/c++/14.2.1/type_traits(138): error: class "std::enable_if<, void>" has no member "type"
using __enable_if_t = typename enable_if<_Cond, _Tp>::type;
^
detected during:
instantiation of type "std::__enable_if_t<, void>" at line 176
instantiation of "std::__detail::__or_fn" based on template arguments <std::is_reference<std::_Any_data>, std::is_function<std::_Any_data>, std::is_void<std::_Any_data>, std::__is_array_unknown_bounds<std::_Any_data>> at line 194
instantiation of class "std::__or_<_Bn...> [with _Bn=<std::is_reference<std::_Any_data>, std::is_function<std::_Any_data>, std::is_void<std::_Any_data>, std::__is_array_unknown_bounds<std::_Any_data>>]" at line 1171
instantiation of class "std::is_move_constructible<_Tp> [with _Tp=std::_Any_data]" at line 199
instantiation of class "std::__and_<_Bn...> [with _Bn=<std::__not_<std::__is_tuple_like<std::_Any_data>>, std::is_move_constructible<std::_Any_data>, std::is_move_assignable<std::_Any_data>>]" at line 558 of /usr/include/c++/14.2.1/bits/std_function.h
/opt/cuda/include/cuda/std/__type_traits/is_array.h(33): error: type name is not allowed
struct __attribute__((visibility("default"))) is_array : public integral_constant<bool, __is_array(_Tp)>
^
/opt/cuda/include/cuda/std/__type_traits/is_array.h(33): error: identifier "__is_array" is undefined
struct __attribute__((visibility("default"))) is_array : public integral_constant<bool, __is_array(_Tp)>
^
/opt/cuda/include/cuda/std/__type_traits/is_array.h(38): error: type name is not allowed
inline constexpr bool is_array_v = __is_array(_Tp);
^
/opt/cuda/include/cuda/std/__type_traits/is_object.h(34): error: type name is not allowed
struct __attribute__((visibility("default"))) is_object : public integral_constant<bool, __is_object(_Tp)>
^
/opt/cuda/include/cuda/std/__type_traits/is_object.h(34): error: identifier "__is_object" is undefined
struct __attribute__((visibility("default"))) is_object : public integral_constant<bool, __is_object(_Tp)>
^
/opt/cuda/include/cuda/std/__type_traits/is_object.h(39): error: type name is not allowed
inline constexpr bool is_object_v = __is_object(_Tp);
^
/opt/cuda/include/cuda/std/__tuple_dir/tuple_element.h(121): error: __type_pack_element is not a template
typedef __type_pack_element<_Ip, _Types...> type;
^
/opt/cuda/include/cuda/std/__tuple_dir/make_tuple_types.h(50): error: identifier "__type_pack_element" is undefined
__tuple_types<typename _ApplyFn::template __apply<__type_pack_element<_Idx, _Types...>>...>;
^
/opt/cuda/include/cuda/std/__tuple_dir/make_tuple_types.h(50): error: parameter pack "_Idx" was referenced but not expanded
__tuple_types<typename _ApplyFn::template __apply<__type_pack_element<_Idx, _Types...>>...>;
^
/opt/cuda/include/cuda/std/__tuple_dir/make_tuple_types.h(50): error: expected a ";"
__tuple_types<typename _ApplyFn::template __apply<__type_pack_element<_Idx, _Types...>>...>;
^
/opt/cuda/include/cuda/std/__type_traits/common_type.h(103): error: excessive recursion at instantiation of class "cuda::std::__4::common_type<double ********************************************************************************************************************************************************************************************************, double ********************************************************************************************************************************************************************************************************>"
struct __common_type2 : common_type<_D1, _D2>
^
detected during:
instantiation of class "cuda::std::__4::__common_type2<_Tp, _Up, _D1, _D2> [with _Tp=double *******************************************************************************************************************************************************************************************************, _Up=double *******************************************************************************************************************************************************************************************************, _D1=double ********************************************************************************************************************************************************************************************************, _D2=double ********************************************************************************************************************************************************************************************************]" at line 111
instantiation of class "cuda::std::__4::common_type<_Tp, _Up> [with _Tp=double *******************************************************************************************************************************************************************************************************, _Up=double *******************************************************************************************************************************************************************************************************]" at line 103
instantiation of class "cuda::std::__4::__common_type2<_Tp, _Up, _D1, _D2> [with _Tp=double ******************************************************************************************************************************************************************************************************, _Up=double ******************************************************************************************************************************************************************************************************, _D1=double *******************************************************************************************************************************************************************************************************, _D2=double *******************************************************************************************************************************************************************************************************]" at line 111
instantiation of class "cuda::std::__4::common_type<_Tp, _Up> [with _Tp=double ******************************************************************************************************************************************************************************************************, _Up=double ******************************************************************************************************************************************************************************************************]" at line 103
instantiation of class "cuda::std::__4::__common_type2<_Tp, _Up, _D1, _D2> [with _Tp=double *****************************************************************************************************************************************************************************************************, _Up=double *****************************************************************************************************************************************************************************************************, _D1=double ******************************************************************************************************************************************************************************************************, _D2=double ******************************************************************************************************************************************************************************************************]" at line 111
[ 390 instantiation contexts not shown ]
instantiation of class "cuda::std::__4::common_type<_Tp, _Up> [with _Tp=double **, _Up=double **]" at line 103
instantiation of class "cuda::std::__4::__common_type2<_Tp, _Up, _D1, _D2> [with _Tp=double *, _Up=double *, _D1=double **, _D2=double **]" at line 111
instantiation of class "cuda::std::__4::common_type<_Tp, _Up> [with _Tp=double *, _Up=double *]" at line 103
instantiation of class "cuda::std::__4::__common_type2<_Tp, _Up, _D1, _D2> [with _Tp=double, _Up=double, _D1=double *, _D2=double *]" at line 111
instantiation of class "cuda::std::__4::common_type<_Tp, _Up> [with _Tp=double, _Up=double]" at line 353 of /opt/cuda/include/thrust/detail/complex/catrig.h
/opt/cuda/include/cuda/std/__type_traits/common_type.h(103): error: incomplete type "cuda::std::__4::common_type<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4:
:__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__
4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>, 
cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_
t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__deca
y_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>" (aka "cuda::std::__4::common_type<double ********************************************************************************************************************************************************************************************************, double ********************************************************************************************************************************************************************************************************>") is not allowed
struct __common_type2 : common_type<_D1, _D2>
^
detected during:
instantiation of "cuda::std::__4::__common_type2<_Tp, _Up, _D1, _D2>" (line 111) alternating with "cuda::std::__4::common_type<_Tp, _Up>" (line 103), where _Tp/_Up/_D1/_D2 are "double" followed by ~197-200 '*'s at the deepest frames [pointer spellings condensed for readability]
[ 390 instantiation contexts not shown ]
instantiation of class "cuda::std::__4::common_type<_Tp, _Up> [with _Tp=double **, _Up=double **]" at line 103
instantiation of class "cuda::std::__4::__common_type2<_Tp, _Up, _D1, _D2> [with _Tp=double *, _Up=double *, _D1=double **, _D2=double **]" at line 111
instantiation of class "cuda::std::__4::common_type<_Tp, _Up> [with _Tp=double *, _Up=double *]" at line 103
instantiation of class "cuda::std::__4::__common_type2<_Tp, _Up, _D1, _D2> [with _Tp=double, _Up=double, _D1=double *, _D2=double *]" at line 111
instantiation of class "cuda::std::__4::common_type<_Tp, _Up> [with _Tp=double, _Up=double]" at line 353 of /opt/cuda/include/thrust/detail/complex/catrig.h
/opt/cuda/include/thrust/detail/complex/catrig.h(353): error: no operator "+" matches these operands
operand types are: thrust::THRUST_200500_800_900_NS::complex<double> + const double
w = clog_for_large_values(z) + m_ln2;
^
/opt/cuda/include/thrust/detail/complex/arithmetic.h(137): note #3322-D: number of parameters of function template "thrust::THRUST_200500_800_900_NS::operator+(const thrust::THRUST_200500_800_900_NS::complex<T0> &)" does not match the call
complex<T0> operator+(const complex<T0>& y)
^
/opt/cuda/include/thrust/detail/complex/arithmetic.h(47): note #3327-D: candidate function template "thrust::THRUST_200500_800_900_NS::operator+(const T0 &, const thrust::THRUST_200500_800_900_NS::complex<T1> &)" failed deduction
complex<::cuda::std::__common_type_t<T0, T1>> operator+(const T0& x, const complex<T1>& y)
^
/opt/cuda/include/thrust/detail/complex/arithmetic.h(40): note #3327-D: candidate function template "thrust::THRUST_200500_800_900_NS::operator+(const thrust::THRUST_200500_800_900_NS::complex<T0> &, const T1 &)" failed deduction
complex<::cuda::std::__common_type_t<T0, T1>> operator+(const complex<T0>& x, const T1& y)
^
/opt/cuda/include/thrust/detail/complex/arithmetic.h(33): note #3327-D: candidate function template "thrust::THRUST_200500_800_900_NS::operator+(const thrust::THRUST_200500_800_900_NS::complex<T0> &, const thrust::THRUST_200500_800_900_NS::complex<T1> &)" failed deduction
complex<::cuda::std::__common_type_t<T0, T1>> operator+(const complex<T0>& x, const complex<T1>& y)
^
/opt/cuda/include/thrust/detail/complex/catrig.h(353): note #3328-D: built-in operator+(<arithmetic>, <arithmetic>) does not match because argument #1 does not match parameter
w = clog_for_large_values(z) + m_ln2;
^
/opt/cuda/include/thrust/detail/complex/catrig.h(353): note #3328-D: built-in operator+(<pointer>, <ptrdiff_t>) does not match because argument #1 does not match parameter
w = clog_for_large_values(z) + m_ln2;
^
/opt/cuda/include/thrust/detail/complex/catrig.h(353): note #3328-D: built-in operator+(<ptrdiff_t>, <pointer>) does not match because argument #1 does not match parameter
w = clog_for_large_values(z) + m_ln2;
^
/opt/cuda/include/thrust/detail/complex/catrig.h(357): error: no operator "+" matches these operands
operand types are: thrust::THRUST_200500_800_900_NS::complex<double> + const double
w = clog_for_large_values(-z) + m_ln2;
^
/opt/cuda/include/thrust/detail/complex/arithmetic.h(137): note #3322-D: number of parameters of function template "thrust::THRUST_200500_800_900_NS::operator+(const thrust::THRUST_200500_800_900_NS::complex<T0> &)" does not match the call
complex<T0> operator+(const complex<T0>& y)
^
/opt/cuda/include/thrust/detail/complex/arithmetic.h(47): note #3327-D: candidate function template "thrust::THRUST_200500_800_900_NS::operator+(const T0 &, const thrust::THRUST_200500_800_900_NS::complex<T1> &)" failed deduction
complex<::cuda::std::__common_type_t<T0, T1>> operator+(const T0& x, const complex<T1>& y)
^
/opt/cuda/include/thrust/detail/complex/arithmetic.h(40): note #3327-D: candidate function template "thrust::THRUST_200500_800_900_NS::operator+(const thrust::THRUST_200500_800_900_NS::complex<T0> &, const T1 &)" failed deduction
complex<::cuda::std::__common_type_t<T0, T1>> operator+(const complex<T0>& x, const T1& y)
^
/opt/cuda/include/thrust/detail/complex/arithmetic.h(33): note #3327-D: candidate function template "thrust::THRUST_200500_800_900_NS::operator+(const thrust::THRUST_200500_800_900_NS::complex<T0> &, const thrust::THRUST_200500_800_900_NS::complex<T1> &)" failed deduction
complex<::cuda::std::__common_type_t<T0, T1>> operator+(const complex<T0>& x, const complex<T1>& y)
^
/opt/cuda/include/thrust/detail/complex/catrig.h(357): note #3328-D: built-in operator+(<arithmetic>, <arithmetic>) does not match because argument #1 does not match parameter
w = clog_for_large_values(-z) + m_ln2;
^
/opt/cuda/include/thrust/detail/complex/catrig.h(357): note #3328-D: built-in operator+(<pointer>, <ptrdiff_t>) does not match because argument #1 does not match parameter
w = clog_for_large_values(-z) + m_ln2;
^
/opt/cuda/include/thrust/detail/complex/catrig.h(357): note #3328-D: built-in operator+(<ptrdiff_t>, <pointer>) does not match because argument #1 does not match parameter
w = clog_for_large_values(-z) + m_ln2;
^
/opt/cuda/include/cuda/std/__type_traits/common_type.h(103): error: excessive recursion at instantiation of class "cuda::std::__4::common_type<float *[~200 levels of pointer indirection], float *[~200 levels of pointer indirection]>"
struct __common_type2 : common_type<_D1, _D2>
^
detected during:
instantiation of "cuda::std::__4::__common_type2<_Tp, _Up, _D1, _D2>" (line 111) alternating with "cuda::std::__4::common_type<_Tp, _Up>" (line 103), where _Tp/_Up/_D1/_D2 are "float" followed by ~197-200 '*'s at the deepest frames [pointer spellings condensed for readability]
[ 390 instantiation contexts not shown ]
instantiation of class "cuda::std::__4::common_type<_Tp, _Up> [with _Tp=float **, _Up=float **]" at line 103
instantiation of class "cuda::std::__4::__common_type2<_Tp, _Up, _D1, _D2> [with _Tp=float *, _Up=float *, _D1=float **, _D2=float **]" at line 111
instantiation of class "cuda::std::__4::common_type<_Tp, _Up> [with _Tp=float *, _Up=float *]" at line 103
instantiation of class "cuda::std::__4::__common_type2<_Tp, _Up, _D1, _D2> [with _Tp=float, _Up=float, _D1=float *, _D2=float *]" at line 111
instantiation of class "cuda::std::__4::common_type<_Tp, _Up> [with _Tp=float, _Up=float]" at line 219 of /opt/cuda/include/thrust/detail/complex/catrigf.h
/opt/cuda/include/cuda/std/__type_traits/common_type.h(103): error: incomplete type "cuda::std::__4::common_type<cuda::std::__4::__decay_t<… [several hundred nested __decay_t<…> levels elided for readability] …>>" (aka "cuda::std::__4::common_type<float *[~200 levels of pointer indirection], float *[~200 levels of pointer indirection]>") is not allowed
struct __common_type2 : common_type<_D1, _D2>
^
detected during:
instantiation of "cuda::std::__4::__common_type2<_Tp, _Up, _D1, _D2>" (line 111) alternating with "cuda::std::__4::common_type<_Tp, _Up>" (line 103), where _Tp/_Up/_D1/_D2 are "float" followed by ~197-200 '*'s at the deepest frames [pointer spellings condensed for readability]
[ 390 instantiation contexts not shown ]
instantiation of class "cuda::std::__4::common_type<_Tp, _Up> [with _Tp=float **, _Up=float **]" at line 103
instantiation of class "cuda::std::__4::__common_type2<_Tp, _Up, _D1, _D2> [with _Tp=float *, _Up=float *, _D1=float **, _D2=float **]" at line 111
instantiation of class "cuda::std::__4::common_type<_Tp, _Up> [with _Tp=float *, _Up=float *]" at line 103
instantiation of class "cuda::std::__4::__common_type2<_Tp, _Up, _D1, _D2> [with _Tp=float, _Up=float, _D1=float *, _D2=float *]" at line 111
instantiation of class "cuda::std::__4::common_type<_Tp, _Up> [with _Tp=float, _Up=float]" at line 219 of /opt/cuda/include/thrust/detail/complex/catrigf.h
/opt/cuda/include/thrust/detail/complex/catrigf.h(219): error: no operator "+" matches these operands
operand types are: thrust::THRUST_200500_800_900_NS::complex<float> + const float
w = clog_for_large_values(z) + m_ln2;
^
/opt/cuda/include/thrust/detail/complex/arithmetic.h(137): note #3322-D: number of parameters of function template "thrust::THRUST_200500_800_900_NS::operator+(const thrust::THRUST_200500_800_900_NS::complex<T0> &)" does not match the call
complex<T0> operator+(const complex<T0>& y)
^
/opt/cuda/include/thrust/detail/complex/arithmetic.h(47): note #3327-D: candidate function template "thrust::THRUST_200500_800_900_NS::operator+(const T0 &, const thrust::THRUST_200500_800_900_NS::complex<T1> &)" failed deduction
complex<::cuda::std::__common_type_t<T0, T1>> operator+(const T0& x, const complex<T1>& y)
^
/opt/cuda/include/thrust/detail/complex/arithmetic.h(40): note #3327-D: candidate function template "thrust::THRUST_200500_800_900_NS::operator+(const thrust::THRUST_200500_800_900_NS::complex<T0> &, const T1 &)" failed deduction
complex<::cuda::std::__common_type_t<T0, T1>> operator+(const complex<T0>& x, const T1& y)
^
/opt/cuda/include/thrust/detail/complex/arithmetic.h(33): note #3327-D: candidate function template "thrust::THRUST_200500_800_900_NS::operator+(const thrust::THRUST_200500_800_900_NS::complex<T0> &, const thrust::THRUST_200500_800_900_NS::complex<T1> &)" failed deduction
complex<::cuda::std::__common_type_t<T0, T1>> operator+(const complex<T0>& x, const complex<T1>& y)
^
/opt/cuda/include/thrust/detail/complex/catrigf.h(219): note #3328-D: built-in operator+(<arithmetic>, <arithmetic>) does not match because argument #1 does not match parameter
w = clog_for_large_values(z) + m_ln2;
^
/opt/cuda/include/thrust/detail/complex/catrigf.h(219): note #3328-D: built-in operator+(<pointer>, <ptrdiff_t>) does not match because argument #1 does not match parameter
w = clog_for_large_values(z) + m_ln2;
^
/opt/cuda/include/thrust/detail/complex/catrigf.h(219): note #3328-D: built-in operator+(<ptrdiff_t>, <pointer>) does not match because argument #1 does not match parameter
w = clog_for_large_values(z) + m_ln2;
^
/opt/cuda/include/thrust/detail/complex/catrigf.h(223): error: no operator "+" matches these operands
operand types are: thrust::THRUST_200500_800_900_NS::complex<float> + const float
w = clog_for_large_values(-z) + m_ln2;
^
/opt/cuda/include/thrust/detail/complex/arithmetic.h(137): note #3322-D: number of parameters of function template "thrust::THRUST_200500_800_900_NS::operator+(const thrust::THRUST_200500_800_900_NS::complex<T0> &)" does not match the call
complex<T0> operator+(const complex<T0>& y)
^
/opt/cuda/include/thrust/detail/complex/arithmetic.h(47): note #3327-D: candidate function template "thrust::THRUST_200500_800_900_NS::operator+(const T0 &, const thrust::THRUST_200500_800_900_NS::complex<T1> &)" failed deduction
complex<::cuda::std::__common_type_t<T0, T1>> operator+(const T0& x, const complex<T1>& y)
^
/opt/cuda/include/thrust/detail/complex/arithmetic.h(40): note #3327-D: candidate function template "thrust::THRUST_200500_800_900_NS::operator+(const thrust::THRUST_200500_800_900_NS::complex<T0> &, const T1 &)" failed deduction
complex<::cuda::std::__common_type_t<T0, T1>> operator+(const complex<T0>& x, const T1& y)
^
/opt/cuda/include/thrust/detail/complex/arithmetic.h(33): note #3327-D: candidate function template "thrust::THRUST_200500_800_900_NS::operator+(const thrust::THRUST_200500_800_900_NS::complex<T0> &, const thrust::THRUST_200500_800_900_NS::complex<T1> &)" failed deduction
complex<::cuda::std::__common_type_t<T0, T1>> operator+(const complex<T0>& x, const complex<T1>& y)
^
/opt/cuda/include/thrust/detail/complex/catrigf.h(223): note #3328-D: built-in operator+(<arithmetic>, <arithmetic>) does not match because argument #1 does not match parameter
w = clog_for_large_values(-z) + m_ln2;
^
/opt/cuda/include/thrust/detail/complex/catrigf.h(223): note #3328-D: built-in operator+(<pointer-to-object>, <ptrdiff_t>) does not match because argument #1 does not match parameter
w = clog_for_large_values(-z) + m_ln2;
^
/opt/cuda/include/thrust/detail/complex/catrigf.h(223): note #3328-D: built-in operator+(<ptrdiff_t>, <pointer-to-object>) does not match because argument #1 does not match parameter
w = clog_for_large_values(-z) + m_ln2;
^
/opt/cuda/include/thrust/detail/complex/csqrt.h(149): error: no operator "*" matches these operands
operand types are: thrust::THRUST_200500_800_900_NS::complex<double> * double
return (result * 2.0);
^
/opt/cuda/include/thrust/detail/complex/arithmetic.h(89): note #3327-D: candidate function template "thrust::THRUST_200500_800_900_NS::operator*(const T0 &, const thrust::THRUST_200500_800_900_NS::complex<T1> &)" failed deduction
complex<::cuda::std::__common_type_t<T0, T1>> operator*(const T0& x, const complex<T1>& y)
^
/opt/cuda/include/thrust/detail/complex/arithmetic.h(82): note #3327-D: candidate function template "thrust::THRUST_200500_800_900_NS::operator*(const thrust::THRUST_200500_800_900_NS::complex<T0> &, const T1 &)" failed deduction
complex<::cuda::std::__common_type_t<T0, T1>> operator*(const complex<T0>& x, const T1& y)
^
/opt/cuda/include/thrust/detail/complex/arithmetic.h(75): note #3327-D: candidate function template "thrust::THRUST_200500_800_900_NS::operator*(const thrust::THRUST_200500_800_900_NS::complex<T0> &, const thrust::THRUST_200500_800_900_NS::complex<T1> &)" failed deduction
complex<::cuda::std::__common_type_t<T0, T1>> operator*(const complex<T0>& x, const complex<T1>& y)
^
/opt/cuda/include/thrust/detail/complex/csqrt.h(149): note #3328-D: built-in operator*(<arithmetic>, <arithmetic>) does not match because argument #1 does not match parameter
return (result * 2.0);
^
/opt/cuda/include/thrust/detail/complex/csqrt.h(153): error: no operator "*" matches these operands
operand types are: thrust::THRUST_200500_800_900_NS::complex<double> * double
return (result * 0.5);
^
/opt/cuda/include/thrust/detail/complex/arithmetic.h(89): note #3327-D: candidate function template "thrust::THRUST_200500_800_900_NS::operator*(const T0 &, const thrust::THRUST_200500_800_900_NS::complex<T1> &)" failed deduction
complex<::cuda::std::__common_type_t<T0, T1>> operator*(const T0& x, const complex<T1>& y)
^
/opt/cuda/include/thrust/detail/complex/arithmetic.h(82): note #3327-D: candidate function template "thrust::THRUST_200500_800_900_NS::operator*(const thrust::THRUST_200500_800_900_NS::complex<T0> &, const T1 &)" failed deduction
complex<::cuda::std::__common_type_t<T0, T1>> operator*(const complex<T0>& x, const T1& y)
^
/opt/cuda/include/thrust/detail/complex/arithmetic.h(75): note #3327-D: candidate function template "thrust::THRUST_200500_800_900_NS::operator*(const thrust::THRUST_200500_800_900_NS::complex<T0> &, const thrust::THRUST_200500_800_900_NS::complex<T1> &)" failed deduction
complex<::cuda::std::__common_type_t<T0, T1>> operator*(const complex<T0>& x, const complex<T1>& y)
^
/opt/cuda/include/thrust/detail/complex/csqrt.h(153): note #3328-D: built-in operator*(<arithmetic>, <arithmetic>) does not match because argument #1 does not match parameter
return (result * 0.5);
^
/opt/cuda/include/thrust/detail/complex/csqrtf.h(151): error: no operator "*" matches these operands
operand types are: thrust::THRUST_200500_800_900_NS::complex<float> * float
return (result * 2.0f);
^
/opt/cuda/include/thrust/detail/complex/arithmetic.h(89): note #3327-D: candidate function template "thrust::THRUST_200500_800_900_NS::operator*(const T0 &, const thrust::THRUST_200500_800_900_NS::complex<T1> &)" failed deduction
complex<::cuda::std::__common_type_t<T0, T1>> operator*(const T0& x, const complex<T1>& y)
^
/opt/cuda/include/thrust/detail/complex/arithmetic.h(82): note #3327-D: candidate function template "thrust::THRUST_200500_800_900_NS::operator*(const thrust::THRUST_200500_800_900_NS::complex<T0> &, const T1 &)" failed deduction
complex<::cuda::std::__common_type_t<T0, T1>> operator*(const complex<T0>& x, const T1& y)
^
/opt/cuda/include/thrust/detail/complex/arithmetic.h(75): note #3327-D: candidate function template "thrust::THRUST_200500_800_900_NS::operator*(const thrust::THRUST_200500_800_900_NS::complex<T0> &, const thrust::THRUST_200500_800_900_NS::complex<T1> &)" failed deduction
complex<::cuda::std::__common_type_t<T0, T1>> operator*(const complex<T0>& x, const complex<T1>& y)
^
/opt/cuda/include/thrust/detail/complex/csqrtf.h(151): note #3328-D: built-in operator*(<arithmetic>, <arithmetic>) does not match because argument #1 does not match parameter
return (result * 2.0f);
^
/opt/cuda/include/thrust/detail/complex/csqrtf.h(155): error: no operator "*" matches these operands
operand types are: thrust::THRUST_200500_800_900_NS::complex<float> * float
return (result * 0.5f);
^
/opt/cuda/include/thrust/detail/complex/arithmetic.h(89): note #3327-D: candidate function template "thrust::THRUST_200500_800_900_NS::operator*(const T0 &, const thrust::THRUST_200500_800_900_NS::complex<T1> &)" failed deduction
complex<::cuda::std::__common_type_t<T0, T1>> operator*(const T0& x, const complex<T1>& y)
^
/opt/cuda/include/thrust/detail/complex/arithmetic.h(82): note #3327-D: candidate function template "thrust::THRUST_200500_800_900_NS::operator*(const thrust::THRUST_200500_800_900_NS::complex<T0> &, const T1 &)" failed deduction
complex<::cuda::std::__common_type_t<T0, T1>> operator*(const complex<T0>& x, const T1& y)
^
/opt/cuda/include/thrust/detail/complex/arithmetic.h(75): note #3327-D: candidate function template "thrust::THRUST_200500_800_900_NS::operator*(const thrust::THRUST_200500_800_900_NS::complex<T0> &, const thrust::THRUST_200500_800_900_NS::complex<T1> &)" failed deduction
complex<::cuda::std::__common_type_t<T0, T1>> operator*(const complex<T0>& x, const complex<T1>& y)
^
/opt/cuda/include/thrust/detail/complex/csqrtf.h(155): note #3328-D: built-in operator*(<arithmetic>, <arithmetic>) does not match because argument #1 does not match parameter
return (result * 0.5f);
^
/home/jw/store/src/models/.venv/lib/python3.10/site-packages/torch/include/c10/core/TensorImpl.h(1928): error: no instance of constructor "c10::Warning::Warning" matches the argument list
argument types are: (c10::UserWarning, {...}, std::string, bool)
if (::c10::WarningUtils::get_warnAlways()) { ::c10::warn(::c10::Warning( ::c10::UserWarning(), {__func__, "/home/jw/store/src/models/.venv/lib/python3.10/site-packages/torch/include/c10/core/TensorImpl.h", static_cast<uint32_t>(1928)}, ::c10::str("Named tensors and all their associated APIs are an experimental feature ", "and subject to change. Please do not use them for anything important ", "until they are released as stable."), false));;; } else { __attribute__((unused)) static const auto torch_warn_once_0 = [&] { ::c10::warn(::c10::Warning( ::c10::UserWarning(), {__func__, "/home/jw/store/src/models/.venv/lib/python3.10/site-packages/torch/include/c10/core/TensorImpl.h", static_cast<uint32_t>(1928)}, ::c10::str("Named tensors and all their associated APIs are an experimental feature ", "and subject to change. Please do not use them for anything important ", "until they are released as stable."), false));;; return true; }(); }
^
/home/jw/store/src/models/.venv/lib/python3.10/site-packages/torch/include/c10/util/Exception.h(117): note #3322-D: number of parameters of function "c10::Warning::Warning(const c10::Warning &)" does not match the call
class __attribute__((visibility("default"))) Warning {
^
/home/jw/store/src/models/.venv/lib/python3.10/site-packages/torch/include/c10/util/Exception.h(117): note #3322-D: number of parameters of function "c10::Warning::Warning(c10::Warning &&)" does not match the call
class __attribute__((visibility("default"))) Warning {
^
/home/jw/store/src/models/.venv/lib/python3.10/site-packages/torch/include/c10/util/Exception.h(136): note #3326-D: function "c10::Warning::Warning(c10::Warning::warning_variant_t, c10::SourceLocation, c10::detail::CompileTimeEmptyString, bool)" does not match because argument #1 does not match parameter
Warning(
^
/home/jw/store/src/models/.venv/lib/python3.10/site-packages/torch/include/c10/util/Exception.h(130): note #3326-D: function "c10::Warning::Warning(c10::Warning::warning_variant_t, c10::SourceLocation, const char *, bool)" does not match because argument #1 does not match parameter
Warning(
^
/home/jw/store/src/models/.venv/lib/python3.10/site-packages/torch/include/c10/util/Exception.h(124): note #3326-D: function "c10::Warning::Warning(c10::Warning::warning_variant_t, const c10::SourceLocation &, std::string, bool)" does not match because argument #1 does not match parameter
Warning(
^
/home/jw/store/src/models/.venv/lib/python3.10/site-packages/torch/include/c10/core/TensorImpl.h(1928): error: no instance of constructor "c10::Warning::Warning" matches the argument list
argument types are: (c10::UserWarning, {...}, std::string, bool)
if (::c10::WarningUtils::get_warnAlways()) { ::c10::warn(::c10::Warning( ::c10::UserWarning(), {__func__, "/home/jw/store/src/models/.venv/lib/python3.10/site-packages/torch/include/c10/core/TensorImpl.h", static_cast<uint32_t>(1928)}, ::c10::str("Named tensors and all their associated APIs are an experimental feature ", "and subject to change. Please do not use them for anything important ", "until they are released as stable."), false));;; } else { __attribute__((unused)) static const auto torch_warn_once_0 = [&] { ::c10::warn(::c10::Warning( ::c10::UserWarning(), {__func__, "/home/jw/store/src/models/.venv/lib/python3.10/site-packages/torch/include/c10/core/TensorImpl.h", static_cast<uint32_t>(1928)}, ::c10::str("Named tensors and all their associated APIs are an experimental feature ", "and subject to change. Please do not use them for anything important ", "until they are released as stable."), false));;; return true; }(); }
^
/home/jw/store/src/models/.venv/lib/python3.10/site-packages/torch/include/c10/util/Exception.h(117): note #3322-D: number of parameters of function "c10::Warning::Warning(const c10::Warning &)" does not match the call
class __attribute__((visibility("default"))) Warning {
^
/home/jw/store/src/models/.venv/lib/python3.10/site-packages/torch/include/c10/util/Exception.h(117): note #3322-D: number of parameters of function "c10::Warning::Warning(c10::Warning &&)" does not match the call
class __attribute__((visibility("default"))) Warning {
^
/home/jw/store/src/models/.venv/lib/python3.10/site-packages/torch/include/c10/util/Exception.h(136): note #3326-D: function "c10::Warning::Warning(c10::Warning::warning_variant_t, c10::SourceLocation, c10::detail::CompileTimeEmptyString, bool)" does not match because argument #1 does not match parameter
Warning(
^
/home/jw/store/src/models/.venv/lib/python3.10/site-packages/torch/include/c10/util/Exception.h(130): note #3326-D: function "c10::Warning::Warning(c10::Warning::warning_variant_t, c10::SourceLocation, const char *, bool)" does not match because argument #1 does not match parameter
Warning(
^
/home/jw/store/src/models/.venv/lib/python3.10/site-packages/torch/include/c10/util/Exception.h(124): note #3326-D: function "c10::Warning::Warning(c10::Warning::warning_variant_t, const c10::SourceLocation &, std::string, bool)" does not match because argument #1 does not match parameter
Warning(
^
/usr/include/c++/14.2.1/type_traits(3155): error: class "std::invoke_result<lambda []()->c10::ScalarType>" has no member "type"
using invoke_result_t = typename invoke_result<_Fn, _Args...>::type;
^
detected during:
instantiation of type "std::invoke_result_t<lambda []()->c10::ScalarType>" at line 34 of /home/jw/store/src/models/.venv/lib/python3.10/site-packages/torch/include/c10/util/Optional.h
instantiation of "T c10::value_or_else(const std::optional<T> &, F &&) [with T=c10::ScalarType, F=lambda []()->c10::ScalarType]" at line 32 of /home/jw/store/src/models/.venv/lib/python3.10/site-packages/torch/include/c10/core/TensorOptions.h
/usr/include/c++/14.2.1/type_traits(3155): error: class "std::invoke_result<lambda []()->caffe2::TypeMeta>" has no member "type"
using invoke_result_t = typename invoke_result<_Fn, _Args...>::type;
^
detected during:
instantiation of type "std::invoke_result_t<lambda []()->caffe2::TypeMeta>" at line 34 of /home/jw/store/src/models/.venv/lib/python3.10/site-packages/torch/include/c10/util/Optional.h
instantiation of "T c10::value_or_else(const std::optional<T> &, F &&) [with T=caffe2::TypeMeta, F=lambda []()->caffe2::TypeMeta]" at line 37 of /home/jw/store/src/models/.venv/lib/python3.10/site-packages/torch/include/c10/core/TensorOptions.h
/usr/include/c++/14.2.1/type_traits(3155): error: class "std::invoke_result<lambda []()->c10::Device>" has no member "type"
using invoke_result_t = typename invoke_result<_Fn, _Args...>::type;
^
detected during:
instantiation of type "std::invoke_result_t<lambda []()->c10::Device>" at line 34 of /home/jw/store/src/models/.venv/lib/python3.10/site-packages/torch/include/c10/util/Optional.h
instantiation of "T c10::value_or_else(const std::optional<T> &, F &&) [with T=c10::Device, F=lambda []()->c10::Device]" at line 45 of /home/jw/store/src/models/.venv/lib/python3.10/site-packages/torch/include/c10/core/TensorOptions.h
/home/jw/store/src/models/.venv/lib/python3.10/site-packages/torch/include/ATen/core/LegacyTypeDispatch.h(74): error: no instance of constructor "c10::Warning::Warning" matches the argument list
argument types are: (c10::UserWarning, {...}, const char *, bool)
if (::c10::WarningUtils::get_warnAlways()) { ::c10::warn(::c10::Warning( ::c10::UserWarning(), {__func__, "/home/jw/store/src/models/.venv/lib/python3.10/site-packages/torch/include/ATen/core/LegacyTypeDispatch.h", static_cast<uint32_t>(74)}, ::c10::str("AutoNonVariableTypeMode is deprecated and will be removed in 1.10 release. " "For kernel implementations please use AutoDispatchBelowADInplaceOrView instead, " "If you are looking for a user facing API to enable running your inference-only " "workload, please use c10::InferenceMode. Using AutoDispatchBelowADInplaceOrView in user code " "is under risk of producing silent wrong result in some edge cases. " "See Note [AutoDispatchBelowAutograd] for more details."), false));;; } else { __attribute__((unused)) static const auto torch_warn_once_1 = [&] { ::c10::warn(::c10::Warning( ::c10::UserWarning(), {__func__, "/home/jw/store/src/models/.venv/lib/python3.10/site-packages/torch/include/ATen/core/LegacyTypeDispatch.h", static_cast<uint32_t>(74)}, ::c10::str("AutoNonVariableTypeMode is deprecated and will be removed in 1.10 release. " "For kernel implementations please use AutoDispatchBelowADInplaceOrView instead, " "If you are looking for a user facing API to enable running your inference-only " "workload, please use c10::InferenceMode. Using AutoDispatchBelowADInplaceOrView in user code " "is under risk of producing silent wrong result in some edge cases. " "See Note [AutoDispatchBelowAutograd] for more details."), false));;; return true; }(); }
^
/home/jw/store/src/models/.venv/lib/python3.10/site-packages/torch/include/c10/util/Exception.h(117): note #3322-D: number of parameters of function "c10::Warning::Warning(const c10::Warning &)" does not match the call
class __attribute__((visibility("default"))) Warning {
^
/home/jw/store/src/models/.venv/lib/python3.10/site-packages/torch/include/c10/util/Exception.h(117): note #3322-D: number of parameters of function "c10::Warning::Warning(c10::Warning &&)" does not match the call
class __attribute__((visibility("default"))) Warning {
^
/home/jw/store/src/models/.venv/lib/python3.10/site-packages/torch/include/c10/util/Exception.h(136): note #3326-D: function "c10::Warning::Warning(c10::Warning::warning_variant_t, c10::SourceLocation, c10::detail::CompileTimeEmptyString, bool)" does not match because argument #1 does not match parameter
Warning(
^
/home/jw/store/src/models/.venv/lib/python3.10/site-packages/torch/include/c10/util/Exception.h(130): note #3326-D: function "c10::Warning::Warning(c10::Warning::warning_variant_t, c10::SourceLocation, const char *, bool)" does not match because argument #1 does not match parameter
Warning(
^
/home/jw/store/src/models/.venv/lib/python3.10/site-packages/torch/include/c10/util/Exception.h(124): note #3326-D: function "c10::Warning::Warning(c10::Warning::warning_variant_t, const c10::SourceLocation &, std::string, bool)" does not match because argument #1 does not match parameter
Warning(
^
/home/jw/store/src/models/.venv/lib/python3.10/site-packages/torch/include/ATen/core/LegacyTypeDispatch.h(74): error: no instance of constructor "c10::Warning::Warning" matches the argument list
argument types are: (c10::UserWarning, {...}, const char *, bool)
if (::c10::WarningUtils::get_warnAlways()) { ::c10::warn(::c10::Warning( ::c10::UserWarning(), {__func__, "/home/jw/store/src/models/.venv/lib/python3.10/site-packages/torch/include/ATen/core/LegacyTypeDispatch.h", static_cast<uint32_t>(74)}, ::c10::str("AutoNonVariableTypeMode is deprecated and will be removed in 1.10 release. " "For kernel implementations please use AutoDispatchBelowADInplaceOrView instead, " "If you are looking for a user facing API to enable running your inference-only " "workload, please use c10::InferenceMode. Using AutoDispatchBelowADInplaceOrView in user code " "is under risk of producing silent wrong result in some edge cases. " "See Note [AutoDispatchBelowAutograd] for more details."), false));;; } else { __attribute__((unused)) static const auto torch_warn_once_1 = [&] { ::c10::warn(::c10::Warning( ::c10::UserWarning(), {__func__, "/home/jw/store/src/models/.venv/lib/python3.10/site-packages/torch/include/ATen/core/LegacyTypeDispatch.h", static_cast<uint32_t>(74)}, ::c10::str("AutoNonVariableTypeMode is deprecated and will be removed in 1.10 release. " "For kernel implementations please use AutoDispatchBelowADInplaceOrView instead, " "If you are looking for a user facing API to enable running your inference-only " "workload, please use c10::InferenceMode. Using AutoDispatchBelowADInplaceOrView in user code " "is under risk of producing silent wrong result in some edge cases. " "See Note [AutoDispatchBelowAutograd] for more details."), false));;; return true; }(); }
^
/home/jw/store/src/models/.venv/lib/python3.10/site-packages/torch/include/c10/util/Exception.h(117): note #3322-D: number of parameters of function "c10::Warning::Warning(const c10::Warning &)" does not match the call
class __attribute__((visibility("default"))) Warning {
^
/home/jw/store/src/models/.venv/lib/python3.10/site-packages/torch/include/c10/util/Exception.h(117): note #3322-D: number of parameters of function "c10::Warning::Warning(c10::Warning &&)" does not match the call
class __attribute__((visibility("default"))) Warning {
^
/home/jw/store/src/models/.venv/lib/python3.10/site-packages/torch/include/c10/util/Exception.h(136): note #3326-D: function "c10::Warning::Warning(c10::Warning::warning_variant_t, c10::SourceLocation, c10::detail::CompileTimeEmptyString, bool)" does not match because argument #1 does not match parameter
Warning(
^
/home/jw/store/src/models/.venv/lib/python3.10/site-packages/torch/include/c10/util/Exception.h(130): note #3326-D: function "c10::Warning::Warning(c10::Warning::warning_variant_t, c10::SourceLocation, const char *, bool)" does not match because argument #1 does not match parameter
Warning(
^
/home/jw/store/src/models/.venv/lib/python3.10/site-packages/torch/include/c10/util/Exception.h(124): note #3326-D: function "c10::Warning::Warning(c10::Warning::warning_variant_t, const c10::SourceLocation &, std::string, bool)" does not match because argument #1 does not match parameter
Warning(
^
/home/jw/store/src/models/.venv/lib/python3.10/site-packages/torch/include/c10/util/env.h(32): error: no instance of constructor "c10::Warning::Warning" matches the argument list
argument types are: (c10::UserWarning, {...}, std::string, bool)
::c10::warn(::c10::Warning( ::c10::UserWarning(), {__func__, "/home/jw/store/src/models/.venv/lib/python3.10/site-packages/torch/include/c10/util/env.h", static_cast<uint32_t>(32)}, ::c10::str("Ignoring invalid value for boolean flag ", name, ": ", envar, "valid values are 0 or 1."), false));;
^
/home/jw/store/src/models/.venv/lib/python3.10/site-packages/torch/include/c10/util/Exception.h(117): note #3322-D: number of parameters of function "c10::Warning::Warning(const c10::Warning &)" does not match the call
class __attribute__((visibility("default"))) Warning {
^
/home/jw/store/src/models/.venv/lib/python3.10/site-packages/torch/include/c10/util/Exception.h(117): note #3322-D: number of parameters of function "c10::Warning::Warning(c10::Warning &&)" does not match the call
class __attribute__((visibility("default"))) Warning {
^
/home/jw/store/src/models/.venv/lib/python3.10/site-packages/torch/include/c10/util/Exception.h(136): note #3326-D: function "c10::Warning::Warning(c10::Warning::warning_variant_t, c10::SourceLocation, c10::detail::CompileTimeEmptyString, bool)" does not match because argument #1 does not match parameter
Warning(
^
/home/jw/store/src/models/.venv/lib/python3.10/site-packages/torch/include/c10/util/Exception.h(130): note #3326-D: function "c10::Warning::Warning(c10::Warning::warning_variant_t, c10::SourceLocation, const char *, bool)" does not match because argument #1 does not match parameter
Warning(
^
/home/jw/store/src/models/.venv/lib/python3.10/site-packages/torch/include/c10/util/Exception.h(124): note #3326-D: function "c10::Warning::Warning(c10::Warning::warning_variant_t, const c10::SourceLocation &, std::string, bool)" does not match because argument #1 does not match parameter
Warning(
^
/home/jw/store/src/models/.venv/lib/python3.10/site-packages/torch/include/ATen/core/NamedTensor.h(40): error: no suitable user-defined conversion from "std::__detail::__unique_ptr_t<at::NamedTensorMeta>" (aka "std::unique_ptr<at::NamedTensorMeta, std::default_delete<at::NamedTensorMeta>>") to "std::unique_ptr<c10::NamedTensorMetaInterface, std::default_delete<c10::NamedTensorMetaInterface>>" exists
return std::make_unique<NamedTensorMeta>(HasNonWildcard, names);
^
/home/jw/store/src/models/.venv/lib/python3.10/site-packages/torch/include/c10/cuda/CUDAGraphsC10Utils.h(26): error: no instance of constructor "c10::Warning::Warning" matches the argument list
argument types are: (c10::UserWarning, {...}, std::string, bool)
do { const cudaError_t __err = cudaThreadExchangeStreamCaptureMode(&strictness); if ((__builtin_expect(static_cast<bool>(__err != cudaSuccess), 0))) { auto error_unused __attribute__((unused)) = cudaGetLastError(); (void)error_unused; ::c10::warn(::c10::Warning( ::c10::UserWarning(), {__func__, "/home/jw/store/src/models/.venv/lib/python3.10/site-packages/torch/include/c10/cuda/CUDAGraphsC10Utils.h", static_cast<uint32_t>(26)}, ::c10::str("CUDA warning: ", cudaGetErrorString(__err)), false));;; } } while (0);
^
/home/jw/store/src/models/.venv/lib/python3.10/site-packages/torch/include/c10/util/Exception.h(117): note #3322-D: number of parameters of function "c10::Warning::Warning(const c10::Warning &)" does not match the call
class __attribute__((visibility("default"))) Warning {
^
/home/jw/store/src/models/.venv/lib/python3.10/site-packages/torch/include/c10/util/Exception.h(117): note #3322-D: number of parameters of function "c10::Warning::Warning(c10::Warning &&)" does not match the call
class __attribute__((visibility("default"))) Warning {
^
/home/jw/store/src/models/.venv/lib/python3.10/site-packages/torch/include/c10/util/Exception.h(136): note #3326-D: function "c10::Warning::Warning(c10::Warning::warning_variant_t, c10::SourceLocation, c10::detail::CompileTimeEmptyString, bool)" does not match because argument #1 does not match parameter
Warning(
^
/home/jw/store/src/models/.venv/lib/python3.10/site-packages/torch/include/c10/util/Exception.h(130): note #3326-D: function "c10::Warning::Warning(c10::Warning::warning_variant_t, c10::SourceLocation, const char *, bool)" does not match because argument #1 does not match parameter
Warning(
^
/home/jw/store/src/models/.venv/lib/python3.10/site-packages/torch/include/c10/util/Exception.h(124): note #3326-D: function "c10::Warning::Warning(c10::Warning::warning_variant_t, const c10::SourceLocation &, std::string, bool)" does not match because argument #1 does not match parameter
Warning(
^
/usr/include/c++/14.2.1/bits/shared_ptr_base.h(1347): error: reference to void is not allowed
element_type&
^
detected during:
instantiation of class "std::__shared_ptr_access<_Tp, _Lp, <unnamed>, <unnamed>> [with _Tp=void, _Lp=__gnu_cxx::_S_atomic, <unnamed>=false, <unnamed>=true]" at line 1424
instantiation of class "std::__shared_ptr<_Tp, _Lp> [with _Tp=void, _Lp=__gnu_cxx::_S_atomic]" at line 175 of /usr/include/c++/14.2.1/bits/shared_ptr.h
instantiation of class "std::shared_ptr<_Tp> [with _Tp=void]" at line 421 of /home/jw/store/src/models/.venv/lib/python3.10/site-packages/torch/include/c10/cuda/CUDACachingAllocator.h
/home/jw/store/src/models/.venv/lib/python3.10/site-packages/torch/include/c10/cuda/impl/CUDAGuardImpl.h(46): error: no instance of constructor "c10::Warning::Warning" matches the argument list
argument types are: (c10::UserWarning, {...}, std::string, bool)
do { const cudaError_t __err = err; if ((__builtin_expect(static_cast<bool>(__err != cudaSuccess), 0))) { auto error_unused __attribute__((unused)) = cudaGetLastError(); (void)error_unused; ::c10::warn(::c10::Warning( ::c10::UserWarning(), {__func__, "/home/jw/store/src/models/.venv/lib/python3.10/site-packages/torch/include/c10/cuda/impl/CUDAGuardImpl.h", static_cast<uint32_t>(46)}, ::c10::str("CUDA warning: ", cudaGetErrorString(__err)), false));;; } } while (0);
^
/home/jw/store/src/models/.venv/lib/python3.10/site-packages/torch/include/c10/util/Exception.h(117): note #3322-D: number of parameters of function "c10::Warning::Warning(const c10::Warning &)" does not match the call
class __attribute__((visibility("default"))) Warning {
^
/home/jw/store/src/models/.venv/lib/python3.10/site-packages/torch/include/c10/util/Exception.h(117): note #3322-D: number of parameters of function "c10::Warning::Warning(c10::Warning &&)" does not match the call
class __attribute__((visibility("default"))) Warning {
^
/home/jw/store/src/models/.venv/lib/python3.10/site-packages/torch/include/c10/util/Exception.h(136): note #3326-D: function "c10::Warning::Warning(c10::Warning::warning_variant_t, c10::SourceLocation, c10::detail::CompileTimeEmptyString, bool)" does not match because argument #1 does not match parameter
Warning(
^
/home/jw/store/src/models/.venv/lib/python3.10/site-packages/torch/include/c10/util/Exception.h(130): note #3326-D: function "c10::Warning::Warning(c10::Warning::warning_variant_t, c10::SourceLocation, const char *, bool)" does not match because argument #1 does not match parameter
Warning(
^
/home/jw/store/src/models/.venv/lib/python3.10/site-packages/torch/include/c10/util/Exception.h(124): note #3326-D: function "c10::Warning::Warning(c10::Warning::warning_variant_t, const c10::SourceLocation &, std::string, bool)" does not match because argument #1 does not match parameter
Warning(
^
/home/jw/store/src/models/.venv/lib/python3.10/site-packages/torch/include/c10/cuda/impl/CUDAGuardImpl.h(57): error: no instance of constructor "c10::Warning::Warning" matches the argument list
argument types are: (c10::UserWarning, {...}, std::string, bool)
do { const cudaError_t __err = c10::cuda::MaybeSetDevice(d.index()); if ((__builtin_expect(static_cast<bool>(__err != cudaSuccess), 0))) { auto error_unused __attribute__((unused)) = cudaGetLastError(); (void)error_unused; ::c10::warn(::c10::Warning( ::c10::UserWarning(), {__func__, "/home/jw/store/src/models/.venv/lib/python3.10/site-packages/torch/include/c10/cuda/impl/CUDAGuardImpl.h", static_cast<uint32_t>(57)}, ::c10::str("CUDA warning: ", cudaGetErrorString(__err)), false));;; } } while (0);
^
/home/jw/store/src/models/.venv/lib/python3.10/site-packages/torch/include/c10/util/Exception.h(117): note #3322-D: number of parameters of function "c10::Warning::Warning(const c10::Warning &)" does not match the call
class __attribute__((visibility("default"))) Warning {
^
/home/jw/store/src/models/.venv/lib/python3.10/site-packages/torch/include/c10/util/Exception.h(117): note #3322-D: number of parameters of function "c10::Warning::Warning(c10::Warning &&)" does not match the call
class __attribute__((visibility("default"))) Warning {
^
/home/jw/store/src/models/.venv/lib/python3.10/site-packages/torch/include/c10/util/Exception.h(136): note #3326-D: function "c10::Warning::Warning(c10::Warning::warning_variant_t, c10::SourceLocation, c10::detail::CompileTimeEmptyString, bool)" does not match because argument #1 does not match parameter
Warning(
^
/home/jw/store/src/models/.venv/lib/python3.10/site-packages/torch/include/c10/util/Exception.h(130): note #3326-D: function "c10::Warning::Warning(c10::Warning::warning_variant_t, c10::SourceLocation, const char *, bool)" does not match because argument #1 does not match parameter
Warning(
^
/home/jw/store/src/models/.venv/lib/python3.10/site-packages/torch/include/c10/util/Exception.h(124): note #3326-D: function "c10::Warning::Warning(c10::Warning::warning_variant_t, const c10::SourceLocation &, std::string, bool)" does not match because argument #1 does not match parameter
Warning(
^
/home/jw/store/src/models/.venv/lib/python3.10/site-packages/torch/include/c10/cuda/impl/CUDAGuardImpl.h(112): error: no instance of constructor "c10::Warning::Warning" matches the argument list
argument types are: (c10::UserWarning, {...}, std::string, bool)
do { const cudaError_t __err = c10::cuda::GetDevice(&orig_device); if ((__builtin_expect(static_cast<bool>(__err != cudaSuccess), 0))) { auto error_unused __attribute__((unused)) = cudaGetLastError(); (void)error_unused; ::c10::warn(::c10::Warning( ::c10::UserWarning(), {__func__, "/home/jw/store/src/models/.venv/lib/python3.10/site-packages/torch/include/c10/cuda/impl/CUDAGuardImpl.h", static_cast<uint32_t>(112)}, ::c10::str("CUDA warning: ", cudaGetErrorString(__err)), false));;; } } while (0);
^
/home/jw/store/src/models/.venv/lib/python3.10/site-packages/torch/include/c10/util/Exception.h(117): note #3322-D: number of parameters of function "c10::Warning::Warning(const c10::Warning &)" does not match the call
class __attribute__((visibility("default"))) Warning {
^
/home/jw/store/src/models/.venv/lib/python3.10/site-packages/torch/include/c10/util/Exception.h(117): note #3322-D: number of parameters of function "c10::Warning::Warning(c10::Warning &&)" does not match the call
class __attribute__((visibility("default"))) Warning {
^
/home/jw/store/src/models/.venv/lib/python3.10/site-packages/torch/include/c10/util/Exception.h(136): note #3326-D: function "c10::Warning::Warning(c10::Warning::warning_variant_t, c10::SourceLocation, c10::detail::CompileTimeEmptyString, bool)" does not match because argument #1 does not match parameter
Warning(
^
/home/jw/store/src/models/.venv/lib/python3.10/site-packages/torch/include/c10/util/Exception.h(130): note #3326-D: function "c10::Warning::Warning(c10::Warning::warning_variant_t, c10::SourceLocation, const char *, bool)" does not match because argument #1 does not match parameter
Warning(
^
/home/jw/store/src/models/.venv/lib/python3.10/site-packages/torch/include/c10/util/Exception.h(124): note #3326-D: function "c10::Warning::Warning(c10::Warning::warning_variant_t, const c10::SourceLocation &, std::string, bool)" does not match because argument #1 does not match parameter
Warning(
^
/home/jw/store/src/models/.venv/lib/python3.10/site-packages/torch/include/c10/cuda/impl/CUDAGuardImpl.h(113): error: no instance of constructor "c10::Warning::Warning" matches the argument list
argument types are: (c10::UserWarning, {...}, std::string, bool)
do { const cudaError_t __err = c10::cuda::SetDevice(device_index); if ((__builtin_expect(static_cast<bool>(__err != cudaSuccess), 0))) { auto error_unused __attribute__((__unused__)) = cudaGetLastError(); (void)error_unused; ::c10::warn(::c10::Warning( ::c10::UserWarning(), {__func__, "/home/jw/store/src/models/.venv/lib/python3.10/site-packages/torch/include/c10/cuda/impl/CUDAGuardImpl.h", static_cast<uint32_t>(113)}, ::c10::str("CUDA warning: ", cudaGetErrorString(__err)), false));;; } } while (0);
^
/home/jw/store/src/models/.venv/lib/python3.10/site-packages/torch/include/c10/util/Exception.h(117): note #3322-D: number of parameters of function "c10::Warning::Warning(const c10::Warning &)" does not match the call
class __attribute__((visibility("default"))) Warning {
^
/home/jw/store/src/models/.venv/lib/python3.10/site-packages/torch/include/c10/util/Exception.h(117): note #3322-D: number of parameters of function "c10::Warning::Warning(c10::Warning &&)" does not match the call
class __attribute__((visibility("default"))) Warning {
^
/home/jw/store/src/models/.venv/lib/python3.10/site-packages/torch/include/c10/util/Exception.h(136): note #3326-D: function "c10::Warning::Warning(c10::Warning::warning_variant_t, c10::SourceLocation, c10::detail::CompileTimeEmptyString, bool)" does not match because argument #1 does not match parameter
Warning(
^
/home/jw/store/src/models/.venv/lib/python3.10/site-packages/torch/include/c10/util/Exception.h(130): note #3326-D: function "c10::Warning::Warning(c10::Warning::warning_variant_t, c10::SourceLocation, const char *, bool)" does not match because argument #1 does not match parameter
Warning(
^
/home/jw/store/src/models/.venv/lib/python3.10/site-packages/torch/include/c10/util/Exception.h(124): note #3326-D: function "c10::Warning::Warning(c10::Warning::warning_variant_t, const c10::SourceLocation &, std::string, bool)" does not match because argument #1 does not match parameter
Warning(
^
/home/jw/store/src/models/.venv/lib/python3.10/site-packages/torch/include/c10/cuda/impl/CUDAGuardImpl.h(119): error: no instance of constructor "c10::Warning::Warning" matches the argument list
argument types are: (c10::UserWarning, {...}, std::string, bool)
do { const cudaError_t __err = cudaEventDestroy(cuda_event); if ((__builtin_expect(static_cast<bool>(__err != cudaSuccess), 0))) { auto error_unused __attribute__((__unused__)) = cudaGetLastError(); (void)error_unused; ::c10::warn(::c10::Warning( ::c10::UserWarning(), {__func__, "/home/jw/store/src/models/.venv/lib/python3.10/site-packages/torch/include/c10/cuda/impl/CUDAGuardImpl.h", static_cast<uint32_t>(119)}, ::c10::str("CUDA warning: ", cudaGetErrorString(__err)), false));;; } } while (0);
^
/home/jw/store/src/models/.venv/lib/python3.10/site-packages/torch/include/c10/util/Exception.h(117): note #3322-D: number of parameters of function "c10::Warning::Warning(const c10::Warning &)" does not match the call
class __attribute__((visibility("default"))) Warning {
^
/home/jw/store/src/models/.venv/lib/python3.10/site-packages/torch/include/c10/util/Exception.h(117): note #3322-D: number of parameters of function "c10::Warning::Warning(c10::Warning &&)" does not match the call
class __attribute__((visibility("default"))) Warning {
^
/home/jw/store/src/models/.venv/lib/python3.10/site-packages/torch/include/c10/util/Exception.h(136): note #3326-D: function "c10::Warning::Warning(c10::Warning::warning_variant_t, c10::SourceLocation, c10::detail::CompileTimeEmptyString, bool)" does not match because argument #1 does not match parameter
Warning(
^
/home/jw/store/src/models/.venv/lib/python3.10/site-packages/torch/include/c10/util/Exception.h(130): note #3326-D: function "c10::Warning::Warning(c10::Warning::warning_variant_t, c10::SourceLocation, const char *, bool)" does not match because argument #1 does not match parameter
Warning(
^
/home/jw/store/src/models/.venv/lib/python3.10/site-packages/torch/include/c10/util/Exception.h(124): note #3326-D: function "c10::Warning::Warning(c10::Warning::warning_variant_t, const c10::SourceLocation &, std::string, bool)" does not match because argument #1 does not match parameter
Warning(
^
/home/jw/store/src/models/.venv/lib/python3.10/site-packages/torch/include/c10/cuda/impl/CUDAGuardImpl.h(120): error: no instance of constructor "c10::Warning::Warning" matches the argument list
argument types are: (c10::UserWarning, {...}, std::string, bool)
do { const cudaError_t __err = c10::cuda::SetDevice(orig_device); if ((__builtin_expect(static_cast<bool>(__err != cudaSuccess), 0))) { auto error_unused __attribute__((__unused__)) = cudaGetLastError(); (void)error_unused; ::c10::warn(::c10::Warning( ::c10::UserWarning(), {__func__, "/home/jw/store/src/models/.venv/lib/python3.10/site-packages/torch/include/c10/cuda/impl/CUDAGuardImpl.h", static_cast<uint32_t>(120)}, ::c10::str("CUDA warning: ", cudaGetErrorString(__err)), false));;; } } while (0);
^
/home/jw/store/src/models/.venv/lib/python3.10/site-packages/torch/include/c10/util/Exception.h(117): note #3322-D: number of parameters of function "c10::Warning::Warning(const c10::Warning &)" does not match the call
class __attribute__((visibility("default"))) Warning {
^
/home/jw/store/src/models/.venv/lib/python3.10/site-packages/torch/include/c10/util/Exception.h(117): note #3322-D: number of parameters of function "c10::Warning::Warning(c10::Warning &&)" does not match the call
class __attribute__((visibility("default"))) Warning {
^
/home/jw/store/src/models/.venv/lib/python3.10/site-packages/torch/include/c10/util/Exception.h(136): note #3326-D: function "c10::Warning::Warning(c10::Warning::warning_variant_t, c10::SourceLocation, c10::detail::CompileTimeEmptyString, bool)" does not match because argument #1 does not match parameter
Warning(
^
/home/jw/store/src/models/.venv/lib/python3.10/site-packages/torch/include/c10/util/Exception.h(130): note #3326-D: function "c10::Warning::Warning(c10::Warning::warning_variant_t, c10::SourceLocation, const char *, bool)" does not match because argument #1 does not match parameter
Warning(
^
/home/jw/store/src/models/.venv/lib/python3.10/site-packages/torch/include/c10/util/Exception.h(124): note #3326-D: function "c10::Warning::Warning(c10::Warning::warning_variant_t, const c10::SourceLocation &, std::string, bool)" does not match because argument #1 does not match parameter
Warning(
^
60 errors detected in the compilation of "/home/jw/store/.cache/uv/built-wheels-v3/pypi/flash-attn/2.6.3/jADvc_RdkbmMJ_JjuhiLf/flash_attn-2.6.3.tar.gz/csrc/flash_attn/src/flash_bwd_hdim128_bf16_sm80.cu".
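The first translation unit aborts here with 60 errors. The failures below in `/usr/include/c++/14.2.1/...` (`__is_array` undefined, `0.0bf16` literal rejected) are characteristic of nvcc being fed host libstdc++ headers newer than it supports. A minimal mitigation sketch, under the assumption that this is the cause and that an older host compiler such as g++-13 is installed (CC/CXX being honored by the build is also an assumption; torch's extension builder generally respects them):

```shell
# Assumption: the errors come from nvcc parsing GCC 14 host headers it does
# not yet support. Point the build at an older, supported host compiler.
export CC=gcc-13
export CXX=g++-13
# nvcc can also be given the host compiler explicitly:
#   nvcc -ccbin g++-13 ...
# Then retry the build, e.g.:
#   uv pip install flash-attn --no-build-isolation
```

Check your CUDA release notes for the exact maximum supported GCC version before picking the pinned compiler.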
[2/84] /opt/cuda/bin/nvcc --generate-dependencies-with-compile --dependency-output /home/jw/store/.cache/uv/built-wheels-v3/pypi/flash-attn/2.6.3/jADvc_RdkbmMJ_JjuhiLf/flash_attn-2.6.3.tar.gz/build/temp.linux-x86_64-cpython-310/csrc/flash_attn/src/flash_bwd_hdim128_bf16_causal_sm80.o.d -I/home/jw/store/.cache/uv/built-wheels-v3/pypi/flash-attn/2.6.3/jADvc_RdkbmMJ_JjuhiLf/flash_attn-2.6.3.tar.gz/csrc/flash_attn -I/home/jw/store/.cache/uv/built-wheels-v3/pypi/flash-attn/2.6.3/jADvc_RdkbmMJ_JjuhiLf/flash_attn-2.6.3.tar.gz/csrc/flash_attn/src -I/home/jw/store/.cache/uv/built-wheels-v3/pypi/flash-attn/2.6.3/jADvc_RdkbmMJ_JjuhiLf/flash_attn-2.6.3.tar.gz/csrc/cutlass/include -I/home/jw/store/src/models/.venv/lib/python3.10/site-packages/torch/include -I/home/jw/store/src/models/.venv/lib/python3.10/site-packages/torch/include/torch/csrc/api/include -I/home/jw/store/src/models/.venv/lib/python3.10/site-packages/torch/include/TH -I/home/jw/store/src/models/.venv/lib/python3.10/site-packages/torch/include/THC -I/opt/cuda/include -I/home/jw/store/src/models/.venv/include -I/home/jw/store/.local/share/uv/python/cpython-3.10.15-linux-x86_64-gnu/include/python3.10 -c -c /home/jw/store/.cache/uv/built-wheels-v3/pypi/flash-attn/2.6.3/jADvc_RdkbmMJ_JjuhiLf/flash_attn-2.6.3.tar.gz/csrc/flash_attn/src/flash_bwd_hdim128_bf16_causal_sm80.cu -o /home/jw/store/.cache/uv/built-wheels-v3/pypi/flash-attn/2.6.3/jADvc_RdkbmMJ_JjuhiLf/flash_attn-2.6.3.tar.gz/build/temp.linux-x86_64-cpython-310/csrc/flash_attn/src/flash_bwd_hdim128_bf16_causal_sm80.o -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_BFLOAT16_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr --compiler-options ''"'"'-fPIC'"'"'' -O3 -std=c++17 -U__CUDA_NO_HALF_OPERATORS__ -U__CUDA_NO_HALF_CONVERSIONS__ -U__CUDA_NO_HALF2_OPERATORS__ -U__CUDA_NO_BFLOAT16_CONVERSIONS__ --expt-relaxed-constexpr --expt-extended-lambda --use_fast_math -gencode arch=compute_80,code=sm_80 -gencode 
arch=compute_90,code=sm_90 --threads 4 -DTORCH_API_INCLUDE_EXTENSION_H '-DPYBIND11_COMPILER_TYPE="_gcc"' '-DPYBIND11_STDLIB="_libstdcpp"' '-DPYBIND11_BUILD_ABI="_cxxabi1011"' -DTORCH_EXTENSION_NAME=flash_attn_2_cuda -D_GLIBCXX_USE_CXX11_ABI=0
FAILED: /home/jw/store/.cache/uv/built-wheels-v3/pypi/flash-attn/2.6.3/jADvc_RdkbmMJ_JjuhiLf/flash_attn-2.6.3.tar.gz/build/temp.linux-x86_64-cpython-310/csrc/flash_attn/src/flash_bwd_hdim128_bf16_causal_sm80.o
/opt/cuda/bin/nvcc --generate-dependencies-with-compile --dependency-output /home/jw/store/.cache/uv/built-wheels-v3/pypi/flash-attn/2.6.3/jADvc_RdkbmMJ_JjuhiLf/flash_attn-2.6.3.tar.gz/build/temp.linux-x86_64-cpython-310/csrc/flash_attn/src/flash_bwd_hdim128_bf16_causal_sm80.o.d -I/home/jw/store/.cache/uv/built-wheels-v3/pypi/flash-attn/2.6.3/jADvc_RdkbmMJ_JjuhiLf/flash_attn-2.6.3.tar.gz/csrc/flash_attn -I/home/jw/store/.cache/uv/built-wheels-v3/pypi/flash-attn/2.6.3/jADvc_RdkbmMJ_JjuhiLf/flash_attn-2.6.3.tar.gz/csrc/flash_attn/src -I/home/jw/store/.cache/uv/built-wheels-v3/pypi/flash-attn/2.6.3/jADvc_RdkbmMJ_JjuhiLf/flash_attn-2.6.3.tar.gz/csrc/cutlass/include -I/home/jw/store/src/models/.venv/lib/python3.10/site-packages/torch/include -I/home/jw/store/src/models/.venv/lib/python3.10/site-packages/torch/include/torch/csrc/api/include -I/home/jw/store/src/models/.venv/lib/python3.10/site-packages/torch/include/TH -I/home/jw/store/src/models/.venv/lib/python3.10/site-packages/torch/include/THC -I/opt/cuda/include -I/home/jw/store/src/models/.venv/include -I/home/jw/store/.local/share/uv/python/cpython-3.10.15-linux-x86_64-gnu/include/python3.10 -c -c /home/jw/store/.cache/uv/built-wheels-v3/pypi/flash-attn/2.6.3/jADvc_RdkbmMJ_JjuhiLf/flash_attn-2.6.3.tar.gz/csrc/flash_attn/src/flash_bwd_hdim128_bf16_causal_sm80.cu -o /home/jw/store/.cache/uv/built-wheels-v3/pypi/flash-attn/2.6.3/jADvc_RdkbmMJ_JjuhiLf/flash_attn-2.6.3.tar.gz/build/temp.linux-x86_64-cpython-310/csrc/flash_attn/src/flash_bwd_hdim128_bf16_causal_sm80.o -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_BFLOAT16_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr --compiler-options ''"'"'-fPIC'"'"'' -O3 -std=c++17 -U__CUDA_NO_HALF_OPERATORS__ -U__CUDA_NO_HALF_CONVERSIONS__ -U__CUDA_NO_HALF2_OPERATORS__ -U__CUDA_NO_BFLOAT16_CONVERSIONS__ --expt-relaxed-constexpr --expt-extended-lambda --use_fast_math -gencode arch=compute_80,code=sm_80 -gencode arch=compute_90,code=sm_90
--threads 4 -DTORCH_API_INCLUDE_EXTENSION_H '-DPYBIND11_COMPILER_TYPE="_gcc"' '-DPYBIND11_STDLIB="_libstdcpp"' '-DPYBIND11_BUILD_ABI="_cxxabi1011"' -DTORCH_EXTENSION_NAME=flash_attn_2_cuda -D_GLIBCXX_USE_CXX11_ABI=0
/usr/include/c++/14.2.1/x86_64-pc-linux-gnu/bits/c++config.h(827): error: user-defined literal operator not found
typedef __decltype(0.0bf16) __bfloat16_t;
^
/usr/include/c++/14.2.1/type_traits(529): error: type name is not allowed
: public __bool_constant<__is_array(_Tp)>
^
/usr/include/c++/14.2.1/type_traits(529): error: identifier "__is_array" is undefined
: public __bool_constant<__is_array(_Tp)>
^
/usr/include/c++/14.2.1/type_traits(581): error: type name is not allowed
: public __bool_constant<__is_member_object_pointer(_Tp)>
^
/usr/include/c++/14.2.1/type_traits(581): error: identifier "__is_member_object_pointer" is undefined
: public __bool_constant<__is_member_object_pointer(_Tp)>
^
/usr/include/c++/14.2.1/type_traits(603): error: type name is not allowed
: public __bool_constant<__is_member_function_pointer(_Tp)>
^
/usr/include/c++/14.2.1/type_traits(603): error: identifier "__is_member_function_pointer" is undefined
: public __bool_constant<__is_member_function_pointer(_Tp)>
^
/usr/include/c++/14.2.1/type_traits(695): error: type name is not allowed
: public __bool_constant<__is_reference(_Tp)>
^
/usr/include/c++/14.2.1/type_traits(695): error: identifier "__is_reference" is undefined
: public __bool_constant<__is_reference(_Tp)>
^
/usr/include/c++/14.2.1/type_traits(731): error: type name is not allowed
: public __bool_constant<__is_object(_Tp)>
^
/usr/include/c++/14.2.1/type_traits(731): error: identifier "__is_object" is undefined
: public __bool_constant<__is_object(_Tp)>
^
/usr/include/c++/14.2.1/type_traits(760): error: type name is not allowed
: public __bool_constant<__is_member_pointer(_Tp)>
^
/usr/include/c++/14.2.1/type_traits(760): error: identifier "__is_member_pointer" is undefined
: public __bool_constant<__is_member_pointer(_Tp)>
^
/usr/include/c++/14.2.1/type_traits(3247): error: type name is not allowed
inline constexpr bool is_array_v = __is_array(_Tp);
^
/usr/include/c++/14.2.1/type_traits(3271): error: type name is not allowed
__is_member_object_pointer(_Tp);
^
/usr/include/c++/14.2.1/type_traits(3281): error: type name is not allowed
__is_member_function_pointer(_Tp);
^
/usr/include/c++/14.2.1/type_traits(3298): error: type name is not allowed
inline constexpr bool is_reference_v = __is_reference(_Tp);
^
/usr/include/c++/14.2.1/type_traits(3315): error: type name is not allowed
inline constexpr bool is_object_v = __is_object(_Tp);
^
/usr/include/c++/14.2.1/type_traits(3328): error: type name is not allowed
inline constexpr bool is_member_pointer_v = __is_member_pointer(_Tp);
^
/usr/include/c++/14.2.1/bits/utility.h(237): error: __type_pack_element is not a template
{ using type = __type_pack_element<_Np, _Types...>; };
^
/usr/include/c++/14.2.1/type_traits(138): error: class "std::enable_if<, void>" has no member "type"
using __enable_if_t = typename enable_if<_Cond, _Tp>::type;
^
detected during:
instantiation of type "std::__enable_if_t<, void>" at line 176
instantiation of "std::__detail::__or_fn" based on template arguments <std::is_referencestd::_Any_data, std::is_functionstd::_Any_data, std::is_voidstd::_Any_data, std::__is_array_unknown_boundsstd::_Any_data> at line 194
instantiation of class "std::_or<_Bn...> [with _Bn=<std::is_referencestd::_Any_data, std::is_functionstd::_Any_data, std::is_voidstd::_Any_data, std::__is_array_unknown_boundsstd::_Any_data>]" at line 1171
instantiation of class "std::is_move_constructible<_Tp> [with _Tp=std::_Any_data]" at line 199
instantiation of class "std::_and<_Bn...> [with _Bn=<std::_not<std::__is_tuple_likestd::_Any_data>, std::is_move_constructiblestd::_Any_data, std::is_move_assignablestd::_Any_data>]" at line 558 of /usr/include/c++/14.2.1/bits/std_function.h
/opt/cuda/include/cuda/std/__type_traits/is_array.h(33): error: type name is not allowed
struct __attribute__((visibility("default"))) is_array : public integral_constant<bool, __is_array(_Tp)>
^
/opt/cuda/include/cuda/std/__type_traits/is_array.h(33): error: identifier "__is_array" is undefined
struct __attribute__((visibility("default"))) is_array : public integral_constant<bool, __is_array(_Tp)>
^
/opt/cuda/include/cuda/std/__type_traits/is_array.h(38): error: type name is not allowed
inline constexpr bool is_array_v = __is_array(_Tp);
^
/opt/cuda/include/cuda/std/__type_traits/is_object.h(34): error: type name is not allowed
struct __attribute__((visibility("default"))) is_object : public integral_constant<bool, __is_object(_Tp)>
^
/opt/cuda/include/cuda/std/__type_traits/is_object.h(34): error: identifier "__is_object" is undefined
struct __attribute__((visibility("default"))) is_object : public integral_constant<bool, __is_object(_Tp)>
^
/opt/cuda/include/cuda/std/__type_traits/is_object.h(39): error: type name is not allowed
inline constexpr bool is_object_v = __is_object(_Tp);
^
/opt/cuda/include/cuda/std/__tuple_dir/tuple_element.h(121): error: __type_pack_element is not a template
typedef __type_pack_element<_Ip, _Types...> type;
^
/opt/cuda/include/cuda/std/__tuple_dir/make_tuple_types.h(50): error: identifier "__type_pack_element" is undefined
__tuple_types<typename _ApplyFn::template __apply<__type_pack_element<_Idx, _Types...>>...>;
^
/opt/cuda/include/cuda/std/__tuple_dir/make_tuple_types.h(50): error: parameter pack "_Idx" was referenced but not expanded
__tuple_types<typename _ApplyFn::template __apply<__type_pack_element<_Idx, _Types...>>...>;
^
/opt/cuda/include/cuda/std/__tuple_dir/make_tuple_types.h(50): error: expected a ";"
__tuple_types<typename _ApplyFn::template __apply<__type_pack_element<_Idx, _Types...>>...>;
^
/opt/cuda/include/cuda/std/__type_traits/common_type.h(103): error: excessive recursion at instantiation of class "cuda::std::__4::common_type<double ********************************************************************************************************************************************************************************************************, double ********************************************************************************************************************************************************************************************************>"
struct __common_type2 : common_type<_D1, _D2>
^
detected during:
instantiation of class "cuda::std::__4::__common_type2<_Tp, _Up, _D1, _D2> [with _Tp=double *******************************************************************************************************************************************************************************************************, _Up=double *******************************************************************************************************************************************************************************************************, _D1=double ********************************************************************************************************************************************************************************************************, _D2=double ********************************************************************************************************************************************************************************************************]" at line 111
instantiation of class "cuda::std::__4::common_type<_Tp, _Up> [with _Tp=double *******************************************************************************************************************************************************************************************************, _Up=double *******************************************************************************************************************************************************************************************************]" at line 103
instantiation of class "cuda::std::__4::__common_type2<_Tp, _Up, _D1, _D2> [with _Tp=double ******************************************************************************************************************************************************************************************************, _Up=double ******************************************************************************************************************************************************************************************************, _D1=double *******************************************************************************************************************************************************************************************************, _D2=double *******************************************************************************************************************************************************************************************************]" at line 111
instantiation of class "cuda::std::__4::common_type<_Tp, _Up> [with _Tp=double ******************************************************************************************************************************************************************************************************, _Up=double ******************************************************************************************************************************************************************************************************]" at line 103
instantiation of class "cuda::std::__4::__common_type2<_Tp, _Up, _D1, _D2> [with _Tp=double *****************************************************************************************************************************************************************************************************, _Up=double *****************************************************************************************************************************************************************************************************, _D1=double ******************************************************************************************************************************************************************************************************, _D2=double ******************************************************************************************************************************************************************************************************]" at line 111
[ 390 instantiation contexts not shown ]
instantiation of class "cuda::std::__4::common_type<_Tp, _Up> [with _Tp=double **, _Up=double **]" at line 103
instantiation of class "cuda::std::__4::__common_type2<_Tp, _Up, _D1, _D2> [with _Tp=double *, _Up=double *, _D1=double **, _D2=double **]" at line 111
instantiation of class "cuda::std::__4::common_type<_Tp, _Up> [with _Tp=double *, _Up=double *]" at line 103
instantiation of class "cuda::std::__4::__common_type2<_Tp, _Up, _D1, _D2> [with _Tp=double, _Up=double, _D1=double *, _D2=double *]" at line 111
instantiation of class "cuda::std::__4::common_type<_Tp, _Up> [with _Tp=double, _Up=double]" at line 353 of /opt/cuda/include/thrust/detail/complex/catrig.h
/opt/cuda/include/cuda/std/__type_traits/common_type.h(103): error: incomplete type "cuda::std::__4::common_type<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4:
:__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__
4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>, 
cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_
t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__deca
y_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>" (aka "cuda::std::__4::common_type<double ********************************************************************************************************************************************************************************************************, double ********************************************************************************************************************************************************************************************************>") is not allowed
struct __common_type2 : common_type<_D1, _D2>
^
detected during:
instantiation of class "cuda::std::__4::__common_type2<_Tp, _Up, _D1, _D2> [with _Tp=double *******************************************************************************************************************************************************************************************************, _Up=double *******************************************************************************************************************************************************************************************************, _D1=double ********************************************************************************************************************************************************************************************************, _D2=double ********************************************************************************************************************************************************************************************************]" at line 111
instantiation of class "cuda::std::__4::common_type<_Tp, _Up> [with _Tp=double *******************************************************************************************************************************************************************************************************, _Up=double *******************************************************************************************************************************************************************************************************]" at line 103
instantiation of class "cuda::std::__4::__common_type2<_Tp, _Up, _D1, _D2> [with _Tp=double ******************************************************************************************************************************************************************************************************, _Up=double ******************************************************************************************************************************************************************************************************, _D1=double *******************************************************************************************************************************************************************************************************, _D2=double *******************************************************************************************************************************************************************************************************]" at line 111
instantiation of class "cuda::std::__4::common_type<_Tp, _Up> [with _Tp=double ******************************************************************************************************************************************************************************************************, _Up=double ******************************************************************************************************************************************************************************************************]" at line 103
instantiation of class "cuda::std::__4::__common_type2<_Tp, _Up, _D1, _D2> [with _Tp=double *****************************************************************************************************************************************************************************************************, _Up=double *****************************************************************************************************************************************************************************************************, _D1=double ******************************************************************************************************************************************************************************************************, _D2=double ******************************************************************************************************************************************************************************************************]" at line 111
[ 390 instantiation contexts not shown ]
instantiation of class "cuda::std::__4::common_type<_Tp, _Up> [with _Tp=double **, _Up=double **]" at line 103
instantiation of class "cuda::std::__4::__common_type2<_Tp, _Up, _D1, _D2> [with _Tp=double *, _Up=double *, _D1=double **, _D2=double **]" at line 111
instantiation of class "cuda::std::__4::common_type<_Tp, _Up> [with _Tp=double *, _Up=double *]" at line 103
instantiation of class "cuda::std::__4::__common_type2<_Tp, _Up, _D1, _D2> [with _Tp=double, _Up=double, _D1=double *, _D2=double *]" at line 111
instantiation of class "cuda::std::__4::common_type<_Tp, _Up> [with _Tp=double, _Up=double]" at line 353 of /opt/cuda/include/thrust/detail/complex/catrig.h
/opt/cuda/include/thrust/detail/complex/catrig.h(353): error: no operator "+" matches these operands
operand types are: thrust::THRUST_200500_800_900_NS::complex<double> + const double
w = clog_for_large_values(z) + m_ln2;
^
/opt/cuda/include/thrust/detail/complex/arithmetic.h(137): note #3322-D: number of parameters of function template "thrust::THRUST_200500_800_900_NS::operator+(const thrust::THRUST_200500_800_900_NS::complex<T> &)" does not match the call
complex<T> operator+(const complex<T>& y)
^
/opt/cuda/include/thrust/detail/complex/arithmetic.h(47): note #3327-D: candidate function template "thrust::THRUST_200500_800_900_NS::operator+(const T0 &, const thrust::THRUST_200500_800_900_NS::complex<T1> &)" failed deduction
complex<::cuda::std::__common_type_t<T0, T1>> operator+(const T0& x, const complex<T1>& y)
^
/opt/cuda/include/thrust/detail/complex/arithmetic.h(40): note #3327-D: candidate function template "thrust::THRUST_200500_800_900_NS::operator+(const thrust::THRUST_200500_800_900_NS::complex<T0> &, const T1 &)" failed deduction
complex<::cuda::std::__common_type_t<T0, T1>> operator+(const complex<T0>& x, const T1& y)
^
/opt/cuda/include/thrust/detail/complex/arithmetic.h(33): note #3327-D: candidate function template "thrust::THRUST_200500_800_900_NS::operator+(const thrust::THRUST_200500_800_900_NS::complex<T0> &, const thrust::THRUST_200500_800_900_NS::complex<T1> &)" failed deduction
complex<::cuda::std::__common_type_t<T0, T1>> operator+(const complex<T0>& x, const complex<T1>& y)
^
/opt/cuda/include/thrust/detail/complex/catrig.h(353): note #3328-D: built-in operator+(<arithmetic>, <arithmetic>) does not match because argument #1 does not match parameter
w = clog_for_large_values(z) + m_ln2;
^
/opt/cuda/include/thrust/detail/complex/catrig.h(353): note #3328-D: built-in operator+(<pointer-to-object>, <ptrdiff_t>) does not match because argument #1 does not match parameter
w = clog_for_large_values(z) + m_ln2;
^
/opt/cuda/include/thrust/detail/complex/catrig.h(353): note #3328-D: built-in operator+(<ptrdiff_t>, <pointer-to-object>) does not match because argument #1 does not match parameter
w = clog_for_large_values(z) + m_ln2;
^
/opt/cuda/include/thrust/detail/complex/catrig.h(357): error: no operator "+" matches these operands
operand types are: thrust::THRUST_200500_800_900_NS::complex<double> + const double
w = clog_for_large_values(-z) + m_ln2;
^
/opt/cuda/include/thrust/detail/complex/arithmetic.h(137): note #3322-D: number of parameters of function template "thrust::THRUST_200500_800_900_NS::operator+(const thrust::THRUST_200500_800_900_NS::complex<T> &)" does not match the call
complex<T> operator+(const complex<T>& y)
^
/opt/cuda/include/thrust/detail/complex/arithmetic.h(47): note #3327-D: candidate function template "thrust::THRUST_200500_800_900_NS::operator+(const T0 &, const thrust::THRUST_200500_800_900_NS::complex<T1> &)" failed deduction
complex<::cuda::std::__common_type_t<T0, T1>> operator+(const T0& x, const complex<T1>& y)
^
/opt/cuda/include/thrust/detail/complex/arithmetic.h(40): note #3327-D: candidate function template "thrust::THRUST_200500_800_900_NS::operator+(const thrust::THRUST_200500_800_900_NS::complex<T0> &, const T1 &)" failed deduction
complex<::cuda::std::__common_type_t<T0, T1>> operator+(const complex<T0>& x, const T1& y)
^
/opt/cuda/include/thrust/detail/complex/arithmetic.h(33): note #3327-D: candidate function template "thrust::THRUST_200500_800_900_NS::operator+(const thrust::THRUST_200500_800_900_NS::complex<T0> &, const thrust::THRUST_200500_800_900_NS::complex<T1> &)" failed deduction
complex<::cuda::std::__common_type_t<T0, T1>> operator+(const complex<T0>& x, const complex<T1>& y)
^
/opt/cuda/include/thrust/detail/complex/catrig.h(357): note #3328-D: built-in operator+(<arithmetic>, <arithmetic>) does not match because argument #1 does not match parameter
w = clog_for_large_values(-z) + m_ln2;
^
/opt/cuda/include/thrust/detail/complex/catrig.h(357): note #3328-D: built-in operator+(<pointer-to-object>, <ptrdiff_t>) does not match because argument #1 does not match parameter
w = clog_for_large_values(-z) + m_ln2;
^
/opt/cuda/include/thrust/detail/complex/catrig.h(357): note #3328-D: built-in operator+(<ptrdiff_t>, <pointer-to-object>) does not match because argument #1 does not match parameter
w = clog_for_large_values(-z) + m_ln2;
^
/opt/cuda/include/cuda/std/__type_traits/common_type.h(103): error: excessive recursion at instantiation of class "cuda::std::__4::common_type<float ********************************************************************************************************************************************************************************************************, float ********************************************************************************************************************************************************************************************************>"
struct __common_type2 : common_type<_D1, _D2>
^
detected during:
instantiation of class "cuda::std::__4::__common_type2<_Tp, _Up, _D1, _D2> [with _Tp=float *******************************************************************************************************************************************************************************************************, _Up=float *******************************************************************************************************************************************************************************************************, _D1=float ********************************************************************************************************************************************************************************************************, _D2=float ********************************************************************************************************************************************************************************************************]" at line 111
instantiation of class "cuda::std::__4::common_type<_Tp, _Up> [with _Tp=float *******************************************************************************************************************************************************************************************************, _Up=float *******************************************************************************************************************************************************************************************************]" at line 103
instantiation of class "cuda::std::__4::__common_type2<_Tp, _Up, _D1, _D2> [with _Tp=float ******************************************************************************************************************************************************************************************************, _Up=float ******************************************************************************************************************************************************************************************************, _D1=float *******************************************************************************************************************************************************************************************************, _D2=float *******************************************************************************************************************************************************************************************************]" at line 111
instantiation of class "cuda::std::__4::common_type<_Tp, _Up> [with _Tp=float ******************************************************************************************************************************************************************************************************, _Up=float ******************************************************************************************************************************************************************************************************]" at line 103
instantiation of class "cuda::std::__4::__common_type2<_Tp, _Up, _D1, _D2> [with _Tp=float *****************************************************************************************************************************************************************************************************, _Up=float *****************************************************************************************************************************************************************************************************, _D1=float ******************************************************************************************************************************************************************************************************, _D2=float ******************************************************************************************************************************************************************************************************]" at line 111
[ 390 instantiation contexts not shown ]
instantiation of class "cuda::std::__4::common_type<_Tp, _Up> [with _Tp=float **, _Up=float **]" at line 103
instantiation of class "cuda::std::__4::__common_type2<_Tp, _Up, _D1, _D2> [with _Tp=float *, _Up=float *, _D1=float **, _D2=float **]" at line 111
instantiation of class "cuda::std::__4::common_type<_Tp, _Up> [with _Tp=float *, _Up=float *]" at line 103
instantiation of class "cuda::std::__4::__common_type2<_Tp, _Up, _D1, _D2> [with _Tp=float, _Up=float, _D1=float *, _D2=float *]" at line 111
instantiation of class "cuda::std::__4::common_type<_Tp, _Up> [with _Tp=float, _Up=float]" at line 219 of /opt/cuda/include/thrust/detail/complex/catrigf.h
/opt/cuda/include/cuda/std/__type_traits/common_type.h(103): error: incomplete type "cuda::std::__4::common_type<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4:
:__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__
4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>, 
cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_
t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__deca
y_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t<cuda::std::__4::__decay_t>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>" (aka "cuda::std::__4::common_type<float ********************************************************************************************************************************************************************************************************, float ********************************************************************************************************************************************************************************************************>") is not allowed
struct __common_type2 : common_type<_D1, _D2>
^
detected during:
instantiation of class "cuda::std::__4::__common_type2<_Tp, _Up, _D1, _D2> [with _Tp=float *******************************************************************************************************************************************************************************************************, _Up=float *******************************************************************************************************************************************************************************************************, _D1=float ********************************************************************************************************************************************************************************************************, _D2=float ********************************************************************************************************************************************************************************************************]" at line 111
instantiation of class "cuda::std::__4::common_type<_Tp, _Up> [with _Tp=float *******************************************************************************************************************************************************************************************************, _Up=float *******************************************************************************************************************************************************************************************************]" at line 103
instantiation of class "cuda::std::__4::__common_type2<_Tp, _Up, _D1, _D2> [with _Tp=float ******************************************************************************************************************************************************************************************************, _Up=float ******************************************************************************************************************************************************************************************************, _D1=float *******************************************************************************************************************************************************************************************************, _D2=float *******************************************************************************************************************************************************************************************************]" at line 111
instantiation of class "cuda::std::__4::common_type<_Tp, _Up> [with _Tp=float ******************************************************************************************************************************************************************************************************, _Up=float ******************************************************************************************************************************************************************************************************]" at line 103
instantiation of class "cuda::std::__4::__common_type2<_Tp, _Up, _D1, _D2> [with _Tp=float *****************************************************************************************************************************************************************************************************, _Up=float *****************************************************************************************************************************************************************************************************, _D1=float ******************************************************************************************************************************************************************************************************, _D2=float ******************************************************************************************************************************************************************************************************]" at line 111
[ 390 instantiation contexts not shown ]
instantiation of class "cuda::std::__4::common_type<_Tp, _Up> [with _Tp=float **, _Up=float **]" at line 103
instantiation of class "cuda::std::__4::__common_type2<_Tp, _Up, _D1, _D2> [with _Tp=float *, _Up=float *, _D1=float **, _D2=float **]" at line 111
instantiation of class "cuda::std::__4::common_type<_Tp, _Up> [with _Tp=float *, _Up=float *]" at line 103
instantiation of class "cuda::std::__4::__common_type2<_Tp, _Up, _D1, _D2> [with _Tp=float, _Up=float, _D1=float *, _D2=float *]" at line 111
instantiation of class "cuda::std::__4::common_type<_Tp, _Up> [with _Tp=float, _Up=float]" at line 219 of /opt/cuda/include/thrust/detail/complex/catrigf.h
/opt/cuda/include/thrust/detail/complex/catrigf.h(219): error: no operator "+" matches these operands
operand types are: thrust::THRUST_200500_800_900_NS::complex + const float
w = clog_for_large_values(z) + m_ln2;
^
/opt/cuda/include/thrust/detail/complex/arithmetic.h(137): note #3322-D: number of parameters of function template "thrust::THRUST_200500_800_900_NS::operator+(const thrust::THRUST_200500_800_900_NS::complex<T> &)" does not match the call
complex<T> operator+(const complex<T>& y)
^
/opt/cuda/include/thrust/detail/complex/arithmetic.h(47): note #3327-D: candidate function template "thrust::THRUST_200500_800_900_NS::operator+(const T0 &, const thrust::THRUST_200500_800_900_NS::complex<T1> &)" failed deduction
complex<::cuda::std::__common_type_t<T0, T1>> operator+(const T0& x, const complex<T1>& y)
^
/opt/cuda/include/thrust/detail/complex/arithmetic.h(40): note #3327-D: candidate function template "thrust::THRUST_200500_800_900_NS::operator+(const thrust::THRUST_200500_800_900_NS::complex<T0> &, const T1 &)" failed deduction
complex<::cuda::std::__common_type_t<T0, T1>> operator+(const complex<T0>& x, const T1& y)
^
/opt/cuda/include/thrust/detail/complex/arithmetic.h(33): note #3327-D: candidate function template "thrust::THRUST_200500_800_900_NS::operator+(const thrust::THRUST_200500_800_900_NS::complex<T0> &, const thrust::THRUST_200500_800_900_NS::complex<T1> &)" failed deduction
complex<::cuda::std::__common_type_t<T0, T1>> operator+(const complex<T0>& x, const complex<T1>& y)
^
/opt/cuda/include/thrust/detail/complex/catrigf.h(219): note #3328-D: built-in operator+(<arithmetic>, <arithmetic>) does not match because argument #1 does not match parameter
w = clog_for_large_values(z) + m_ln2;
^
/opt/cuda/include/thrust/detail/complex/catrigf.h(219): note #3328-D: built-in operator+(<pointer-to-object>, <ptrdiff_t>) does not match because argument #1 does not match parameter
w = clog_for_large_values(z) + m_ln2;
^
/opt/cuda/include/thrust/detail/complex/catrigf.h(219): note #3328-D: built-in operator+(<ptrdiff_t>, <pointer-to-object>) does not match because argument #1 does not match parameter
w = clog_for_large_values(z) + m_ln2;
^
/opt/cuda/include/thrust/detail/complex/catrigf.h(223): error: no operator "+" matches these operands
operand types are: thrust::THRUST_200500_800_900_NS::complex<float> + const float
w = clog_for_large_values(-z) + m_ln2;
^
/opt/cuda/include/thrust/detail/complex/arithmetic.h(137): note #3322-D: number of parameters of function template "thrust::THRUST_200500_800_900_NS::operator+(const thrust::THRUST_200500_800_900_NS::complex<T> &)" does not match the call
complex<T> operator+(const complex<T>& y)
^
/opt/cuda/include/thrust/detail/complex/arithmetic.h(47): note #3327-D: candidate function template "thrust::THRUST_200500_800_900_NS::operator+(const T0 &, const thrust::THRUST_200500_800_900_NS::complex<T1> &)" failed deduction
complex<::cuda::std::__common_type_t<T0, T1>> operator+(const T0& x, const complex<T1>& y)
^
/opt/cuda/include/thrust/detail/complex/arithmetic.h(40): note #3327-D: candidate function template "thrust::THRUST_200500_800_900_NS::operator+(const thrust::THRUST_200500_800_900_NS::complex<T0> &, const T1 &)" failed deduction
complex<::cuda::std::__common_type_t<T0, T1>> operator+(const complex<T0>& x, const T1& y)
^
/opt/cuda/include/thrust/detail/complex/arithmetic.h(33): note #3327-D: candidate function template "thrust::THRUST_200500_800_900_NS::operator+(const thrust::THRUST_200500_800_900_NS::complex<T0> &, const thrust::THRUST_200500_800_900_NS::complex<T1> &)" failed deduction
complex<::cuda::std::__common_type_t<T0, T1>> operator+(const complex<T0>& x, const complex<T1>& y)
^
/opt/cuda/include/thrust/detail/complex/catrigf.h(223): note #3328-D: built-in operator+(<arithmetic>, <arithmetic>) does not match because argument #1 does not match parameter
w = clog_for_large_values(-z) + m_ln2;
^
/opt/cuda/include/thrust/detail/complex/catrigf.h(223): note #3328-D: built-in operator+(<pointer-to-object>, <ptrdiff_t>) does not match because argument #1 does not match parameter
w = clog_for_large_values(-z) + m_ln2;
^
/opt/cuda/include/thrust/detail/complex/catrigf.h(223): note #3328-D: built-in operator+(<ptrdiff_t>, <pointer-to-object>) does not match because argument #1 does not match parameter
w = clog_for_large_values(-z) + m_ln2;
^
/opt/cuda/include/thrust/detail/complex/csqrt.h(149): error: no operator "*" matches these operands
operand types are: thrust::THRUST_200500_800_900_NS::complex<double> * double
return (result * 2.0);
^
/opt/cuda/include/thrust/detail/complex/arithmetic.h(89): note #3327-D: candidate function template "thrust::THRUST_200500_800_900_NS::operator*(const T0 &, const thrust::THRUST_200500_800_900_NS::complex<T1> &)" failed deduction
complex<::cuda::std::__common_type_t<T0, T1>> operator*(const T0& x, const complex<T1>& y)
^
/opt/cuda/include/thrust/detail/complex/arithmetic.h(82): note #3327-D: candidate function template "thrust::THRUST_200500_800_900_NS::operator*(const thrust::THRUST_200500_800_900_NS::complex<T0> &, const T1 &)" failed deduction
complex<::cuda::std::__common_type_t<T0, T1>> operator*(const complex<T0>& x, const T1& y)
^
/opt/cuda/include/thrust/detail/complex/arithmetic.h(75): note #3327-D: candidate function template "thrust::THRUST_200500_800_900_NS::operator*(const thrust::THRUST_200500_800_900_NS::complex<T0> &, const thrust::THRUST_200500_800_900_NS::complex<T1> &)" failed deduction
complex<::cuda::std::__common_type_t<T0, T1>> operator*(const complex<T0>& x, const complex<T1>& y)
^
/opt/cuda/include/thrust/detail/complex/csqrt.h(149): note #3328-D: built-in operator*(<arithmetic>, <arithmetic>) does not match because argument #1 does not match parameter
return (result * 2.0);
^
/opt/cuda/include/thrust/detail/complex/csqrt.h(153): error: no operator "*" matches these operands
operand types are: thrust::THRUST_200500_800_900_NS::complex<double> * double
return (result * 0.5);
^
/opt/cuda/include/thrust/detail/complex/arithmetic.h(89): note #3327-D: candidate function template "thrust::THRUST_200500_800_900_NS::operator*(const T0 &, const thrust::THRUST_200500_800_900_NS::complex<T1> &)" failed deduction
complex<::cuda::std::__common_type_t<T0, T1>> operator*(const T0& x, const complex<T1>& y)
^
/opt/cuda/include/thrust/detail/complex/arithmetic.h(82): note #3327-D: candidate function template "thrust::THRUST_200500_800_900_NS::operator*(const thrust::THRUST_200500_800_900_NS::complex<T0> &, const T1 &)" failed deduction
complex<::cuda::std::__common_type_t<T0, T1>> operator*(const complex<T0>& x, const T1& y)
^
/opt/cuda/include/thrust/detail/complex/arithmetic.h(75): note #3327-D: candidate function template "thrust::THRUST_200500_800_900_NS::operator*(const thrust::THRUST_200500_800_900_NS::complex<T0> &, const thrust::THRUST_200500_800_900_NS::complex<T1> &)" failed deduction
complex<::cuda::std::__common_type_t<T0, T1>> operator*(const complex<T0>& x, const complex<T1>& y)
^
/opt/cuda/include/thrust/detail/complex/csqrt.h(153): note #3328-D: built-in operator*(<arithmetic>, <arithmetic>) does not match because argument #1 does not match parameter
return (result * 0.5);
^
/opt/cuda/include/thrust/detail/complex/csqrtf.h(151): error: no operator "*" matches these operands
operand types are: thrust::THRUST_200500_800_900_NS::complex<float> * float
return (result * 2.0f);
^
/opt/cuda/include/thrust/detail/complex/arithmetic.h(89): note #3327-D: candidate function template "thrust::THRUST_200500_800_900_NS::operator*(const T0 &, const thrust::THRUST_200500_800_900_NS::complex<T1> &)" failed deduction
complex<::cuda::std::__common_type_t<T0, T1>> operator*(const T0& x, const complex<T1>& y)
^
/opt/cuda/include/thrust/detail/complex/arithmetic.h(82): note #3327-D: candidate function template "thrust::THRUST_200500_800_900_NS::operator*(const thrust::THRUST_200500_800_900_NS::complex<T0> &, const T1 &)" failed deduction
complex<::cuda::std::__common_type_t<T0, T1>> operator*(const complex<T0>& x, const T1& y)
^
/opt/cuda/include/thrust/detail/complex/arithmetic.h(75): note #3327-D: candidate function template "thrust::THRUST_200500_800_900_NS::operator*(const thrust::THRUST_200500_800_900_NS::complex<T0> &, const thrust::THRUST_200500_800_900_NS::complex<T1> &)" failed deduction
complex<::cuda::std::__common_type_t<T0, T1>> operator*(const complex<T0>& x, const complex<T1>& y)
^
/opt/cuda/include/thrust/detail/complex/csqrtf.h(151): note #3328-D: built-in operator*(<arithmetic>, <arithmetic>) does not match because argument #1 does not match parameter
return (result * 2.0f);
^
/opt/cuda/include/thrust/detail/complex/csqrtf.h(155): error: no operator "*" matches these operands
operand types are: thrust::THRUST_200500_800_900_NS::complex<float> * float
return (result * 0.5f);
^
/opt/cuda/include/thrust/detail/complex/arithmetic.h(89): note #3327-D: candidate function template "thrust::THRUST_200500_800_900_NS::operator*(const T0 &, const thrust::THRUST_200500_800_900_NS::complex<T1> &)" failed deduction
complex<::cuda::std::__common_type_t<T0, T1>> operator*(const T0& x, const complex<T1>& y)
^
/opt/cuda/include/thrust/detail/complex/arithmetic.h(82): note #3327-D: candidate function template "thrust::THRUST_200500_800_900_NS::operator*(const thrust::THRUST_200500_800_900_NS::complex<T0> &, const T1 &)" failed deduction
complex<::cuda::std::__common_type_t<T0, T1>> operator*(const complex<T0>& x, const T1& y)
^
/opt/cuda/include/thrust/detail/complex/arithmetic.h(75): note #3327-D: candidate function template "thrust::THRUST_200500_800_900_NS::operator*(const thrust::THRUST_200500_800_900_NS::complex<T0> &, const thrust::THRUST_200500_800_900_NS::complex<T1> &)" failed deduction
complex<::cuda::std::__common_type_t<T0, T1>> operator*(const complex<T0>& x, const complex<T1>& y)
^
/opt/cuda/include/thrust/detail/complex/csqrtf.h(155): note #3328-D: built-in operator*(<arithmetic>, <arithmetic>) does not match because argument #1 does not match parameter
return (result * 0.5f);
^
/home/jw/store/src/models/.venv/lib/python3.10/site-packages/torch/include/c10/core/TensorImpl.h(1928): error: no instance of constructor "c10::Warning::Warning" matches the argument list
argument types are: (c10::UserWarning, {...}, std::string, bool)
if (::c10::WarningUtils::get_warnAlways()) { ::c10::warn(::c10::Warning( ::c10::UserWarning(), {__func__, "/home/jw/store/src/models/.venv/lib/python3.10/site-packages/torch/include/c10/core/TensorImpl.h", static_cast<uint32_t>(1928)}, ::c10::str("Named tensors and all their associated APIs are an experimental feature ", "and subject to change. Please do not use them for anything important ", "until they are released as stable."), false));;; } else { __attribute__((unused)) static const auto torch_warn_once_0 = [&] { ::c10::warn(::c10::Warning( ::c10::UserWarning(), {__func__, "/home/jw/store/src/models/.venv/lib/python3.10/site-packages/torch/include/c10/core/TensorImpl.h", static_cast<uint32_t>(1928)}, ::c10::str("Named tensors and all their associated APIs are an experimental feature ", "and subject to change. Please do not use them for anything important ", "until they are released as stable."), false));;; return true; }(); }
^
/home/jw/store/src/models/.venv/lib/python3.10/site-packages/torch/include/c10/util/Exception.h(117): note #3322-D: number of parameters of function "c10::Warning::Warning(const c10::Warning &)" does not match the call
class __attribute__((visibility("default"))) Warning {
^
/home/jw/store/src/models/.venv/lib/python3.10/site-packages/torch/include/c10/util/Exception.h(117): note #3322-D: number of parameters of function "c10::Warning::Warning(c10::Warning &&)" does not match the call
class __attribute__((visibility("default"))) Warning {
^
/home/jw/store/src/models/.venv/lib/python3.10/site-packages/torch/include/c10/util/Exception.h(136): note #3326-D: function "c10::Warning::Warning(c10::Warning::warning_variant_t, c10::SourceLocation, c10::detail::CompileTimeEmptyString, bool)" does not match because argument #1 does not match parameter
Warning(
^
/home/jw/store/src/models/.venv/lib/python3.10/site-packages/torch/include/c10/util/Exception.h(130): note #3326-D: function "c10::Warning::Warning(c10::Warning::warning_variant_t, c10::SourceLocation, const char *, bool)" does not match because argument #1 does not match parameter
Warning(
^
/home/jw/store/src/models/.venv/lib/python3.10/site-packages/torch/include/c10/util/Exception.h(124): note #3326-D: function "c10::Warning::Warning(c10::Warning::warning_variant_t, const c10::SourceLocation &, std::string, bool)" does not match because argument #1 does not match parameter
Warning(
^
/home/jw/store/src/models/.venv/lib/python3.10/site-packages/torch/include/c10/core/TensorImpl.h(1928): error: no instance of constructor "c10::Warning::Warning" matches the argument list
argument types are: (c10::UserWarning, {...}, std::string, bool)
if (::c10::WarningUtils::get_warnAlways()) { ::c10::warn(::c10::Warning( ::c10::UserWarning(), {__func__, "/home/jw/store/src/models/.venv/lib/python3.10/site-packages/torch/include/c10/core/TensorImpl.h", static_cast<uint32_t>(1928)}, ::c10::str("Named tensors and all their associated APIs are an experimental feature ", "and subject to change. Please do not use them for anything important ", "until they are released as stable."), false));;; } else { __attribute__((unused)) static const auto torch_warn_once_0 = [&] { ::c10::warn(::c10::Warning( ::c10::UserWarning(), {__func__, "/home/jw/store/src/models/.venv/lib/python3.10/site-packages/torch/include/c10/core/TensorImpl.h", static_cast<uint32_t>(1928)}, ::c10::str("Named tensors and all their associated APIs are an experimental feature ", "and subject to change. Please do not use them for anything important ", "until they are released as stable."), false));;; return true; }(); }
^
/home/jw/store/src/models/.venv/lib/python3.10/site-packages/torch/include/c10/util/Exception.h(117): note #3322-D: number of parameters of function "c10::Warning::Warning(const c10::Warning &)" does not match the call
class __attribute__((visibility("default"))) Warning {
^
/home/jw/store/src/models/.venv/lib/python3.10/site-packages/torch/include/c10/util/Exception.h(117): note #3322-D: number of parameters of function "c10::Warning::Warning(c10::Warning &&)" does not match the call
class __attribute__((visibility("default"))) Warning {
^
/home/jw/store/src/models/.venv/lib/python3.10/site-packages/torch/include/c10/util/Exception.h(136): note #3326-D: function "c10::Warning::Warning(c10::Warning::warning_variant_t, c10::SourceLocation, c10::detail::CompileTimeEmptyString, bool)" does not match because argument #1 does not match parameter
Warning(
^
/home/jw/store/src/models/.venv/lib/python3.10/site-packages/torch/include/c10/util/Exception.h(130): note #3326-D: function "c10::Warning::Warning(c10::Warning::warning_variant_t, c10::SourceLocation, const char *, bool)" does not match because argument #1 does not match parameter
Warning(
^
/home/jw/store/src/models/.venv/lib/python3.10/site-packages/torch/include/c10/util/Exception.h(124): note #3326-D: function "c10::Warning::Warning(c10::Warning::warning_variant_t, const c10::SourceLocation &, std::string, bool)" does not match because argument #1 does not match parameter
Warning(
^
/usr/include/c++/14.2.1/type_traits(3155): error: class "std::invoke_result<lambda []()->c10::ScalarType>" has no member "type"
using invoke_result_t = typename invoke_result<_Fn, _Args...>::type;
^
detected during:
instantiation of type "std::invoke_result_t<lambda []()->c10::ScalarType>" at line 34 of /home/jw/store/src/models/.venv/lib/python3.10/site-packages/torch/include/c10/util/Optional.h
instantiation of "T c10::value_or_else(const std::optional<T> &, F &&) [with T=c10::ScalarType, F=lambda []()->c10::ScalarType]" at line 32 of /home/jw/store/src/models/.venv/lib/python3.10/site-packages/torch/include/c10/core/TensorOptions.h
/usr/include/c++/14.2.1/type_traits(3155): error: class "std::invoke_result<lambda []()->caffe2::TypeMeta>" has no member "type"
using invoke_result_t = typename invoke_result<_Fn, _Args...>::type;
^
detected during:
instantiation of type "std::invoke_result_t<lambda []()->caffe2::TypeMeta>" at line 34 of /home/jw/store/src/models/.venv/lib/python3.10/site-packages/torch/include/c10/util/Optional.h
instantiation of "T c10::value_or_else(const std::optional<T> &, F &&) [with T=caffe2::TypeMeta, F=lambda []()->caffe2::TypeMeta]" at line 37 of /home/jw/store/src/models/.venv/lib/python3.10/site-packages/torch/include/c10/core/TensorOptions.h
/usr/include/c++/14.2.1/type_traits(3155): error: class "std::invoke_result<lambda []()->c10::Device>" has no member "type"
using invoke_result_t = typename invoke_result<_Fn, _Args...>::type;
^
detected during:
instantiation of type "std::invoke_result_t<lambda []()->c10::Device>" at line 34 of /home/jw/store/src/models/.venv/lib/python3.10/site-packages/torch/include/c10/util/Optional.h
instantiation of "T c10::value_or_else(const std::optional<T> &, F &&) [with T=c10::Device, F=lambda []()->c10::Device]" at line 45 of /home/jw/store/src/models/.venv/lib/python3.10/site-packages/torch/include/c10/core/TensorOptions.h
/home/jw/store/src/models/.venv/lib/python3.10/site-packages/torch/include/ATen/core/LegacyTypeDispatch.h(74): error: no instance of constructor "c10::Warning::Warning" matches the argument list
argument types are: (c10::UserWarning, {...}, const char *, bool)
if (::c10::WarningUtils::get_warnAlways()) { ::c10::warn(::c10::Warning( ::c10::UserWarning(), {__func__, "/home/jw/store/src/models/.venv/lib/python3.10/site-packages/torch/include/ATen/core/LegacyTypeDispatch.h", static_cast<uint32_t>(74)}, ::c10::str("AutoNonVariableTypeMode is deprecated and will be removed in 1.10 release. " "For kernel implementations please use AutoDispatchBelowADInplaceOrView instead, " "If you are looking for a user facing API to enable running your inference-only " "workload, please use c10::InferenceMode. Using AutoDispatchBelowADInplaceOrView in user code " "is under risk of producing silent wrong result in some edge cases. " "See Note [AutoDispatchBelowAutograd] for more details."), false));;; } else { __attribute__((unused)) static const auto torch_warn_once_1 = [&] { ::c10::warn(::c10::Warning( ::c10::UserWarning(), {__func__, "/home/jw/store/src/models/.venv/lib/python3.10/site-packages/torch/include/ATen/core/LegacyTypeDispatch.h", static_cast<uint32_t>(74)}, ::c10::str("AutoNonVariableTypeMode is deprecated and will be removed in 1.10 release. " "For kernel implementations please use AutoDispatchBelowADInplaceOrView instead, " "If you are looking for a user facing API to enable running your inference-only " "workload, please use c10::InferenceMode. Using AutoDispatchBelowADInplaceOrView in user code " "is under risk of producing silent wrong result in some edge cases. " "See Note [AutoDispatchBelowAutograd] for more details."), false));;; return true; }(); }
^
/home/jw/store/src/models/.venv/lib/python3.10/site-packages/torch/include/c10/util/Exception.h(117): note #3322-D: number of parameters of function "c10::Warning::Warning(const c10::Warning &)" does not match the call
class __attribute__((visibility("default"))) Warning {
^
/home/jw/store/src/models/.venv/lib/python3.10/site-packages/torch/include/c10/util/Exception.h(117): note #3322-D: number of parameters of function "c10::Warning::Warning(c10::Warning &&)" does not match the call
class __attribute__((visibility("default"))) Warning {
^
/home/jw/store/src/models/.venv/lib/python3.10/site-packages/torch/include/c10/util/Exception.h(136): note #3326-D: function "c10::Warning::Warning(c10::Warning::warning_variant_t, c10::SourceLocation, c10::detail::CompileTimeEmptyString, bool)" does not match because argument #1 does not match parameter
Warning(
^
/home/jw/store/src/models/.venv/lib/python3.10/site-packages/torch/include/c10/util/Exception.h(130): note #3326-D: function "c10::Warning::Warning(c10::Warning::warning_variant_t, c10::SourceLocation, const char *, bool)" does not match because argument #1 does not match parameter
Warning(
^
/home/jw/store/src/models/.venv/lib/python3.10/site-packages/torch/include/c10/util/Exception.h(124): note #3326-D: function "c10::Warning::Warning(c10::Warning::warning_variant_t, const c10::SourceLocation &, std::string, bool)" does not match because argument #1 does not match parameter
Warning(
^
/home/jw/store/src/models/.venv/lib/python3.10/site-packages/torch/include/ATen/core/LegacyTypeDispatch.h(74): error: no instance of constructor "c10::Warning::Warning" matches the argument list
argument types are: (c10::UserWarning, {...}, const char *, bool)
if (::c10::WarningUtils::get_warnAlways()) { ::c10::warn(::c10::Warning( ::c10::UserWarning(), {__func__, "/home/jw/store/src/models/.venv/lib/python3.10/site-packages/torch/include/ATen/core/LegacyTypeDispatch.h", static_cast<uint32_t>(74)}, ::c10::str("AutoNonVariableTypeMode is deprecated and will be removed in 1.10 release. " "For kernel implementations please use AutoDispatchBelowADInplaceOrView instead, " "If you are looking for a user facing API to enable running your inference-only " "workload, please use c10::InferenceMode. Using AutoDispatchBelowADInplaceOrView in user code " "is under risk of producing silent wrong result in some edge cases. " "See Note [AutoDispatchBelowAutograd] for more details."), false));;; } else { __attribute__((unused)) static const auto torch_warn_once_1 = [&] { ::c10::warn(::c10::Warning( ::c10::UserWarning(), {__func__, "/home/jw/store/src/models/.venv/lib/python3.10/site-packages/torch/include/ATen/core/LegacyTypeDispatch.h", static_cast<uint32_t>(74)}, ::c10::str("AutoNonVariableTypeMode is deprecated and will be removed in 1.10 release. " "For kernel implementations please use AutoDispatchBelowADInplaceOrView instead, " "If you are looking for a user facing API to enable running your inference-only " "workload, please use c10::InferenceMode. Using AutoDispatchBelowADInplaceOrView in user code " "is under risk of producing silent wrong result in some edge cases. " "See Note [AutoDispatchBelowAutograd] for more details."), false));;; return true; }(); }
^
/home/jw/store/src/models/.venv/lib/python3.10/site-packages/torch/include/c10/util/Exception.h(117): note #3322-D: number of parameters of function "c10::Warning::Warning(const c10::Warning &)" does not match the call
class __attribute__((visibility("default"))) Warning {
^
/home/jw/store/src/models/.venv/lib/python3.10/site-packages/torch/include/c10/util/Exception.h(117): note #3322-D: number of parameters of function "c10::Warning::Warning(c10::Warning &&)" does not match the call
class __attribute__((visibility("default"))) Warning {
^
/home/jw/store/src/models/.venv/lib/python3.10/site-packages/torch/include/c10/util/Exception.h(136): note #3326-D: function "c10::Warning::Warning(c10::Warning::warning_variant_t, c10::SourceLocation, c10::detail::CompileTimeEmptyString, bool)" does not match because argument #1 does not match parameter
Warning(
^
/home/jw/store/src/models/.venv/lib/python3.10/site-packages/torch/include/c10/util/Exception.h(130): note #3326-D: function "c10::Warning::Warning(c10::Warning::warning_variant_t, c10::SourceLocation, const char *, bool)" does not match because argument #1 does not match parameter
Warning(
^
/home/jw/store/src/models/.venv/lib/python3.10/site-packages/torch/include/c10/util/Exception.h(124): note #3326-D: function "c10::Warning::Warning(c10::Warning::warning_variant_t, const c10::SourceLocation &, std::string, bool)" does not match because argument #1 does not match parameter
Warning(
^
/home/jw/store/src/models/.venv/lib/python3.10/site-packages/torch/include/c10/util/env.h(32): error: no instance of constructor "c10::Warning::Warning" matches the argument list
argument types are: (c10::UserWarning, {...}, std::string, bool)
::c10::warn(::c10::Warning( ::c10::UserWarning(), {__func__, "/home/jw/store/src/models/.venv/lib/python3.10/site-packages/torch/include/c10/util/env.h", static_cast<uint32_t>(32)}, ::c10::str("Ignoring invalid value for boolean flag ", name, ": ", envar, "valid values are 0 or 1."), false));;
^
/home/jw/store/src/models/.venv/lib/python3.10/site-packages/torch/include/c10/util/Exception.h(117): note #3322-D: number of parameters of function "c10::Warning::Warning(const c10::Warning &)" does not match the call
class __attribute__((visibility("default"))) Warning {
^
/home/jw/store/src/models/.venv/lib/python3.10/site-packages/torch/include/c10/util/Exception.h(117): note #3322-D: number of parameters of function "c10::Warning::Warning(c10::Warning &&)" does not match the call
class __attribute__((visibility("default"))) Warning {
^
/home/jw/store/src/models/.venv/lib/python3.10/site-packages/torch/include/c10/util/Exception.h(136): note #3326-D: function "c10::Warning::Warning(c10::Warning::warning_variant_t, c10::SourceLocation, c10::detail::CompileTimeEmptyString, bool)" does not match because argument #1 does not match parameter
Warning(
^
/home/jw/store/src/models/.venv/lib/python3.10/site-packages/torch/include/c10/util/Exception.h(130): note #3326-D: function "c10::Warning::Warning(c10::Warning::warning_variant_t, c10::SourceLocation, const char *, bool)" does not match because argument #1 does not match parameter
Warning(
^
/home/jw/store/src/models/.venv/lib/python3.10/site-packages/torch/include/c10/util/Exception.h(124): note #3326-D: function "c10::Warning::Warning(c10::Warning::warning_variant_t, const c10::SourceLocation &, std::string, bool)" does not match because argument #1 does not match parameter
Warning(
^
/home/jw/store/src/models/.venv/lib/python3.10/site-packages/torch/include/ATen/core/NamedTensor.h(40): error: no suitable user-defined conversion from "std::__detail::__unique_ptr_t<at::NamedTensorMeta>" (aka "std::unique_ptr<at::NamedTensorMeta, std::default_delete<at::NamedTensorMeta>>") to "std::unique_ptr<c10::NamedTensorMetaInterface, std::default_delete<c10::NamedTensorMetaInterface>>" exists
return std::make_unique<NamedTensorMeta>(HasNonWildcard, names);
^
/home/jw/store/src/models/.venv/lib/python3.10/site-packages/torch/include/c10/cuda/CUDAGraphsC10Utils.h(26): error: no instance of constructor "c10::Warning::Warning" matches the argument list
argument types are: (c10::UserWarning, {...}, std::string, bool)
do { const cudaError_t __err = cudaThreadExchangeStreamCaptureMode(&strictness); if ((__builtin_expect(static_cast<bool>(__err != cudaSuccess), 0))) { auto error_unused __attribute__((unused)) = cudaGetLastError(); (void)error_unused; ::c10::warn(::c10::Warning( ::c10::UserWarning(), {__func__, "/home/jw/store/src/models/.venv/lib/python3.10/site-packages/torch/include/c10/cuda/CUDAGraphsC10Utils.h", static_cast<uint32_t>(26)}, ::c10::str("CUDA warning: ", cudaGetErrorString(__err)), false));;; } } while (0);
^
/home/jw/store/src/models/.venv/lib/python3.10/site-packages/torch/include/c10/util/Exception.h(117): note #3322-D: number of parameters of function "c10::Warning::Warning(const c10::Warning &)" does not match the call
class __attribute__((visibility("default"))) Warning {
^
/home/jw/store/src/models/.venv/lib/python3.10/site-packages/torch/include/c10/util/Exception.h(117): note #3322-D: number of parameters of function "c10::Warning::Warning(c10::Warning &&)" does not match the call
class __attribute__((visibility("default"))) Warning {
^
/home/jw/store/src/models/.venv/lib/python3.10/site-packages/torch/include/c10/util/Exception.h(136): note #3326-D: function "c10::Warning::Warning(c10::Warning::warning_variant_t, c10::SourceLocation, c10::detail::CompileTimeEmptyString, bool)" does not match because argument #1 does not match parameter
Warning(
^
/home/jw/store/src/models/.venv/lib/python3.10/site-packages/torch/include/c10/util/Exception.h(130): note #3326-D: function "c10::Warning::Warning(c10::Warning::warning_variant_t, c10::SourceLocation, const char *, bool)" does not match because argument #1 does not match parameter
Warning(
^
/home/jw/store/src/models/.venv/lib/python3.10/site-packages/torch/include/c10/util/Exception.h(124): note #3326-D: function "c10::Warning::Warning(c10::Warning::warning_variant_t, const c10::SourceLocation &, std::string, bool)" does not match because argument #1 does not match parameter
Warning(
^
/usr/include/c++/14.2.1/bits/shared_ptr_base.h(1347): error: reference to void is not allowed
element_type&
^
detected during:
instantiation of class "std::__shared_ptr_access<_Tp, _Lp, <unnamed>, <unnamed>> [with _Tp=void, _Lp=__gnu_cxx::_S_atomic, <unnamed>=false, <unnamed>=true]" at line 1424
instantiation of class "std::__shared_ptr<_Tp, _Lp> [with _Tp=void, _Lp=__gnu_cxx::_S_atomic]" at line 175 of /usr/include/c++/14.2.1/bits/shared_ptr.h
instantiation of class "std::shared_ptr<_Tp> [with _Tp=void]" at line 421 of /home/jw/store/src/models/.venv/lib/python3.10/site-packages/torch/include/c10/cuda/CUDACachingAllocator.h
/home/jw/store/src/models/.venv/lib/python3.10/site-packages/torch/include/c10/cuda/impl/CUDAGuardImpl.h(46): error: no instance of constructor "c10::Warning::Warning" matches the argument list
argument types are: (c10::UserWarning, {...}, std::string, bool)
do { const cudaError_t __err = err; if ((__builtin_expect(static_cast<bool>(__err != cudaSuccess), 0))) { auto error_unused __attribute__((unused)) = cudaGetLastError(); (void)error_unused; ::c10::warn(::c10::Warning( ::c10::UserWarning(), {__func__, "/home/jw/store/src/models/.venv/lib/python3.10/site-packages/torch/include/c10/cuda/impl/CUDAGuardImpl.h", static_cast<uint32_t>(46)}, ::c10::str("CUDA warning: ", cudaGetErrorString(__err)), false));;; } } while (0);
^
/home/jw/store/src/models/.venv/lib/python3.10/site-packages/torch/include/c10/util/Exception.h(117): note #3322-D: number of parameters of function "c10::Warning::Warning(const c10::Warning &)" does not match the call
class __attribute__((visibility("default"))) Warning {
^
/home/jw/store/src/models/.venv/lib/python3.10/site-packages/torch/include/c10/util/Exception.h(117): note #3322-D: number of parameters of function "c10::Warning::Warning(c10::Warning &&)" does not match the call
class __attribute__((visibility("default"))) Warning {
^
/home/jw/store/src/models/.venv/lib/python3.10/site-packages/torch/include/c10/util/Exception.h(136): note #3326-D: function "c10::Warning::Warning(c10::Warning::warning_variant_t, c10::SourceLocation, c10::detail::CompileTimeEmptyString, bool)" does not match because argument #1 does not match parameter
Warning(
^
/home/jw/store/src/models/.venv/lib/python3.10/site-packages/torch/include/c10/util/Exception.h(130): note #3326-D: function "c10::Warning::Warning(c10::Warning::warning_variant_t, c10::SourceLocation, const char *, bool)" does not match because argument #1 does not match parameter
Warning(
^
/home/jw/store/src/models/.venv/lib/python3.10/site-packages/torch/include/c10/util/Exception.h(124): note #3326-D: function "c10::Warning::Warning(c10::Warning::warning_variant_t, const c10::SourceLocation &, std::string, bool)" does not match because argument #1 does not match parameter
Warning(
^
/home/jw/store/src/models/.venv/lib/python3.10/site-packages/torch/include/c10/cuda/impl/CUDAGuardImpl.h(57): error: no instance of constructor "c10::Warning::Warning" matches the argument list
argument types are: (c10::UserWarning, {...}, std::string, bool)
do { const cudaError_t __err = c10::cuda::MaybeSetDevice(d.index()); if ((__builtin_expect(static_cast<bool>(__err != cudaSuccess), 0))) { auto error_unused __attribute__((unused)) = cudaGetLastError(); (void)error_unused; ::c10::warn(::c10::Warning( ::c10::UserWarning(), {__func__, "/home/jw/store/src/models/.venv/lib/python3.10/site-packages/torch/include/c10/cuda/impl/CUDAGuardImpl.h", static_cast<uint32_t>(57)}, ::c10::str("CUDA warning: ", cudaGetErrorString(__err)), false));;; } } while (0);
^
/home/jw/store/src/models/.venv/lib/python3.10/site-packages/torch/include/c10/util/Exception.h(117): note #3322-D: number of parameters of function "c10::Warning::Warning(const c10::Warning &)" does not match the call
class __attribute__((visibility("default"))) Warning {
^
/home/jw/store/src/models/.venv/lib/python3.10/site-packages/torch/include/c10/util/Exception.h(117): note #3322-D: number of parameters of function "c10::Warning::Warning(c10::Warning &&)" does not match the call
class __attribute__((visibility("default"))) Warning {
^
/home/jw/store/src/models/.venv/lib/python3.10/site-packages/torch/include/c10/util/Exception.h(136): note #3326-D: function "c10::Warning::Warning(c10::Warning::warning_variant_t, c10::SourceLocation, c10::detail::CompileTimeEmptyString, bool)" does not match because argument #1 does not match parameter
Warning(
^
/home/jw/store/src/models/.venv/lib/python3.10/site-packages/torch/include/c10/util/Exception.h(130): note #3326-D: function "c10::Warning::Warning(c10::Warning::warning_variant_t, c10::SourceLocation, const char *, bool)" does not match because argument #1 does not match parameter
Warning(
^
/home/jw/store/src/models/.venv/lib/python3.10/site-packages/torch/include/c10/util/Exception.h(124): note #3326-D: function "c10::Warning::Warning(c10::Warning::warning_variant_t, const c10::SourceLocation &, std::string, bool)" does not match because argument #1 does not match parameter
Warning(
^
/home/jw/store/src/models/.venv/lib/python3.10/site-packages/torch/include/c10/cuda/impl/CUDAGuardImpl.h(112): error: no instance of constructor "c10::Warning::Warning" matches the argument list
argument types are: (c10::UserWarning, {...}, std::string, bool)
do { const cudaError_t __err = c10::cuda::GetDevice(&orig_device); if ((__builtin_expect(static_cast(__err != cudaSuccess), 0))) { auto error_unused attribute((unused)) = cudaGetLastError(); (void)error_unused; ::c10::warn(::c10::Warning( ::c10::UserWarning(), {func, "/home/jw/store/src/models/.venv/lib/python3.10/site-packages/torch/include/c10/cuda/impl/CUDAGuardImpl.h", static_cast<uint32_t>(112)}, ::c10::str("CUDA warning: ", cudaGetErrorString(__err)), false));;; } } while (0);
^
/home/jw/store/src/models/.venv/lib/python3.10/site-packages/torch/include/c10/util/Exception.h(117): note #3322-D: number of parameters of function "c10::Warning::Warning(const c10::Warning &)" does not match the call
class attribute((visibility("default"))) Warning {
^
/home/jw/store/src/models/.venv/lib/python3.10/site-packages/torch/include/c10/util/Exception.h(117): note #3322-D: number of parameters of function "c10::Warning::Warning(c10::Warning &&)" does not match the call
class attribute((visibility("default"))) Warning {
^
/home/jw/store/src/models/.venv/lib/python3.10/site-packages/torch/include/c10/util/Exception.h(136): note #3326-D: function "c10::Warning::Warning(c10::Warning::warning_variant_t, c10::SourceLocation, c10::detail::CompileTimeEmptyString, bool)" does not match because argument #1 does not match parameter
Warning(
^
/home/jw/store/src/models/.venv/lib/python3.10/site-packages/torch/include/c10/util/Exception.h(130): note #3326-D: function "c10::Warning::Warning(c10::Warning::warning_variant_t, c10::SourceLocation, const char *, bool)" does not match because argument #1 does not match parameter
Warning(
^
/home/jw/store/src/models/.venv/lib/python3.10/site-packages/torch/include/c10/util/Exception.h(124): note #3326-D: function "c10::Warning::Warning(c10::Warning::warning_variant_t, const c10::SourceLocation &, std::string, bool)" does not match because argument #1 does not match parameter
Warning(
^
/home/jw/store/src/models/.venv/lib/python3.10/site-packages/torch/include/c10/cuda/impl/CUDAGuardImpl.h(113): error: no instance of constructor "c10::Warning::Warning" matches the argument list
argument types are: (c10::UserWarning, {...}, std::string, bool)
do { const cudaError_t __err = c10::cuda::SetDevice(device_index); if ((__builtin_expect(static_cast(__err != cudaSuccess), 0))) { auto error_unused attribute((unused)) = cudaGetLastError(); (void)error_unused; ::c10::warn(::c10::Warning( ::c10::UserWarning(), {func, "/home/jw/store/src/models/.venv/lib/python3.10/site-packages/torch/include/c10/cuda/impl/CUDAGuardImpl.h", static_cast<uint32_t>(113)}, ::c10::str("CUDA warning: ", cudaGetErrorString(__err)), false));;; } } while (0);
^
/home/jw/store/src/models/.venv/lib/python3.10/site-packages/torch/include/c10/util/Exception.h(117): note #3322-D: number of parameters of function "c10::Warning::Warning(const c10::Warning &)" does not match the call
class attribute((visibility("default"))) Warning {
^
/home/jw/store/src/models/.venv/lib/python3.10/site-packages/torch/include/c10/util/Exception.h(117): note #3322-D: number of parameters of function "c10::Warning::Warning(c10::Warning &&)" does not match the call
class attribute((visibility("default"))) Warning {
^
/home/jw/store/src/models/.venv/lib/python3.10/site-packages/torch/include/c10/util/Exception.h(136): note #3326-D: function "c10::Warning::Warning(c10::Warning::warning_variant_t, c10::SourceLocation, c10::detail::CompileTimeEmptyString, bool)" does not match because argument #1 does not match parameter
Warning(
^
/home/jw/store/src/models/.venv/lib/python3.10/site-packages/torch/include/c10/util/Exception.h(130): note #3326-D: function "c10::Warning::Warning(c10::Warning::warning_variant_t, c10::SourceLocation, const char *, bool)" does not match because argument #1 does not match parameter
Warning(
^
/home/jw/store/src/models/.venv/lib/python3.10/site-packages/torch/include/c10/util/Exception.h(124): note #3326-D: function "c10::Warning::Warning(c10::Warning::warning_variant_t, const c10::SourceLocation &, std::string, bool)" does not match because argument #1 does not match parameter
Warning(
^
/home/jw/store/src/models/.venv/lib/python3.10/site-packages/torch/include/c10/cuda/impl/CUDAGuardImpl.h(119): error: no instance of constructor "c10::Warning::Warning" matches the argument list
argument types are: (c10::UserWarning, {...}, std::string, bool)
do { const cudaError_t __err = cudaEventDestroy(cuda_event); if ((__builtin_expect(static_cast(__err != cudaSuccess), 0))) { auto error_unused attribute((unused)) = cudaGetLastError(); (void)error_unused; ::c10::warn(::c10::Warning( ::c10::UserWarning(), {func, "/home/jw/store/src/models/.venv/lib/python3.10/site-packages/torch/include/c10/cuda/impl/CUDAGuardImpl.h", static_cast<uint32_t>(119)}, ::c10::str("CUDA warning: ", cudaGetErrorString(__err)), false));;; } } while (0);
^
/home/jw/store/src/models/.venv/lib/python3.10/site-packages/torch/include/c10/util/Exception.h(117): note #3322-D: number of parameters of function "c10::Warning::Warning(const c10::Warning &)" does not match the call
class attribute((visibility("default"))) Warning {
^
/home/jw/store/src/models/.venv/lib/python3.10/site-packages/torch/include/c10/util/Exception.h(117): note #3322-D: number of parameters of function "c10::Warning::Warning(c10::Warning &&)" does not match the call
class attribute((visibility("default"))) Warning {
^
/home/jw/store/src/models/.venv/lib/python3.10/site-packages/torch/include/c10/util/Exception.h(136): note #3326-D: function "c10::Warning::Warning(c10::Warning::warning_variant_t, c10::SourceLocation, c10::detail::CompileTimeEmptyString, bool)" does not match because argument #1 does not match parameter
Warning(
^
/home/jw/store/src/models/.venv/lib/python3.10/site-packages/torch/include/c10/util/Exception.h(130): note #3326-D: function "c10::Warning::Warning(c10::Warning::warning_variant_t, c10::SourceLocation, const char *, bool)" does not match because argument #1 does not match parameter
Warning(
^
/home/jw/store/src/models/.venv/lib/python3.10/site-packages/torch/include/c10/util/Exception.h(124): note #3326-D: function "c10::Warning::Warning(c10::Warning::warning_variant_t, const c10::SourceLocation &, std::string, bool)" does not match because argument #1 does not match parameter
Warning(
^
/home/jw/store/src/models/.venv/lib/python3.10/site-packages/torch/include/c10/cuda/impl/CUDAGuardImpl.h(120): error: no instance of constructor "c10::Warning::Warning" matches the argument list
argument types are: (c10::UserWarning, {...}, std::string, bool)
do { const cudaError_t __err = c10::cuda::SetDevice(orig_device); if ((__builtin_expect(static_cast(__err != cudaSuccess), 0))) { auto error_unused attribute((unused)) = cudaGetLastError(); (void)error_unused; ::c10::warn(::c10::Warning( ::c10::UserWarning(), {func, "/home/jw/store/src/models/.venv/lib/python3.10/site-packages/torch/include/c10/cuda/impl/CUDAGuardImpl.h", static_cast<uint32_t>(120)}, ::c10::str("CUDA warning: ", cudaGetErrorString(__err)), false));;; } } while (0);
^
/home/jw/store/src/models/.venv/lib/python3.10/site-packages/torch/include/c10/util/Exception.h(117): note #3322-D: number of parameters of function "c10::Warning::Warning(const c10::Warning &)" does not match the call
class attribute((visibility("default"))) Warning {
^
/home/jw/store/src/models/.venv/lib/python3.10/site-packages/torch/include/c10/util/Exception.h(117): note #3322-D: number of parameters of function "c10::Warning::Warning(c10::Warning &&)" does not match the call
class attribute((visibility("default"))) Warning {
^
/home/jw/store/src/models/.venv/lib/python3.10/site-packages/torch/include/c10/util/Exception.h(136): note #3326-D: function "c10::Warning::Warning(c10::Warning::warning_variant_t, c10::SourceLocation, c10::detail::CompileTimeEmptyString, bool)" does not match because argument #1 does not match parameter
Warning(
^
/home/jw/store/src/models/.venv/lib/python3.10/site-packages/torch/include/c10/util/Exception.h(130): note #3326-D: function "c10::Warning::Warning(c10::Warning::warning_variant_t, c10::SourceLocation, const char *, bool)" does not match because argument #1 does not match parameter
Warning(
^
/home/jw/store/src/models/.venv/lib/python3.10/site-packages/torch/include/c10/util/Exception.h(124): note #3326-D: function "c10::Warning::Warning(c10::Warning::warning_variant_t, const c10::SourceLocation &, std::string, bool)" does not match because argument #1 does not match parameter
Warning(
^
60 errors detected in the compilation of "/home/jw/store/.cache/uv/built-wheels-v3/pypi/flash-attn/2.6.3/jADvc_RdkbmMJ_JjuhiLf/flash_attn-2.6.3.tar.gz/csrc/flash_attn/src/flash_bwd_hdim128_bf16_causal_sm80.cu".
ninja: build stopped: subcommand failed.
--- stderr:
fatal: invalid gitfile format: /home/jw/store/.cache/uv/built-wheels-v3/.git
/home/jw/store/src/models/.venv/lib/python3.10/site-packages/torch/utils/cpp_extension.py:361: UserWarning:
!! WARNING !!
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
Your compiler (clang++) is not compatible with the compiler Pytorch was
built with for this platform, which is g++ on linux. Please
use g++ to to compile your extension. Alternatively, you may
compile PyTorch from source using clang++, and then you can also use
clang++ to compile your extension.
See https://github.com/pytorch/pytorch/blob/master/CONTRIBUTING.md for help
with compiling PyTorch from source.
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
!! WARNING !!
warnings.warn(WRONG_COMPILER_WARNING.format(
/home/jw/store/src/models/.venv/lib/python3.10/site-packages/torch/utils/cpp_extension.py:416: UserWarning: The detected CUDA version (12.6) has a minor version mismatch with the version that was used to compile PyTorch (12.4). Most likely this shouldn't be a problem.
warnings.warn(CUDA_MISMATCH_WARN.format(cuda_str_version, torch.version.cuda))
/home/jw/store/src/models/.venv/lib/python3.10/site-packages/torch/utils/cpp_extension.py:426: UserWarning: There are no clang++ version bounds defined for CUDA version 12.6
warnings.warn(f'There are no {compiler_name} version bounds defined for CUDA version {cuda_str_version}')
Emitting ninja build file /home/jw/store/.cache/uv/built-wheels-v3/pypi/flash-attn/2.6.3/jADvc_RdkbmMJ_JjuhiLf/flash_attn-2.6.3.tar.gz/build/temp.linux-x86_64-cpython-310/build.ninja...
Compiling objects...
Using envvar MAX_JOBS (2) as the number of workers...
Traceback (most recent call last):
File "", line 450, in run
File "/home/jw/store/.local/share/uv/python/cpython-3.10.15-linux-x86_64-gnu/lib/python3.10/urllib/request.py", line 241, in urlretrieve
with contextlib.closing(urlopen(url, data)) as fp:
File "/home/jw/store/.local/share/uv/python/cpython-3.10.15-linux-x86_64-gnu/lib/python3.10/urllib/request.py", line 216, in urlopen
return opener.open(url, data, timeout)
File "/home/jw/store/.local/share/uv/python/cpython-3.10.15-linux-x86_64-gnu/lib/python3.10/urllib/request.py", line 525, in open
response = meth(req, response)
File "/home/jw/store/.local/share/uv/python/cpython-3.10.15-linux-x86_64-gnu/lib/python3.10/urllib/request.py", line 634, in http_response
response = self.parent.error(
File "/home/jw/store/.local/share/uv/python/cpython-3.10.15-linux-x86_64-gnu/lib/python3.10/urllib/request.py", line 563, in error
return self._call_chain(*args)
File "/home/jw/store/.local/share/uv/python/cpython-3.10.15-linux-x86_64-gnu/lib/python3.10/urllib/request.py", line 496, in _call_chain
result = func(*args)
File "/home/jw/store/.local/share/uv/python/cpython-3.10.15-linux-x86_64-gnu/lib/python3.10/urllib/request.py", line 643, in http_error_default
raise HTTPError(req.full_url, code, msg, hdrs, fp)
urllib.error.HTTPError: HTTP Error 404: Not Found
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/jw/store/src/models/.venv/lib/python3.10/site-packages/torch/utils/cpp_extension.py", line 2104, in _run_ninja_build
subprocess.run(
File "/home/jw/store/.local/share/uv/python/cpython-3.10.15-linux-x86_64-gnu/lib/python3.10/subprocess.py", line 526, in run
raise CalledProcessError(retcode, process.args,
subprocess.CalledProcessError: Command '['ninja', '-v', '-j', '2']' returned non-zero exit status 1.
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "", line 11, in
File "/home/jw/store/src/models/.venv/lib/python3.10/site-packages/setuptools/build_meta.py", line 431, in build_wheel
return _build(['bdist_wheel'])
File "/home/jw/store/src/models/.venv/lib/python3.10/site-packages/setuptools/build_meta.py", line 422, in _build
return self._build_with_temp_dir(
File "/home/jw/store/src/models/.venv/lib/python3.10/site-packages/setuptools/build_meta.py", line 403, in _build_with_temp_dir
self.run_setup()
File "/home/jw/store/src/models/.venv/lib/python3.10/site-packages/setuptools/build_meta.py", line 516, in run_setup
super().run_setup(setup_script=setup_script)
File "/home/jw/store/src/models/.venv/lib/python3.10/site-packages/setuptools/build_meta.py", line 318, in run_setup
exec(code, locals())
File "", line 490, in
File "/home/jw/store/src/models/.venv/lib/python3.10/site-packages/setuptools/init.py", line 117, in setup
return distutils.core.setup(**attrs)
File "/home/jw/store/src/models/.venv/lib/python3.10/site-packages/setuptools/_distutils/core.py", line 183, in setup
return run_commands(dist)
File "/home/jw/store/src/models/.venv/lib/python3.10/site-packages/setuptools/_distutils/core.py", line 199, in run_commands
dist.run_commands()
File "/home/jw/store/src/models/.venv/lib/python3.10/site-packages/setuptools/_distutils/dist.py", line 954, in run_commands
self.run_command(cmd)
File "/home/jw/store/src/models/.venv/lib/python3.10/site-packages/setuptools/dist.py", line 991, in run_command
super().run_command(command)
File "/home/jw/store/src/models/.venv/lib/python3.10/site-packages/setuptools/_distutils/dist.py", line 973, in run_command
cmd_obj.run()
File "", line 467, in run
File "/home/jw/store/src/models/.venv/lib/python3.10/site-packages/setuptools/_vendor/wheel/bdist_wheel.py", line 368, in run
self.run_command("build")
File "/home/jw/store/src/models/.venv/lib/python3.10/site-packages/setuptools/_distutils/cmd.py", line 316, in run_command
self.distribution.run_command(command)
File "/home/jw/store/src/models/.venv/lib/python3.10/site-packages/setuptools/dist.py", line 991, in run_command
super().run_command(command)
File "/home/jw/store/src/models/.venv/lib/python3.10/site-packages/setuptools/_distutils/dist.py", line 973, in run_command
cmd_obj.run()
File "/home/jw/store/src/models/.venv/lib/python3.10/site-packages/setuptools/_distutils/command/build.py", line 135, in run
self.run_command(cmd_name)
File "/home/jw/store/src/models/.venv/lib/python3.10/site-packages/setuptools/_distutils/cmd.py", line 316, in run_command
self.distribution.run_command(command)
File "/home/jw/store/src/models/.venv/lib/python3.10/site-packages/setuptools/dist.py", line 991, in run_command
super().run_command(command)
File "/home/jw/store/src/models/.venv/lib/python3.10/site-packages/setuptools/_distutils/dist.py", line 973, in run_command
cmd_obj.run()
File "/home/jw/store/src/models/.venv/lib/python3.10/site-packages/setuptools/command/build_ext.py", line 98, in run
_build_ext.run(self)
File "/home/jw/store/src/models/.venv/lib/python3.10/site-packages/setuptools/_distutils/command/build_ext.py", line 359, in run
self.build_extensions()
File "/home/jw/store/src/models/.venv/lib/python3.10/site-packages/torch/utils/cpp_extension.py", line 868, in build_extensions
build_ext.build_extensions(self)
File "/home/jw/store/src/models/.venv/lib/python3.10/site-packages/setuptools/_distutils/command/build_ext.py", line 476, in build_extensions
self._build_extensions_serial()
File "/home/jw/store/src/models/.venv/lib/python3.10/site-packages/setuptools/_distutils/command/build_ext.py", line 502, in _build_extensions_serial
self.build_extension(ext)
File "/home/jw/store/src/models/.venv/lib/python3.10/site-packages/setuptools/command/build_ext.py", line 263, in build_extension
_build_ext.build_extension(self, ext)
File "/home/jw/store/src/models/.venv/lib/python3.10/site-packages/setuptools/_distutils/command/build_ext.py", line 557, in build_extension
objects = self.compiler.compile(
File "/home/jw/store/src/models/.venv/lib/python3.10/site-packages/torch/utils/cpp_extension.py", line 681, in unix_wrap_ninja_compile
_write_ninja_file_and_compile_objects(
File "/home/jw/store/src/models/.venv/lib/python3.10/site-packages/torch/utils/cpp_extension.py", line 1784, in _write_ninja_file_and_compile_objects
_run_ninja_build(
File "/home/jw/store/src/models/.venv/lib/python3.10/site-packages/torch/utils/cpp_extension.py", line 2120, in _run_ninja_build
raise RuntimeError(message) from e
RuntimeError: Error compiling objects for extension
(.venv) ➜ models git:(main)
Finally I installed flash-attn from the source code, and it works well
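The root cause flagged in the log above is the clang++/g++ mismatch detected by torch.utils.cpp_extension. A minimal stdlib sketch (compiler candidates assumed) to check up front which C++ compiler a build would pick up from PATH:

```python
import shutil
import subprocess

def cxx_compiler_info(candidates=("g++", "clang++")):
    """Return (name, path, version line) of the first C++ compiler found on PATH."""
    for name in candidates:
        path = shutil.which(name)
        if path:
            out = subprocess.run([path, "--version"],
                                 capture_output=True, text=True)
            version = out.stdout.splitlines()[0] if out.stdout else "unknown"
            return name, path, version
    return None  # no C++ compiler found

print(cxx_compiler_info())
```

If clang++ wins here on a setup where PyTorch was built with g++, exporting CXX=g++ (and CC=gcc) before rebuilding is the usual workaround suggested by the warning text in the log.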
| gharchive/issue | 2024-10-24T09:22:24 | 2025-04-01T06:44:17.829545 | {
"authors": [
"syyxsxx",
"tholonia"
],
"repo": "genmoai/models",
"url": "https://github.com/genmoai/models/issues/22",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1728522041 | Listen on localhost
Hi! Can we force the app to listen on a local IP like 127.0.0.1? That would be great, because that way we can use it behind an nginx reverse proxy.
Maybe you can edit the config file located in conf/config.yaml, by changing the value of field site.listen_addr from the default value of :2222 to 127.0.0.1:2222.
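A sketch of that change (only site.listen_addr is taken from this thread; the surrounding structure of conf/config.yaml is assumed):

```yaml
# conf/config.yaml -- bind only to loopback so nginx can proxy to it
site:
  listen_addr: 127.0.0.1:2222   # default is ":2222", which listens on all interfaces
```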
| gharchive/issue | 2023-05-27T07:44:42 | 2025-04-01T06:44:17.840990 | {
"authors": [
"bart3nder",
"genshen"
],
"repo": "genshen/ssh-web-console",
"url": "https://github.com/genshen/ssh-web-console/issues/17",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
328488695 | Make the runc binary optional
In distributions (btw. img is now available in Gentoo) we usually don't want to bundle binaries but to specify them as dependencies. Would it be possible to add an option that disables the embedded binary building for runc?
I can make a build tag. I'll place details here when done :) Also, woohoo Gentoo!
:+1: I would like to build my own runc. :100: to build tag
opened https://github.com/genuinetools/img/pull/108 :)
| gharchive/issue | 2018-06-01T11:46:56 | 2025-04-01T06:44:17.949789 | {
"authors": [
"frezbo",
"jessfraz",
"mrueg"
],
"repo": "genuinetools/img",
"url": "https://github.com/genuinetools/img/issues/107",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
408382251 | Add new timestamp prefix actions incl all refactoring
Extends new actions for other timestamp formats. To me the format I suggest is easier to read.
I do not use the dialog box. Not necessary to me.
What do you think?
As soon as I have free time I will try your suggestion. Thank you very much for the contribution.
| gharchive/pull-request | 2019-02-09T01:28:57 | 2025-04-01T06:44:17.968399 | {
"authors": [
"MathiasRenner",
"geobarrod"
],
"repo": "geobarrod/KDE-Services",
"url": "https://github.com/geobarrod/KDE-Services/pull/7",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
688837807 | Addition of Country Vocabulary
Based off ISO 3166-1
Downloaded from http://dd.eionet.europa.eu/vocabulary/common/countries/view
converted rdf -> ttl using https://rdf-translator.appspot.com/
removed validation fails around multiple prefLabels per language.
Required for migration of Party data
@nicholascar Is there an active vocab with a SPARQL endpoint that we can use instead of creating our own?
Country doesn't appear to be in LOCI
Thanks, David
DBPedia: <http://dbpedia.org/sparql>
It’s certainly got everything you could ever want in there - everything in Wikipedia - but also a lot of junk you don’t want, e.g. see:
PREFIX dbo: <http://dbpedia.org/ontology/>
PREFIX dbr: <http://dbpedia.org/resource/>
SELECT ?country
WHERE {
?x a dbo:Country.
?x dbo:longName ?country.
}
ORDER BY ?country
LIMIT 100
I think WikiData (https://query.wikidata.org/) is much higher quality so likely to have a much better list of real countries and perhaps even a way to query for current countries (as opposed to “Gaul” etc.) but can’t drum up a query for that this week as I’m out west (Winton) on holidays, so doing this on the phone.
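A sketch of such a WikiData query is below; it assumes that "instance of sovereign state" (wdt:P31 / wd:Q3624078) is the intended notion of "current country", and excludes dissolved states via P576 (dissolution date):

```sparql
SELECT ?country ?countryLabel WHERE {
  ?country wdt:P31 wd:Q3624078 .                 # instance of: sovereign state
  FILTER NOT EXISTS { ?country wdt:P576 ?end }   # drop dissolved states ("Gaul" etc.)
  SERVICE wikibase:label { bd:serviceParam wikibase:language "en" . }
}
ORDER BY ?countryLabel
```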
Good to see exactMatch across to Geonames etc.
I’ll check with the OGC: perhaps either they have a good countries vocab or perhaps you can offload such a list to them and get a SPARQL endpoint out of it.
@rob-metalinkage, does the OGC have a countries vocab?
Future VocPrez versions will do multi-instance searching so your VP, if you keep upgrading (!) will be able to search directly across OGC vocabs without any effort.
OGC does not publish a countries vocabulary itself.
There are ISO and UNstats country codes. The SIRF project I ran at CSIRO loaded and cross-mapped these - but it's no longer active :-(
Country identification is a rather fraught political problem - we really need a UN solution to publishing this. SIRF identified 29 different country vocabularies or identifier schemes present in the UN data archive :-)
@KellyVance will we host this vocab on VocPrez? If so, could you please let me know when you have refreshed the server so I can inform SRA of the URI to use.
@DavidCrosswellGSQ my IP must have changed again and I can't access the graph to refresh the cache. Can you please oblige @geoderekh and push this update through?
| gharchive/pull-request | 2020-08-31T01:31:33 | 2025-04-01T06:44:18.025947 | {
"authors": [
"DavidCrosswellGSQ",
"KellyVance",
"geoderekh",
"nicholascar",
"rob-metalinkage"
],
"repo": "geological-survey-of-queensland/vocabularies",
"url": "https://github.com/geological-survey-of-queensland/vocabularies/pull/310",
"license": "CC-BY-4.0",
"license_type": "permissive",
"license_source": "github-api"
} |
817735882 | Fix schema registry parameters
The schema registry (SR) is required for the registration topic (connect), but the parameter is currently tied to the use of the env var for the server.
Make new env var to use/not use schema registry for data topics.
Will f/u with @pranav925 on resolution.
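A common shape for such a toggle is a boolean environment flag; the variable name below is hypothetical, pending the resolution with @pranav925:

```python
import os

def env_flag(name, default=False):
    """Interpret an environment variable as a boolean feature flag."""
    val = os.environ.get(name)
    if val is None:
        return default
    return val.strip().lower() in ("1", "true", "yes", "on")

# Hypothetical variable name; the worker would skip schema-registry
# serialization for data topics when this is off.
use_schema_registry = env_flag("USE_SCHEMA_REGISTRY", default=False)
```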
Closed by #6
| gharchive/issue | 2021-02-26T22:56:52 | 2025-04-01T06:44:18.049183 | {
"authors": [
"shinyfoil"
],
"repo": "geometry-labs/icon-kafka-worker",
"url": "https://github.com/geometry-labs/icon-kafka-worker/issues/5",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
300938275 | Build on v4.0 fails due to broken gmock link
The Fedora URL for gmock-1.7.0 doesn't work anymore, resulting in build failure.
I changed this to pick gmock from a different URL in test/googlemock.mk instead, but had verification issues because of the sha1 check (bypassed it temporarily).
wget --timeout=20 http://pkgs.fedoraproject.org/repo/pkgs/gmock/gmock-1.7.0.zip/073b984d8798ea1594f5e44d85b20d66/gmock-1.7.0.zip ||
curl --connect-timeout 20 -O http://pkgs.fedoraproject.org/repo/pkgs/gmock/gmock-1.7.0.zip/073b984d8798ea1594f5e44d85b20d66/gmock-1.7.0.zip ||
echo "Warning: Unable to download gmock archive" 2>&1 &&
touch gmock-1.7.0.zip
--2018-02-28 00:45:19-- http://pkgs.fedoraproject.org/repo/pkgs/gmock/gmock-1.7.0.zip/073b984d8798ea1594f5e44d85b20d66/gmock-1.7.0.zip
Resolving pkgs.fedoraproject.org (pkgs.fedoraproject.org)... 209.132.181.4
Connecting to pkgs.fedoraproject.org (pkgs.fedoraproject.org)|209.132.181.4|:80... connected.
HTTP request sent, awaiting response... 403 Forbidden
2018-02-28 00:45:19 ERROR 403: Forbidden.
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 282 100 282 0 0 4504 0 --:--:-- --:--:-- --:--:-- 4548
Error: The gmock archive does not have the correct SHA-1 checksum
make: *** [gmock-1.7.0/VERSION] Error 255
Error: The gmock archive does not have the correct SHA-1 checksum
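The checksum step that fails here can be reproduced by hand; a minimal stdlib sketch (file name illustrative) that computes a file's SHA-1 for comparison against the expected value:

```python
import hashlib

def sha1_of_file(path, chunk_size=1 << 16):
    """Stream a file through SHA-1 and return its hex digest."""
    h = hashlib.sha1()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# e.g. compare sha1_of_file("gmock-1.7.0.zip") against the checksum the
# build system expects before deciding to bypass the verification step.
```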
This issue was fixed with the following commit:
https://github.com/geopm/geopm/commit/fa3310d6e66aa15a1083386557f8dc5b43948ca9
Also if you need an archive of the version 0.4.0 source code that includes gmock you can download that here:
https://github.com/geopm/geopm/releases/download/v0.4.0/geopm-0.4.0.tar.gz
The archive we post with the release, which is the output of the make dist target, does not require network access to build (the gmock archive is included).
| gharchive/issue | 2018-02-28T08:50:52 | 2025-04-01T06:44:18.105570 | {
"authors": [
"cmcantalupo",
"tpatki"
],
"repo": "geopm/geopm",
"url": "https://github.com/geopm/geopm/issues/180",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
388411024 | Pass number of ranks/nodes to analysis launch() for testing
Instead of having the energy efficient test call num_rank_option() and num_node_option() on a temporary launcher, have the test pass these values through to the analysis.launch() call, which creates a launcher.
The launcher being passed to Analysis constructor could also move to parameters of launch()
analysis no longer needed for testing
| gharchive/issue | 2018-12-06T21:44:18 | 2025-04-01T06:44:18.107246 | {
"authors": [
"bakerbrandond",
"dianarg"
],
"repo": "geopm/geopm",
"url": "https://github.com/geopm/geopm/issues/444",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
851239224 | The package cannot be compiling
I tried to compile the package today, but there are some errors:
error: failed to run custom build command for gdal-sys v0.3.1
Caused by:
process didn't exit successfully: D:\VScode\rust\read_tif\target\debug\build\gdal-sys-21e0a3b3239f1c3a\build-script-build (exit code: 101)
--- stdout
cargo:rerun-if-env-changed=GDAL_STATIC
cargo:rerun-if-env-changed=GDAL_DYNAMIC
cargo:rerun-if-env-changed=GDAL_INCLUDE_DIR
cargo:rerun-if-env-changed=GDAL_LIB_DIR
cargo:rerun-if-env-changed=GDAL_HOME
cargo:rerun-if-env-changed=GDAL_VERSION
--- stderr
thread 'main' panicked at 'windows-gnu requires gdal_i.lib to be present in either $GDAL_LIB_DIR or $GDAL_HOME\lib.', C:\Users\10717.cargo\registry\src\github.com-1ecc6299db9ec823\gdal-sys-0.3.1\build.rs:131:17
note: run with RUST_BACKTRACE=1 environment variable to display a backtrace
Did you try to download GDAL and set those environment variables (at least GDAL_HOME and/or GDAL_INCLUDE_DIR and GDAL_LIB_DIR)? Windows binaries are a bit hard to find, but https://www.gisinternals.com/release.php should be fine. Make sure to download the one that matches your C compiler.
Did you try to download GDAL and set those environment variables (at least GDAL_HOME and/or GDAL_INCLUDE_DIR and GDAL_LIB_DIR)? ~Windows binaries are a bit hard to find, but https://www.gisinternals.com/release.php should be fine. Make sure to download the one that matches your C compiler.~ Looks like you might need a MinGW version instead.
How can I set those environment variables?
I did not download GDAL. I just added gdal = "0.7.2" to Cargo.toml.
Cargo.toml:
[dependencies]
gdal = "0.7.2"
You need to either compile or install/download a GDAL version compiled with MinGW (it might be easier to install VS Community, switch to the stable-msvc Rust toolchain, then download a MSVC GDAL), then you can set the environment variables like:
SET GDAL_HOME=D:\gdal
SET GDAL_INCLUDE_DIR=D:\gdal\include
SET GDAL_LIB_DIR=D:\gdal\lib
or similar.
After we upgraded to the latest from 0.6, we found that $env:GDAL_VERSION must be set or the gdal-sys build script will just quit (Windows)
it would be great if we could add a solution to the readme
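One way to make these variables stick per-project (instead of setting them in every shell) is Cargo's [env] table; the paths and version below are illustrative and should point at your own GDAL install:

```toml
# .cargo/config.toml -- illustrative values
[env]
GDAL_HOME = 'D:\gdal'
GDAL_INCLUDE_DIR = 'D:\gdal\include'
GDAL_LIB_DIR = 'D:\gdal\lib'
GDAL_VERSION = "3.8.4"   # the installed GDAL library's version, not the crate version
```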
May I ask how to do that? And is the version the Windows GDAL version, like "3.8.4", or the Cargo.toml dependency version, like "0.16.0"?
| gharchive/issue | 2021-04-06T09:30:48 | 2025-04-01T06:44:18.159237 | {
"authors": [
"WHULMQ",
"jdroenner",
"kl402401",
"ktmattson-alion",
"lnicola"
],
"repo": "georust/gdal",
"url": "https://github.com/georust/gdal/issues/179",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |