| added | created | id | metadata | source | text |
|---|---|---|---|---|---|
2025-04-01T06:37:49.078366
| 2021-09-24T18:07:01
|
1006707866
|
{
"authors": [
"erwinmombay",
"jridgewell",
"rcebulko",
"samouri"
],
"license": "CC-BY-4.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3402",
"repo": "ampproject/wg-performance",
"url": "https://github.com/ampproject/wg-performance/issues/96"
}
|
gharchive/issue
|
wg-performance Status Update 2021-09-27
cc @ampproject/wg-performance
What was accomplished
Lessons Learned
What's Next
What was accomplished
Cleanup of unused performance metrics (1, 2)
esbuild exp_c: https://github.com/ampproject/amphtml/pull/36163, https://github.com/ampproject/amphtml/pull/36105
Removed CC header from worker-dom: https://github.com/ampproject/worker-dom/pull/1097
Added benchmarking to bento-compiler 1, identified most expensive piece as being worker-dom.createElement, identified 3x speedup 2
bento-compiler integration fixes: https://github.com/ampproject/bento-compiler/pull/20
relanded build-system amp_config simplification: https://github.com/ampproject/amphtml/pull/36008
Lessons Learned
Class fields are much slower than setting values in the ctor, even though they are algorithmically faster (O(1) vs O(prototype chain)). See this benchmark; a minimal sketch of the two patterns follows below.
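For context, a minimal sketch of the two patterns being compared (illustrative names, not the benchmark itself):
class WithField {
  x = 0; // class field: defined as an own property on each instance at construction
}
class WithCtor {
  constructor() {
    this.x = 0; // plain assignment in the constructor body
  }
}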
What's Next
Removing fid-polyfill from AMP Cache
What was accomplished
automated rollouts of Bento Compiler service (google3)
various related compliance and security dances for above
performance debugging with @samouri
Lessons Learned
verifiable build-deploy-release systems are extremely complicated
What's Next
continue trying to satisfy security requirements for rollout
possibly some core/type-checking work
cc @ampproject/wg-performance
What was accomplished
get baseline performance numbers for simple amp-story examples. This allows us to interpret possible performance gains per environment (origin vs origin + optimizer vs cache vs plain html)
Lessons Learned
What's Next
finish amp geo SSR work
write doc on amp story css inlining
write doc on changes needed to the nodeJS amp toolbox optimizer for amp story optimizations
What was accomplished
Update CLS metric to 5s-window and 1s-session-gap: https://github.com/ampproject/amphtml/pull/36147
Fix adblock CLS regression: https://github.com/ampproject/amphtml/pull/36157
Lessons Learned
What's Next
Finishing abstract ops for Jolt
Image metadata
|
2025-04-01T06:37:49.085975
| 2019-07-12T08:14:16
|
467281370
|
{
"authors": [
"amsik",
"yulunyang"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3403",
"repo": "amsik/liquor-tree",
"url": "https://github.com/amsik/liquor-tree/issues/135"
}
|
gharchive/issue
|
Multi children
I have two child arrays (sections, parts) in my JSON. How am I supposed to handle this?
Thank you
{
"id": "jxk2ahca",
"unit": "1",
"name": "",
"nameEn": "Stationery",
"sections": [
{
"section": "1",
"name": "",
"nameEn": "",
"file": "",
"type": "folder",
"icon": "",
"parts": [
{
"part": "1",
"name": "",
"nameEn": "",
"file": "",
"type": "file",
"icon": ""
}
]
}
]
}
Hi! It might help you: https://amsik.github.io/liquor-tree/#Redefine-Structure-Example
How can I set it for children?
treeOptions: {
propertyNames: {
text: 'name',
children: 'sections' || 'parts'
}
},
it does not work
It will not work. You have to transform your data recursively before using the tree.
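For illustration, a minimal sketch of such a recursive transform, assuming the JSON shape from the question (text/children are liquor-tree's default node keys; rawData is a hypothetical name for the array being fed to the tree):
function toTreeNode(node) {
  // merge both child arrays ("sections" and "parts") into one "children" list
  var children = (node.sections || []).concat(node.parts || []).map(toTreeNode);
  return {
    text: node.name || node.nameEn, // fall back to the English name when name is empty
    children: children,
    data: node // keep the original record for later lookups
  };
}
var treeData = rawData.map(toTreeNode);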
|
2025-04-01T06:37:49.093182
| 2021-06-24T19:26:48
|
929547726
|
{
"authors": [
"elenaboal"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3404",
"repo": "amundsen-io/amundsen",
"url": "https://github.com/amundsen-io/amundsen/pull/1274"
}
|
gharchive/pull-request
|
feat: Making notices more general
Summary of Changes
Added wildcard functionality to notices (i.e. if we wanted to add a notice to every table in the shadow schema, we could do that with ..shadow.*) and dynamic HTML messages (i.e. if we wanted each of those notices to have a different message depending on which table it was)
Why
Part of the follow up work for the Notices feature detailed on the RFC: https://github.com/amundsen-io/rfcs/blob/master/rfcs/029-resource-notices.md#future-possibilities
Mentioned on the original PR: https://github.com/amundsen-io/amundsenfrontendlibrary/pull/957#issuecomment-802943900
Tests
Added notices tests to test the functionality of the wildcard and the dynamic message.
/ptal @itstemo @Golodhros #persistence-dev
remade pr here: https://github.com/amundsen-io/amundsen/pull/1280
|
2025-04-01T06:37:49.096866
| 2023-05-21T09:54:45
|
1718441895
|
{
"authors": [
"S-ishita",
"anmode"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3405",
"repo": "amupedia2021/Project-Amupedia",
"url": "https://github.com/amupedia2021/Project-Amupedia/pull/121"
}
|
gharchive/pull-request
|
Update README.md
Describe your changes
added readme badges
Screenshots - If Any (Optional)
Issue ticket number and link - If Any
Closes #108
Checklist before requesting a review
[x] I have performed a self-review of my code.
[x] Starred the repository.
[x] Added My Name to the Contributor's List.
[x] Followed the repository's Contributing Guidelines.
[x] I ran the app and tested it locally to verify that it works as expected.
Thanks @S-ishita. Nice
|
2025-04-01T06:37:49.098668
| 2018-08-30T19:20:54
|
355720223
|
{
"authors": [
"ipelupessy"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3406",
"repo": "amusecode/amuse",
"url": "https://github.com/amusecode/amuse/issues/293"
}
|
gharchive/issue
|
what should is_none test for?
is_none currently results in true for none_unit or directly derived units, but is sometimes used
where one really wants to test: unit.base == none.base (instead of unit.is_none())
(this arises when testing e.g. units.rad and units.deg)
should be revisited
Let's close; if a real reason to change it comes up, it's easy to fix.
|
2025-04-01T06:37:49.114753
| 2020-05-25T18:33:12
|
624436227
|
{
"authors": [
"codecov-commenter",
"esiivola",
"mmahsereci"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3407",
"repo": "amzn/emukit",
"url": "https://github.com/amzn/emukit/pull/304"
}
|
gharchive/pull-request
|
bugfix: in the preferential batch BO example the acquisition function…
…s were previously minimized, not maximized
By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.
Codecov Report
Merging #304 into master will not change coverage.
The diff coverage is n/a.
@@ Coverage Diff @@
## master #304 +/- ##
=======================================
Coverage 89.26% 89.26%
=======================================
Files 120 120
Lines 3932 3932
Branches 454 454
=======================================
Hits 3510 3510
Misses 329 329
Partials 93 93
Continue to review full report at Codecov.
Legend - Click here to learn more
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 2db676b...fe23f04. Read the comment docs.
Hi Eero, are you sure this is happening? GradientAcquisitionOptimizer flips the sign of the acquisition function in its _optimize method.
Dear Maren. Thank you for taking a look! I am positive about the bugfix being the right thing to do. GradientAcquisitionOptimizer flips the sign and then minimizes the flipped function (= maximizes the original function). In the preferential batch Bayesian optimization example, all acquisition function evaluations were ALREADY returned as flipped versions (negative qEI) so we ended up minimizing the acquisition functions, which is not what we want.
Before building the wrapper for emukit, I was performing the optimization of the acquisition myself using "acq_fun_optimizer"-function (https://github.com/esiivola/emukit/blob/fe23f046b7e4b92d065bb71212a5e6f082432451/emukit/examples/preferential_batch_bayesian_optimization/pbbo/acquisitions/expectation_acquisition.py#L235), which minimized the "evaluate"-function (https://github.com/esiivola/emukit/blob/fe23f046b7e4b92d065bb71212a5e6f082432451/emukit/examples/preferential_batch_bayesian_optimization/pbbo/acquisitions/expectation_acquisition.py#L287).
However, this "evaluate" function is maximized by emukit. The bugfix flips this function inside the emukit wrapper.
I see, thanks for the explanation!
|
2025-04-01T06:37:49.152440
| 2024-03-26T13:13:48
|
2208212887
|
{
"authors": [
"brandonroberts",
"tleveque23"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3408",
"repo": "analogjs/analog",
"url": "https://github.com/analogjs/analog/issues/967"
}
|
gharchive/issue
|
Cannot find package '@ngtools/webpack' on macOS when starting the project
Please provide the environment you discovered this bug in.
macOS 14.4.1
Node.js v20.11.0
Which area/package is the issue in?
create-analog
Description
I followed the Getting Started guide.
yarn create analog
Go in the directory
yarn
yarn start
Please provide the exception or error you saw
failed to load config from /Users/thierryleveque/Documents/dev/testProject/analog-project/vite.config.ts
An unhandled exception occurred: Cannot find package '@ngtools/webpack' imported from <EMAIL_ADDRESS>
Other information
No response
I would be willing to submit a PR to fix this issue
[ ] Yes
[X] No
This only happens when using yarn; you have to explicitly add the @ngtools/webpack package to package.json.
yarn add @ngtools/webpack --dev
We could maybe detect this when generating a new app, but I don't want to add it for everyone
And now it is @angular-devkit/build-angular that is missing...
Adding it also fixes the problem.
But this should be added to the documentation, on the getting started page.
|
2025-04-01T06:37:49.156683
| 2024-09-01T19:54:10
|
2499654988
|
{
"authors": [
"dawedawe",
"mre"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3409",
"repo": "analysis-tools-dev/static-analysis",
"url": "https://github.com/analysis-tools-dev/static-analysis/pull/1614"
}
|
gharchive/pull-request
|
Add F# tools Fantomas and ionide-analyzers
[x] I have not changed the README.md directly.
Much appreciated. Thanks!
|
2025-04-01T06:37:49.198362
| 2021-04-04T16:10:41
|
849906774
|
{
"authors": [
"Artelis",
"dalgarins"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3410",
"repo": "anboralabs/intellij-firebase-highlighter",
"url": "https://github.com/anboralabs/intellij-firebase-highlighter/issues/57"
}
|
gharchive/issue
|
Array indexing with a variable
function githubExample(value) {
return request.resource.data[value];
}
The plugin highlights the "value" indexer with: '[' or number expected, got 'value'
@Artelis thanks for testing, issue will be fixed on version: 2.5.3
|
2025-04-01T06:37:49.256843
| 2017-08-30T11:32:54
|
253966638
|
{
"authors": [
"JsseL",
"jedie"
],
"license": "BSD-3-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3412",
"repo": "andersinno/django-model-publisher-ai",
"url": "https://github.com/andersinno/django-model-publisher-ai/issues/8"
}
|
gharchive/issue
|
clone parler translations -> IntegrityError: Key (language_code, master_id) already exists.
I get an IntegrityError like:
duplicate key value violates unique constraint "news_news_translation_language_code_37dcef35ba6d4d72_uniq"
DETAIL: Key (language_code, master_id)=(de, 43) already exists.
while "publish" action. Code point is here:
File "django-model-publisher-ai/publisher/models.py" in publish
122. self.clone_translations(draft_obj, publish_obj)
File "django-model-publisher-ai/publisher/models.py" in clone_translations
216. translation.save()
Any idea?
Ah! It's related to aldryn_translation_tools.models.TranslatedAutoSlugifyMixin !
The error comes if TranslatedAutoSlugifyMixin is before PublisherModel ...
Working order seems to be:
class MyModel(TranslatableModelMixin, PublisherModel, TranslatedAutoSlugifyMixin):
Sounds like you got it fixed? Closing.
Yes. I'm also working on a PublisherParlerManager() and PublisherParlerModel() for easier usage...
Hm. No, it's not fixed :(
The slug will not be set correctly if TranslatedAutoSlugifyMixin comes after PublisherModel ...
Any idea?
|
2025-04-01T06:37:49.339831
| 2016-10-19T20:27:08
|
184065242
|
{
"authors": [
"andrebeat",
"coveralls"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3413",
"repo": "andrebeat/scala-pool",
"url": "https://github.com/andrebeat/scala-pool/pull/12"
}
|
gharchive/pull-request
|
Close pool
Add method to close a pool, properly disposing of each pooled object, and releasing any resources associated with the pool (e.g. background timer threads).
Fixes #11.
Coverage increased (+2.2%) to 89.431% when pulling 3a2743bf316f0cc2096b63885fe0606f601a8e49 on close-pool into 9401aa0b825d0ba729d936b0f9c471976383b53c on master.
Coverage decreased (-0.7%) to 86.555% when pulling ef0c0c075c7fb20cdd6ee156752b94e8dea33925 on close-pool into 9401aa0b825d0ba729d936b0f9c471976383b53c on master.
|
2025-04-01T06:37:49.431886
| 2024-07-31T19:47:00
|
2440824469
|
{
"authors": [
"andrew-codechimp",
"rohankapoorcom"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3416",
"repo": "andrew-codechimp/HA-Battery-Notes",
"url": "https://github.com/andrew-codechimp/HA-Battery-Notes/issues/1910"
}
|
gharchive/issue
|
Adjust Zooz ZSE42 battery types
Checklist
[X] I have filled out the template to the best of my ability.
[X] This only contains 1 feature request (if you have multiple feature requests, open one feature request for each feature request).
[X] This issue is not a duplicate feature request of previous feature requests.
Is your feature request related to a problem? Please describe.
Zooz has multiple versions of the ZSE42. Older models use the CR2032 battery. Newer models use the CR2450 battery. This seems to be getting changed regularly in the device listing as people with different models fill out new device requests.
Most recently this was changed in https://github.com/andrew-codechimp/HA-Battery-Notes/issues/1496
Describe the solution you'd like
I'd like there to be a way to either differentiate between the two devices (and store both in the device listing) or to prevent updates to this device from being populated automatically into the device listing. In the current architecture, this device needs to be kept at "MANUAL"
Related discussions have happened in #1266, https://github.com/andrew-codechimp/HA-Battery-Notes/pull/1228#issuecomment-1978201980
Describe alternatives you've considered
Alternatively, users can maintain their own device listing, but that seems like a bad idea.
Additional context
N/A
This device was recently discussed here
At the moment if the make, model and hardware version are identical I cannot have the library differentiate.
I have set it to MANUAL and will keep it like that until perhaps the integration owner is able to make a differentiation with the new Model ID coming in 2024.8 of Home Assistant.
Battery Notes can only work with the data presented, raise this with the integration owner and see if you can get them to use the new Model ID, or at least have a suffix to the existing model.
Thanks for the link to the discussion section (the one place I didn't check).
Does setting it to manual prevent it from being updated automatically by a new device request issue?
The reason I opened this issue is that it used to work in manual mode for me for the past couple of months and then recently I got a new ZSE42 and it was suggesting an incorrect battery again. Maybe my library version was out of date since I see the current main branch has it set correctly.
It was changed back to Manual only a few days ago, you should get the library updated daily unless there was a problem the last time it tried to update.
I try to pay attention to device updates to sanity check them but occasionally they slip through, I may have to think about auto discarding changes to manual devices if this happens often.
I may have to think about auto discarding changes to manual devices if this happens often.
That was my suggestion with this feature request.
Thanks for the explanation!
|
2025-04-01T06:37:49.461956
| 2023-11-29T08:18:48
|
2016050738
|
{
"authors": [
"Jmennius",
"andykingit",
"ari-gen-x",
"maddocker"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3422",
"repo": "andrewjroth/requests-auth-aws-sigv4",
"url": "https://github.com/andrewjroth/requests-auth-aws-sigv4/issues/17"
}
|
gharchive/issue
|
New release with recent changes
Can we have a release that would include recent changes?
Thank you!
I second this. Is there any housekeeping needed for a new release that someone would be able to contribute toward?
+1
+1 :pray:
@andrewjroth ping, releasing shouldn't take too much time 🙏🏼
|
2025-04-01T06:37:49.477345
| 2017-04-08T22:55:26
|
220433218
|
{
"authors": [
"andrey-git",
"joshluster",
"mihalski"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3423",
"repo": "andrey-git/home-assistant-custom-ui",
"url": "https://github.com/andrey-git/home-assistant-custom-ui/issues/5"
}
|
gharchive/issue
|
Is there any possibility of a group dimmer custom control?
I have a number of lights which are made up of multiple bulbs, such as this:
Umm.. Don't know why it rotated 90 degrees.. Imagine them hanging off the ceiling rather than a wall :P
Is there any possibility of a light group dimming custom control that works in the same manner as the individual light controls do?
Handling this in Custom UI would be complicated.
Currently HA only supports custom state-cards, not custom cards.
The functionality is also not present in the more-info window.
What should the non-dimmable switches do when a group slider is moved?
What should the slider show if the lights have different values?
I think the best solution for this is a custom backend component.
Or maybe a new light platform that controls and optionally hides other lights.
Actually I see that more-info already has this:
If all group members are of the same domain, it shows controls for the first member and changing them affects all members.
Because the more-info already has this, is this something that can be implemented?
Like I allow a single slider on a light state-card to control brightness - you would like a single slider on a group state-card to control brightness of all group members?
Correct, I just discovered that I can have lights I have grouped in hue show up as a light instead of a group.. But for rooms that have multiple lights from different sources this option is still available in the more info area.
|
2025-04-01T06:37:49.503167
| 2020-01-02T08:31:12
|
544487769
|
{
"authors": [
"Ken-HUST",
"reox"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3424",
"repo": "androguard/androguard",
"url": "https://github.com/androguard/androguard/issues/777"
}
|
gharchive/issue
|
Can Androguard generate CFG for a pure java method directly?
Given the source code of a Java method, can Androguard generate the CFG of the method directly?
short answer: no
long answer: probably... The CFG is generated using the basic blocks - which are generated from the AST. So, if you manage to create an AST for the Java code, you can probably generate the CFG from it.
|
2025-04-01T06:37:49.516392
| 2018-09-20T18:08:27
|
362301854
|
{
"authors": [
"XinyueZ",
"neiljaywarner",
"tiembo"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3425",
"repo": "android/sunflower",
"url": "https://github.com/android/sunflower/issues/191"
}
|
gharchive/issue
|
enhancement: Retrofit with jake wharton's coroutines adapter
very simple
try/catch http exception/catch regular exception would be super nice
it is so nice
https://github.com/JakeWharton/retrofit2-kotlin-coroutines-adapter
This demo loads local data; however, I am also looking forward to a remote data feature in the future.
i see your point... But Jake is there at google, Retrofit is in jetpack guide, and coroutines are leaving experimental in 1.3, and there's several articles about the merit of moving from Rx to Coroutines, even (i think) from Chris Banes.. I think..
@neiljaywarner Yeah, RX should be replaced by coroutines in future Android development, in my view. Actually, there's a corner of the current codebase which could already use coroutines.
https://github.com/googlesamples/android-sunflower/blob/master/app/src/main/java/com/google/samples/apps/sunflower/utilities/AppExecutors.kt#L27
@neiljaywarner yes, there are plans to incorporate retrofit and coroutines into Sunflower. Keep an eye on #208
Added in #625
|
2025-04-01T06:37:49.541612
| 2020-08-31T08:52:42
|
689038969
|
{
"authors": [
"nic0lette",
"santosh-kumar-kaushal"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3426",
"repo": "android/uamp",
"url": "https://github.com/android/uamp/issues/400"
}
|
gharchive/issue
|
Progress bar displays with no network logo when internet connection is down.
Actual: Progress bar displays with no network logo.
Expected: Progress bar should not be shown.
Fixed with #411
|
2025-04-01T06:37:49.544835
| 2017-12-15T13:42:33
|
282431139
|
{
"authors": [
"andy-5",
"greggzst"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3427",
"repo": "andy-5/wslgit",
"url": "https://github.com/andy-5/wslgit/issues/11"
}
|
gharchive/issue
|
VSCode 1.19 and git repository constantly reopening
The issue is that when I either open or close a file, or even change the file tab I'm currently working on, my git repository is reopened.
Here I changed only two files but it says I've got 6 changes:
I just uploaded a new release that should fix this (based on changes from the fork https://github.com/purocean/wslproxy).
|
2025-04-01T06:37:49.561799
| 2022-01-12T17:13:29
|
1100590677
|
{
"authors": [
"Bear-03",
"andylokandy"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3428",
"repo": "andylokandy/byte",
"url": "https://github.com/andylokandy/byte/pull/7"
}
|
gharchive/pull-request
|
Fix "Issus" typo in README
Also move the dot after "Pull requests" inside the bold text just like it is in the "Issues" section.
Thanks!
|
2025-04-01T06:37:49.567974
| 2021-04-27T09:05:27
|
868636495
|
{
"authors": [
"Grokzen",
"JasonDevops"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3429",
"repo": "andymccurdy/redis-py",
"url": "https://github.com/andymccurdy/redis-py/pull/1477"
}
|
gharchive/pull-request
|
Add a client info from func for parse_slowlog_get
Pull Request check-list
Please make sure to review and check all of these items:
[ ] Does $ tox pass with this change (including linting)?
[ ] Does travis tests pass with this change (enable it first in your forked repo and wait for the travis build to finish)?
[x] Is the new or changed code fully tested?
[ ] Is a documentation update included (if this change modifies existing APIs, or introduces new ones)?
NOTE: these things are not required to open a PR and can be done
afterwards / while the PR is open.
Description of change
When I collect the slowlog of Redis,
I find that the logs lack source client info, like this:
r = redis.StrictRedis()
r.config_set(
"slowlog-max-len", 1000,
"slowlog-log-slower-than", 1 * 10
)
s = r.slowlog_get()
print(s)
response
[{'id': 17, 'start_time':<PHONE_NUMBER>, 'duration': 23, 'command': b'CONFIG SET slowlog-max-len 1000'}, {'id': 16, 'start_time':<PHONE_NUMBER>, 'duration': 35, 'command': b'SLOWLOG GET'}
We need to get the client info, as redis-cli provides it.
Ain't those fields redis-server 4.0 and above only? Taking this code and running this slowlog method on anything before that will then break, as it is not written in a backwards-compatible fashion.
|
2025-04-01T06:37:49.630514
| 2024-08-07T15:19:38
|
2453753149
|
{
"authors": [
"givison"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3430",
"repo": "angelolab/ark-analysis",
"url": "https://github.com/angelolab/ark-analysis/issues/1150"
}
|
gharchive/issue
|
Can't run template notebooks "ModuleNotFoundError: No module named 'torch.utils._import_utils'"
Hi Angelo folks!
Trying to get started running the analysis notebooks. Followed the install directions in the readme, tried cloning & using conda and installing from pip and getting the same error when I try to load the libraries at the top of both notebook 1 & notebook 2:
ModuleNotFoundError Traceback (most recent call last)
Cell In[1], line 12
8 from matplotlib import rc_file_defaults
10 from ark.phenotyping import (pixel_cluster_utils, pixel_meta_clustering,
11 pixel_som_clustering, pixie_preprocessing)
---> 12 from ark.utils import data_utils, example_dataset, plot_utils
13 from ark.utils.metacluster_remap_gui import (MetaClusterGui,
14 colormap_helper,
15 metaclusterdata_from_files)
File ~/miniconda3/envs/ark_env/lib/python3.10/site-packages/ark/utils/data_utils.py:25
23 from anndata.experimental import AnnCollection
24 from anndata.experimental.multi_files._anncollection import ConvertType
---> 25 from torchdata.datapipes.iter import IterDataPipe
26 from typing import Iterator, Optional
27 try:
File ~/miniconda3/envs/ark_env/lib/python3.10/site-packages/torchdata/datapipes/__init__.py:11
7 from torch.utils.data import DataChunk, functional_datapipe
9 from torchdata import _extension # noqa: F401
---> 11 from . import iter, map, utils
13 __all__ = ["DataChunk", "functional_datapipe", "iter", "map", "utils"]
16 from torchdata import deprecation_warning
File ~/miniconda3/envs/ark_env/lib/python3.10/site-packages/torchdata/datapipes/iter/__init__.py:79
68 from torchdata.datapipes.iter.transform.callable import (
69 BatchAsyncMapperIterDataPipe as BatchAsyncMapper,
70 BatchMapperIterDataPipe as BatchMapper,
(...)
76 ThreadPoolMapperIterDataPipe as ThreadPoolMapper,
77 )
78 from torchdata.datapipes.iter.util.bz2fileloader import Bz2FileLoaderIterDataPipe as Bz2FileLoader
---> 79 from torchdata.datapipes.iter.util.cacheholder import (
80 EndOnDiskCacheHolderIterDataPipe as EndOnDiskCacheHolder,
81 InMemoryCacheHolderIterDataPipe as InMemoryCacheHolder,
82 OnDiskCacheHolderIterDataPipe as OnDiskCacheHolder,
83 )
84 from torchdata.datapipes.iter.util.combining import (
85 IterKeyZipperIterDataPipe as IterKeyZipper,
86 MapKeyZipperIterDataPipe as MapKeyZipper,
87 RoundRobinDemultiplexerIterDataPipe as RoundRobinDemultiplexer,
88 UnZipperIterDataPipe as UnZipper,
89 )
90 from torchdata.datapipes.iter.util.cycler import CyclerIterDataPipe as Cycler, RepeaterIterDataPipe as Repeater
File ~/miniconda3/envs/ark_env/lib/python3.10/site-packages/torchdata/datapipes/iter/util/cacheholder.py:24
21 except ImportError:
22 portalocker = None
---> 24 from torch.utils._import_utils import dill_available
25 from torch.utils.data.datapipes.utils.common import _check_unpickable_fn
27 from torch.utils.data.graph import traverse_dps
ModuleNotFoundError: No module named 'torch.utils._import_utils'
Quick update here, I was able to unblock myself by installing torchdata 0.7.1! Seems like an incompatibility between torchdata 0.8.0 (newest release as of a month ago) and the torch==2.0.0 pin here
|
2025-04-01T06:37:49.647582
| 2016-03-16T18:49:30
|
141367047
|
{
"authors": [
"Nixtron",
"frederic",
"ltfish",
"rhelmot",
"zardus"
],
"license": "bsd-2-clause",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3431",
"repo": "angr/simuvex",
"url": "https://github.com/angr/simuvex/issues/12"
}
|
gharchive/issue
|
AttributeError: 'long' object has no attribute 'size'
When I use the CONCRETIZE option, the following error happens:
Traceback (most recent call last):
File "./angr-test.py", line 142, in <module>
dot = analyze_paths(p, p.addr, dot)
File "./angr-test.py", line 40, in analyze_paths
parent_p.step()
File "/home/user/tools/code/angr-dev/angr/angr/path.py", line 432, in step
self._make_sim_run()
File "/home/user/tools/code/angr-dev/angr/angr/path.py", line 457, in _make_sim_run
self._run = self._project.factory.sim_run(self.state, **self._run_args)
File "/home/user/tools/code/angr-dev/angr/angr/factory.py", line 123, in sim_run
r = self.sim_block(state, addr=addr, **block_opts)
File "/home/user/tools/code/angr-dev/angr/angr/factory.py", line 69, in sim_block
last_stmt=last_stmt)
File "/home/user/tools/code/angr-dev/simuvex/simuvex/vex/irsb.py", line 62, in __init__
self._handle_irsb()
File "/home/user/tools/code/angr-dev/simuvex/simuvex/vex/irsb.py", line 92, in _handle_irsb
self._handle_statements()
File "/home/user/tools/code/angr-dev/simuvex/simuvex/vex/irsb.py", line 208, in _handle_statements
s_stmt = translate_stmt(self.irsb, stmt_idx, self.last_imark, self.state)
File "/home/user/tools/code/angr-dev/simuvex/simuvex/vex/statements/__init__.py", line 31, in translate_stmt
s.process()
File "/home/user/tools/code/angr-dev/simuvex/simuvex/vex/statements/base.py", line 26, in process
self._execute()
File "/home/user/tools/code/angr-dev/simuvex/simuvex/vex/statements/wrtmp.py", line 6, in _execute
data = self._translate_expr(self.stmt.data)
File "/home/user/tools/code/angr-dev/simuvex/simuvex/vex/statements/base.py", line 36, in _translate_expr
e = translate_expr(expr, self.imark, self.stmt_idx, self.state)
File "/home/user/tools/code/angr-dev/simuvex/simuvex/vex/expressions/__init__.py", line 14, in translate_expr
e.process()
File "/home/user/tools/code/angr-dev/simuvex/simuvex/vex/expressions/base.py", line 38, in process
self._execute()
File "/home/user/tools/code/angr-dev/simuvex/simuvex/vex/expressions/get.py", line 24, in _execute
self._post_process()
File "/home/user/tools/code/angr-dev/simuvex/simuvex/vex/expressions/base.py", line 59, in _post_process
if self.expr.size() != self.size_bits():
AttributeError: 'long' object has no attribute 'size'
In https://github.com/angr/simuvex/blob/master/simuvex/vex/expressions/base.py#L94 make_concrete(self), the expr member is replaced by a long variable. So all subsequent calls to expr.to_bv() or expr.size() won't work.
Maybe we should create a BVV object:
self.expr = self.state.se.BVV(concrete_value, self.size_bits())
But I am not sure if we should add constraints (#L92 & #L93) based on the BVV object or the original long value.
It's definitely a bug. Can you please print out self.expr?
I've added two debug prints in https://github.com/angr/simuvex/blob/master/simuvex/vex/expressions/base.py#L94
def _post_process(self):
    if self._post_processed: return
    self._post_processed = True

    if o.SIMPLIFY_EXPRS in self.state.options:
        self.expr = self.state.se.simplify(self.expr)

    self.state.add_constraints(*self._constraints)

    print "_post_process(before make_concrete) self.expr = ", self.expr
    if self.state.se.symbolic(self.expr) and o.CONCRETIZE in self.state.options:
        self.make_concrete()
    print "_post_process(after make_concrete) self.expr = ", self.expr

    if self.expr.size() != self.size_bits():
        raise SimExpressionError("Inconsistent expression size: should be %d but is %d" % (self.size_bits(), self.expr.size()))
Output:
_post_process(before make_concrete) self.expr = <BV32 0x7fff0000>
_post_process(after make_concrete) self.expr = <BV32 0x7fff0000>
_post_process(before make_concrete) self.expr = <BV32 0x7fff0000>
_post_process(after make_concrete) self.expr = <BV32 0x7fff0000>
_post_process(before make_concrete) self.expr = <BV32 0x24>
_post_process(after make_concrete) self.expr = <BV32 0x24>
_post_process(before make_concrete) self.expr = <BV32 0x7ffeffdc>
_post_process(after make_concrete) self.expr = <BV32 0x7ffeffdc>
_post_process(before make_concrete) self.expr = <BV32 0x7ffeffdc>
_post_process(after make_concrete) self.expr = <BV32 0x7ffeffdc>
_post_process(before make_concrete) self.expr = <BV32 0x7fff0000>
_post_process(after make_concrete) self.expr = <BV32 0x7fff0000>
_post_process(before make_concrete) self.expr = <BV32 0x4>
_post_process(after make_concrete) self.expr = <BV32 0x4>
_post_process(before make_concrete) self.expr = <BV32 0x7ffefffc>
_post_process(after make_concrete) self.expr = <BV32 0x7ffefffc>
_post_process(before make_concrete) self.expr = <BV32 BVS('reg_40_0_32', None, None, None, False)>
_post_process(after make_concrete) self.expr = 0
Traceback (most recent call last):
File "./angr-test.py", line 147, in <module>
dot = analyze_paths(p, p.addr, dot)
File "./angr-test.py", line 40, in analyze_paths
parent_p.step()
File "/home/user/tools/code/angr-dev/angr/angr/path.py", line 432, in step
self._make_sim_run()
File "/home/user/tools/code/angr-dev/angr/angr/path.py", line 457, in _make_sim_run
self._run = self._project.factory.sim_run(self.state, **self._run_args)
File "/home/user/tools/code/angr-dev/angr/angr/factory.py", line 123, in sim_run
r = self.sim_block(state, addr=addr, **block_opts)
File "/home/user/tools/code/angr-dev/angr/angr/factory.py", line 69, in sim_block
last_stmt=last_stmt)
File "/home/user/tools/code/angr-dev/simuvex/simuvex/vex/irsb.py", line 62, in __init__
self._handle_irsb()
File "/home/user/tools/code/angr-dev/simuvex/simuvex/vex/irsb.py", line 92, in _handle_irsb
self._handle_statements()
File "/home/user/tools/code/angr-dev/simuvex/simuvex/vex/irsb.py", line 208, in _handle_statements
s_stmt = translate_stmt(self.irsb, stmt_idx, self.last_imark, self.state)
File "/home/user/tools/code/angr-dev/simuvex/simuvex/vex/statements/__init__.py", line 31, in translate_stmt
s.process()
File "/home/user/tools/code/angr-dev/simuvex/simuvex/vex/statements/base.py", line 26, in process
self._execute()
File "/home/user/tools/code/angr-dev/simuvex/simuvex/vex/statements/wrtmp.py", line 6, in _execute
data = self._translate_expr(self.stmt.data)
File "/home/user/tools/code/angr-dev/simuvex/simuvex/vex/statements/base.py", line 36, in _translate_expr
e = translate_expr(expr, self.imark, self.stmt_idx, self.state)
File "/home/user/tools/code/angr-dev/simuvex/simuvex/vex/expressions/__init__.py", line 14, in translate_expr
e.process()
File "/home/user/tools/code/angr-dev/simuvex/simuvex/vex/expressions/base.py", line 38, in process
self._execute()
File "/home/user/tools/code/angr-dev/simuvex/simuvex/vex/expressions/get.py", line 24, in _execute
self._post_process()
File "/home/user/tools/code/angr-dev/simuvex/simuvex/vex/expressions/base.py", line 59, in _post_process
if self.expr.size() != self.size_bits():
AttributeError: 'long' object has no attribute 'size'
Hello, do you have a testcase that triggers this?
Not sure with the latest updates, but with the latest version from pip at least:
import simuvex
import pyvex
import archinfo
s = simuvex.SimState()
s.options.add(simuvex.o.CONCRETIZE)
irsb = pyvex.IRSB("488b06".decode('hex'),0,archinfo.ArchAMD64())
si = simuvex.SimIRSB(s,irsb)
This does, in fact, happen with the latest version! I'll look into it right now.
...yep, the CONCRETIZE option is completely 100% broken and will always make this happen. One sec, I'll push a fix.
Sorry we took so long on this, the fix was incredibly simple.
But I am not sure if we should add constraints (#L92 & #L93) based on the BVV object or the original long value.
You were absolutely on the right track! The answer to your question is that it doesn't matter, claripy will automatically coerce arguments to the right type if one of them is a python int and the other is a bitvector.
|
2025-04-01T06:37:49.655328
| 2016-02-04T14:22:09
|
131362210
|
{
"authors": [
"Pirozek",
"jacobscarter"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3432",
"repo": "angular-ui-tree/angular-ui-tree",
"url": "https://github.com/angular-ui-tree/angular-ui-tree/issues/708"
}
|
gharchive/issue
|
Horizontal sorting of last tree level - is it possible?
Hi, I am creating an app where the main component is a tree view to which the user drags components from a toolbox. Everything works fine, but I need to allow the user to sort items on the last level of the tree (as marked on the image - http://prntscr.com/9yx30m)
Is it possible to define one level of the tree as a horizontal sort? I saw the undocumented feature data-horizontal, but it doesn't seem to be working. I also tried to use another library for sorting (like ng-sortable) but it collides with angular-ui-tree directives (scopes maybe?).
I don't need a complete solution, just a hint at how I should do this, or if you encountered a similar problem, what was your solution?
Closing this issue due to inactivity. See #815 for more information.
Please create a new issue if this still occurs with the current version of the component.
|
2025-04-01T06:37:49.657270
| 2014-01-30T12:46:17
|
26595633
|
{
"authors": [
"BobbieBarker",
"nebril"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3433",
"repo": "angular-ui/bootstrap",
"url": "https://github.com/angular-ui/bootstrap/issues/1703"
}
|
gharchive/issue
|
$position returns wrong results for scaled components
I am scaling my widget with css:transform because it would be inserted via iframe to other sites.
I am using tooltips in it, but when I started scaling I noticed that their positioning was wrong. I dug a little and I think that the $position service is responsible for the bug.
Here is plunk showing what I am talking about:
http://plnkr.co/edit/LOzAdR64snNulWnatl8b?p=preview
This issue is well over a year old and was against version .10
|
2025-04-01T06:37:49.660679
| 2016-01-24T19:56:57
|
128420780
|
{
"authors": [
"shyamal890",
"wesleycho"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3434",
"repo": "angular-ui/bootstrap",
"url": "https://github.com/angular-ui/bootstrap/issues/5337"
}
|
gharchive/issue
|
BUG: Popover closes itself even though popover-is-open is set to true
Following are the options set:
uib-popover="I am activated manually"
popover-is-open="true"
popover-trigger="none"
popover-append-to-body="true"
When a route change occurs (ui-router), the popover closes itself.
Version Info:
Angular UI Bootstrap version: 1.1.0
UI-Router version: 0.2.15
Angularjs version: 1.4.7
Demonstration in PLUNKER
Can anyone please have a look at this: @wesleycho @icfantv
Looks like the culprit is https://github.com/angular-ui/bootstrap/blob/master/src/tooltip/tooltip.js#L544-L553 - this seems like odd code to me, and history seems to indicate that it is old as 041261b5366aa44a0c0314e8262c91d1d867f4d1 is the original commit.
I'm in favor of just removing this outright, the scope.$on('$destroy', ...) listener should be enough to catch this.
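A minimal sketch of the $destroy-based cleanup being referred to (hideTooltip stands in for the tooltip's actual teardown logic, which this sketch does not reproduce):
scope.$on('$destroy', function () {
  // tear the tooltip down when the host scope is destroyed,
  // e.g. on a ui-router state change that removes the view
  hideTooltip();
});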
|
2025-04-01T06:37:49.664466
| 2015-05-13T14:47:43
|
76010654
|
{
"authors": [
"harshapps",
"iosdev-republicofapps",
"jkevingutierrez",
"malavancs",
"minutephp",
"ramiy",
"vickyrathee"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3435",
"repo": "angular-ui/ui-ace",
"url": "https://github.com/angular-ui/ui-ace/issues/109"
}
|
gharchive/issue
|
Placeholder text?
Hi,
How do you add placeholder text to empty editors?
This Stackoverflow answer here shows one way to do it
Will it be a good idea to incorporate this into the library? What are your thoughts?
Thanks
+1
+1
+1
+1
Any ETA? Thanks.
+1
+1
+1
|
2025-04-01T06:37:49.667019
| 2014-06-19T13:54:53
|
36080226
|
{
"authors": [
"eddiemonge",
"websirnik"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3436",
"repo": "angular-ui/ui-router",
"url": "https://github.com/angular-ui/ui-router/issues/1150"
}
|
gharchive/issue
|
Dynamic-params and Back button
Testing out "dynamic-params" branch.
My use-case is slideshow.
.state('presentation', {
url: '/:presentation_id',
templateUrl: '/partials/presentation.html',
controller: "PresentationCtrl"
})
.state('presentation.slide', {
url: '/:slide_id',
templateUrl: '/partials/slide.html',
controller: "SlideCtrl",
params: {
slide_id: {
value: null,
dynamic: true
},
presentation_id: { // For some reason I need this here for dynamic params to work. It's fine, it's not the question
value: null,
dynamic: false
},
}
})
From /presentation-id I open /presentation-id/slide-1,
then /presentation-id/slide-1 -> /presentation-id/slide-2
Then when I press the Back button in the browser from /presentation-id/slide-2 I get back to /presentation-id.
I would expect to go back to /presentation-id/slide-1.
https://gist.github.com/eddiemonge/f6a58169c2846731a1eb
|
2025-04-01T06:37:49.669177
| 2013-10-14T13:35:01
|
20956717
|
{
"authors": [
"ADmad",
"eddiemonge"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3437",
"repo": "angular-ui/ui-router",
"url": "https://github.com/angular-ui/ui-router/issues/504"
}
|
gharchive/issue
|
Url params leaking to unrelated states.
Using the angular-1.2 branch I have setup states/routes similar to this:
$stateProvider.
state('foo', {
url: '/foo/:date',
templateUrl: 'views/foo.html',
controller: 'FooCtrl'
}).
state('bar', {
url: '/bar/:date',
templateUrl: 'views/bar.html',
controller: 'BarCtrl'
});
I have menu links:
<a ui-sref="foo">Foo</a> //generates href="/#foo/"
<a ui-sref="bar">Bar</a> //generates href="/#bar/"
Assuming I am on url #/foo/2013, if I click the link for "Bar" it takes me to #/bar/2013 while I expect it to take me to url #/bar/. The route param of unrelated states should not be carried over.
https://gist.github.com/eddiemonge/f6a58169c2846731a1eb
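For reference, a common workaround is to pass the parameter explicitly in ui-sref so nothing is inherited from the current state (a sketch, not a confirmed fix for this issue):
<a ui-sref="foo({date: null})">Foo</a>
<a ui-sref="bar({date: null})">Bar</a>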
|
2025-04-01T06:37:49.682998
| 2015-08-03T08:46:55
|
98698727
|
{
"authors": [
"Nadrendion",
"RodolpheGohard",
"budhajeewa",
"randikagit",
"user378230"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3438",
"repo": "angular-ui/ui-select",
"url": "https://github.com/angular-ui/ui-select/issues/1120"
}
|
gharchive/issue
|
Only a object property can be bound to model
The following works:
HTML:
<ui-select
data-ng-model="foo.month"
data-ng-change="change()"
>
...
</ui-select>
JS:
$scope.foo = {month:null};
$scope.change = function () {
console.log($scope.foo.month); //"buz", the expected output.
};
But following doesn't:
HTML:
<ui-select
data-ng-model="bar"
data-ng-change="change()"
>
...
</ui-select>
JS:
$scope.bar = null;
$scope.change = function () {
console.log($scope.bar); //undefined
};
Why can only object properties be bound to the model? This is not consistent with native Angular behaviour.
The following StackOverflow question addresses this issue as well: http://stackoverflow.com/questions/25937098/ng-model-is-not-getting-changed-in-ui-select
I can confirm this. When a scope variable is assigned to the ui-select model, it gives undefined.
Isn't that the common AngularJS way?
If you try to assign a value to a variable in a scope that doesn't have that variable, the variable will be created on the current scope. This happens even if the variable is present in a parent scope.
But if a parent variable is declared as an object and the object contains a variable that you try to assign a new value to in a child scope, then the parent object will be changed.
@Nadrendion I don't think it's the "common AngularJS way".
As explained above, the variable is defined in the $scope beforehand. The only difference is that it works when the variable is an object ($scope.bar = {month:null}), and it doesn't work when the variable is just null ($scope.bar = null).
I don't think this is the "common AngularJS way" because both work when we use a plain ol' <select>.
@budhajeewa A <select> is usually in the same scope as the scope variable, but here you have a child scope.
Let's say you have this layout:
<div ng-controller="parentCtrl">
{{bar}}
<div ng-controller="childCtrl">
{{bar}}
</div>
</div>
Both printouts will print the same value, but if you run some function in the childCtrl that assigns a new value to bar then only the inner bar will change.
@Nadrendion : What does that have to do with the issue I explained above?
See https://help.github.com/articles/github-flavored-markdown/#fenced-code-blocks on how to use code blocks in GitHub Flavored Markdown.
The ui-select has its own scope.
So for reading a parent variable that you send in, that isn't declared as a variable of an object, it works just fine, but when the ui-select controller is modifying the variable, it will only create a new variable with that name on its own isolated scope.
That is why you need to refer to the variable in an object from your own scope when sending it to the ui-select for modification.
Nope, that doesn't explain why $scope.foo.bar works but $scope.foo doesn't when assigned as the ui-select's model.
@budhajeewa I would like to know why it doesn't explain it, but that is probably something that should be explained outside of this context.
@Nadrendion : Both foo.month and bar are variables in the same scope. Why would ui-select be able to bind to one but not other? The only difference is that foo.month is an object property and bar a plain ol' variable.
<select> works with either. So, the issue is with ui-select.
It is not the scope itself that is the issue, but the variable location.
No matter if you have your own parent and child-scopes or uses the isolated scope of the ui-select as a childscope it will work the same as soon as you involve a childscope and try to modify a parent scope variable.
No new scope is introduced when using a regular <select>, so that is why there is no issue with that.
This is all I know:
I can bind ui-select's model to a foo.month, given that $scope.foo = { month: null };.
I can not bind ui-select's model to bar, given that $scope.bar = null.
All I see is ui-select being able to bind to a object property but not to a variable, both in same scope.
Yes, and all I know is that the behavior you are experiencing is working as intended.
I think we will have to agree to disagree on this matter.
I think ui-select should adopt the method how the plain select works.
I'm not sure I fully understand the object of the debate, but this is clearly a bug that needs fixing, binding ng-model to the inside scope doesn't make any sense.
@RodolpheGohard : You can safely ignore the debate, which I think was not necessary.
Closing, as ng-model requires a dot as per http://stackoverflow.com/questions/17606936/angularjs-dot-in-ng-model and the ui-select wiki.
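For reference, a minimal sketch of the scope-inheritance behaviour behind that dot rule (the comments describe what a child scope, such as ui-select's, does with each binding):
// on the parent scope:
$scope.bar = null; // primitive
$scope.foo = { month: null }; // object
// in a child scope, `bar = 'buz'` creates a NEW `bar` on the child scope,
// shadowing the parent's, so the parent never sees the change;
// `foo.month = 'buz'` reads `foo` through the prototype chain and mutates
// the parent's object, which is why only the "dotted" binding works here.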
|
2025-04-01T06:37:49.686534
| 2017-06-14T13:42:49
|
235883775
|
{
"authors": [
"aurelienlt"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3439",
"repo": "angular-ui/ui-select",
"url": "https://github.com/angular-ui/ui-select/issues/2013"
}
|
gharchive/issue
|
buggy focus behavior if no choice result when append-to-body="true" with ngAnimate
When append-to-body="true" and ngAnimate are present, the focus of the search input field is not working if there is no choice result. Clicking anywhere will then select this input.
I've located the issue in uiSelectController.js:
if (ctrl.items.length > 0) {
ctrl.$animate.on('enter', container[0], animateHandler);
} else {
ctrl.$animate.on('removeClass', searchInput[0], animateHandler);
}
'removeClass' is never fired, thus the function animateHandler is never called. Calling animateHandler directly temporarily fixes the issue, but I don't think this is an acceptable solution.
if (ctrl.items.length > 0) {
ctrl.$animate.on('enter', container[0], animateHandler);
} else {
// ctrl.$animate.on('removeClass', searchInput[0], animateHandler);
animateHandler(searchInput[0], 'start');
}
Here is a minimal plunkr to test the behavior: http://plnkr.co/edit/D1A7L1dbJCNLQgDR8Kfx?p=preview
It also seems that the other event, 'enter', is buggy since it is called once per choice, causing focusSearchInput to be called repeatedly (which can last for several seconds if there are many choices).
|
2025-04-01T06:37:49.689255
| 2020-10-17T23:31:13
|
723867633
|
{
"authors": [
"SrinivasKothapalli",
"alan-agius4"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3440",
"repo": "angular/angular-cli",
"url": "https://github.com/angular/angular-cli/issues/19110"
}
|
gharchive/issue
|
Application from Angular 8.2 to Angular 10 upgrade
Error :
ng Serve
Unhandled Promise rejection: Component Component1 is not part of any NgModule or the module has not been imported into your module.
Component Component2 is not part of any NgModule or the module has not been imported into your module.
None of the components are getting included or identified.
I included them all in modules and everything worked as expected in Angular 8.2, but after the upgrade I get the above issue when loading in the browser, though not during the build.
Please help
Thanks for reporting this issue. However, you didn't provide sufficient information for us to understand and reproduce the problem. Please check out our submission guidelines to understand why we can't act on issues that are lacking important information.
If the problem persists, please file a new issue and ensure you provide all of the required information when filling out the issue template.
|
2025-04-01T06:37:49.698688
| 2021-03-16T01:05:30
|
832318782
|
{
"authors": [
"alan-agius4",
"light8822"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3441",
"repo": "angular/angular-cli",
"url": "https://github.com/angular/angular-cli/issues/20283"
}
|
gharchive/issue
|
Error: TypeError: Cannot read property 'createStringLiteral' of undefined
🐞 Bug report
Command (mark with an x)
[ ] new
[x] build
[x] serve
[ ] test
[ ] e2e
[ ] generate
[ ] add
[ ] update
[ ] lint
[ ] extract-i18n
[ ] run
[ ] config
[ ] help
[ ] version
[ ] doc
Is this a regression?
Yes, the previous version in which this bug was not present was: ....
I was using the command ionic build and this error came:
Description
Error: TypeError: Cannot read property 'createStringLiteral' of undefined
at replacementVisitor (C:\ionic5_capacitor\node_modules@ngtools\webpack\src\transformers\import_factory.js:186:31)
at visitNodes (C:\ionic5_capacitor\node_modules\typescript\lib\typescript.js:73088:48)
at Object.visitEachChild (C:\ionic5_capacitor\node_modules\typescript\lib\typescript.js:73266:156)
A clear and concise description of the problem...
The Ionic App using Angular CLI doesn't compile
🔬 Minimal Reproduction
🔥 Exception or Error
Error: TypeError: Cannot read property 'createStringLiteral' of undefined
at replacementVisitor (C:\ionic5_capacitor\node_modules\@ngtools\webpack\src\transformers\import_factory.js:186:31)
at visitNodes (C:\ionic5_capacitor\node_modules\typescript\lib\typescript.js:73088:48)
at Object.visitEachChild (C:\ionic5_capacitor\node_modules\typescript\lib\typescript.js:73266:156)
🌍 Your Environment
Ionic:
Ionic CLI : 5.0.0 (C:\Users\Usuario\AppData\Roaming\npm\node_modules\ionic)
Ionic Framework : @ionic/angular 5.1.1
@angular-devkit/build-angular : 0.1102.4
@angular-devkit/schematics : 9.1.6
@angular/cli : 9.1.6
@ionic/angular-toolkit : 2.3.3
Capacitor:
Capacitor CLI : 2.4.7
@capacitor/core : 2.4.7
Cordova:
Cordova CLI : 9.0.0 <EMAIL_ADDRESS>
Cordova Platforms : not available
Cordova Plugins : not available
Utility:
cordova-res : 0.15.3
native-run : not installed
System:
NodeJS : v14.16.0 (C:\Program Files\nodejs\node.exe)
npm : 6.14.8
OS : Windows 10
Anything else relevant?
Hi,
It looks like you are using <EMAIL_ADDRESS> which is only compatible with Angular and Angular CLI 11. Please downgrade this package to 0.901.15
If the problem persists, please file a new issue and ensure you provide all of the required information when filling out the issue template including a minimal reproduction.
You can read here why this is needed. A good way to make a minimal repro is to create a new app via ng new repro-app and adding the minimum possible code to show the problem. Then you can push this repository to github and link it here.
This might be related to your directory structure so its really important to get an accurate repro to diagnose this.
Thank You it worked perfectly
|
2025-04-01T06:37:49.708308
| 2021-06-11T20:16:23
|
919173830
|
{
"authors": [
"alan-agius4",
"arobinson",
"blitzionian"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3442",
"repo": "angular/angular-cli",
"url": "https://github.com/angular/angular-cli/issues/21122"
}
|
gharchive/issue
|
Angular 12 ng serve does not update libraries
🐞 Bug report
Command (mark with an x)
[ ] new
[ ] build
[x] serve
[ ] test
[ ] e2e
[ ] generate
[ ] add
[ ] update
[ ] lint
[ ] extract-i18n
[ ] run
[ ] config
[ ] help
[ ] version
[ ] doc
Is this a regression?
Yes, Angular 11.0.5 correctly responded to updates in a symlinked shared library
Description
When using `ng serve` with a shared library symlinked into the application, updates to that library are not picked up, and the serve must be quit and restarted for each change. Due to really slow builds with large projects, this is a show-stopping bug for us in Angular 12
🔬 Minimal Reproduction
In tab/console 1:
git clone https://github.com/arobinson/AngularLibraryUpdateIssue
cd AngularLibraryUpdateIssue/ExampleLibraries
npm i
ng b --watch
In tab/console 2:
cd AngularLibraryUpdateIssue/AngularLibraryUpdateIssue/
npm i
ng s -o
Open http://localhost:4200/ in a browser
Verify page rendered as:
Example library:
Library component version 1
Open ExampleLibraries/projects/example-library/src/lib/example-library.component.ts
Change versionNumber from 1 to 2:
export class ExampleLibraryComponent implements OnInit {
versionNumber = 2
See that the library is rebuilt
See that ng serve refreshes the page
But the version number is still 1, the new changes were not included in the updated Javascript
🔥 Exception or Error
🌍 Your Environment
Angular CLI: 12.0.3
Node: 14.17.0
Package Manager: npm 6.14.13
OS: darwin x64
Angular:
...
Package Version
------------------------------------------------------
@angular-devkit/architect 0.1200.3 (cli-only)
@angular-devkit/core 12.0.3 (cli-only)
@angular-devkit/schematics 12.0.3 (cli-only)
@schematics/angular 12.0.3 (cli-only)
Anything else relevant?
I have run into the same problem. My temporary workaround is to set the env variable 'export NG_BUILD_CACHE=false'.
There is already a discussion about this topic in #20962, which seems to be the same problem.
This is expected because node_modules are considered immutable in Webpack 5 unless the "version" property changes. If you want to opt-out from this behaviour you need to use NG_BUILD_CACHE=false environment variable.
Duplicate of #20962
|
2025-04-01T06:37:49.711667
| 2024-01-23T17:16:25
|
2096587645
|
{
"authors": [
"alan-agius4",
"everton-auth"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3443",
"repo": "angular/angular-cli",
"url": "https://github.com/angular/angular-cli/issues/26934"
}
|
gharchive/issue
|
Global styles for angular libraries
Command
config
Description
Utilizing global styles in an Angular library packaged by ng-packagr is at the moment impossible, or at least undocumented.
It would be nice to have global styles as in normal Angular applications, so that developers won't need to import e.g. bootstrap into each component separately.
Describe the solution you'd like
When specifying styleIncludePaths in ng-package.json, there should be an option that resolves and inlines css for each component globally.
Describe alternatives you've considered
Be able to create a global style file for the library.
You can use assets for this, which is covered in https://angular.io/guide/creating-libraries#managing-assets-in-a-library.
However, if you are using UI frameworks, importing them directly is discouraged; instead they should be explicitly imported in the application's styles.css to avoid duplicate CSS resulting from duplicate imports of the same stylesheet, for example when both a library and the application depend on Bootstrap.
|
2025-04-01T06:37:49.715844
| 2017-05-13T08:13:29
|
228457759
|
{
"authors": [
"delasteve",
"marekdano",
"sumitarora"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3444",
"repo": "angular/angular-cli",
"url": "https://github.com/angular/angular-cli/issues/6305"
}
|
gharchive/issue
|
Calling external APIs from angular in .NET Core app returns 404.
Repro steps.
I set up a .NET Core project following this tutorial. I added Angular CLI to the project and built it with the command "ng build --prod". The Angular app is served as static files from the wwwroot folder in the .NET Core app.
I also set up a proxy server config to serve other APIs. When I start the command "ng serve --proxy-config proxy.conf.json", the app starts and calling the other APIs within the app works fine. The issue is that when I start the .NET Core app with "dotnet run", the Angular app loads and runs fine, but calling the APIs returns 404. Do you know why that is? Am I missing anything?
Hello, we reviewed this issue and determined that it doesn't fall into the bug report or feature request category. This issue tracker is not suitable for support requests, please repost your issue on StackOverflow using tag angular-cli.
If you are wondering why we don't resolve support issues via the issue tracker, please check out this explanation.
Closing as above.
|
2025-04-01T06:37:49.720041
| 2017-07-18T07:44:02
|
243627515
|
{
"authors": [
"Meligy",
"filipesilva",
"smiera"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3445",
"repo": "angular/angular-cli",
"url": "https://github.com/angular/angular-cli/issues/7027"
}
|
gharchive/issue
|
Feature request: onBeforeBuild and onAfterBuild
Bug Report or Feature Request (mark with an x)
- [ ] bug report -> please search issues before submitting
- [x] feature request
Desired functionality.
It would be great if we could set up npm run scripts when using ng build -w.
For example, add an optional build key to angular-cli.json:
"build": { "onAfterBuild": "npm run after.js", "onBeforeBuild": "npm run before.js" }
When the watcher detects changes, the scripts defined in angular-cli.json would run.
The only thing you cannot do with npm scripts today is having hooks into incremental builds (watch stuff). May I ask what's the use case for the watch pre/post hooks please?
Cheers,
We use our own internationalization. All translations are saved in one file, translate.json.
"title": {
"en": "...",
"de": "...",
"it": "...",
"ru": "..."
}
For each country we build a different app from the same codebase, using a different environment.ts.
After each build, we need to read the translate.json file, take the translations for one language only, and generate one file in the dist folder (for example en.json).
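For illustration, a minimal sketch of what such a post-build script could look like (a hypothetical after.ts; the APP_LANG environment variable and the file layout are assumptions, not part of the proposal):
import { readFileSync, writeFileSync } from "fs";
// APP_LANG is an assumed per-country build variable, e.g. "en", "de", ...
const lang = process.env.APP_LANG || "en";
const translations = JSON.parse(readFileSync("translate.json", "utf8"));
// Keep only the strings for the current language, e.g. { "title": "..." }
const single: Record<string, string> = {};
for (const [key, byLang] of Object.entries(translations)) {
  single[key] = (byLang as Record<string, string>)[lang];
}
writeFileSync(`dist/${lang}.json`, JSON.stringify(single, null, 2));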
Build hooks are something we're thinking of adding for an addon system, but at this moment we don't have any details to share.
|
2025-04-01T06:37:49.736038
| 2017-05-03T09:13:14
|
225924690
|
{
"authors": [
"cexbrayat",
"filipesilva",
"hccampos",
"mgol"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3446",
"repo": "angular/angular-cli",
"url": "https://github.com/angular/angular-cli/pull/6160"
}
|
gharchive/pull-request
|
feat(@angular/cli): improve unit test performance
This PR uses a new Karma plugin to enable vendor bundles in unit tests,
increasing rebuild performance.
On a medium-sized project, rebuild times were 15x smaller (16.5s to 0.9s).
Fix #5423
Cool stuff! Any benchmarks you could share?
@hccampos I forgot to fill in the PR message with the commit message, it's updated now. TLDR 15x to 30x improvement, probably less for big projects.
@filipesilva 👍 I just tried it, and it does start the tests faster on rebuild. But to be honest, this does not completely fix #5423 (at least in my experiment): the progress report still goes way slower on rebuild than on the initial test:
Chrome 58.0.3029 (Mac OS X 10.12.4): Executed 102 of 102 SUCCESS (5.774 secs / 5.737 secs)
Chrome 58.0.3029 (Mac OS X 10.12.4): Executed 102 of 102 SUCCESS (28.755 secs / 28.73 secs)
Chrome 58.0.3029 (Mac OS X 10.12.4): Executed 102 of 102 SUCCESS (27.963 secs / 27.365 secs)
@cexbrayat bear in mind that we don't display how long building took, only how long karma took to run the tests. I haven't noticed delays while running the tests though, so that's a relevant point.
Does this difference in karma running time happen all the time, or only on some projects?
@filipesilva Yes I know, I was just pointing out that (I think) #5423 has 2 parts. This PR solved the first one (rebuild took a long time, and it's now faster thanks to your PR 👍 ) but not the second part that @wardbell was pointing out: the progress report is ticking slower on rebuilds.
I'm witnessing this on pretty much all my projects.
@cexbrayat I went through a couple of rebuilds on my machine with both a brand new project and an existing medium-sized project and couldn't notice any significant deviation in the reported karma re-test speeds (0.183/0.163/0.155 and 19.403/19.595/20.556).
In your case there's a whopping 500% difference though. Are you sure you're not experiencing Chrome background throttling in your tests? I know Chrome recently introduced that, and it's been pretty bad for unit tests.
Good point @filipesilva. @cexbrayat make sure that during your testing the Chrome window is visible all the time, never covered by any other window.
If this was the problem, it will soon stop being an issue - in about a month Chrome 59 will be released with support for headless browsing and Karma already supports that. For now you'd have to be using Chrome Canary or beta.
@filipesilva @mgol wasn't background throttling disabled in Karma a while ago? But you're right, it does indeed speed up my tests if my browser is not in the background 🤔 So don't mind my comment, I may be experiencing a Chrome/Mac issue (possibly related to https://bugs.chromium.org/p/chromium/issues/detail?id=605498).
@cexbrayat Chrome has added a --disable-background-timer-throttling flag and karma-chrome-launcher uses it but, unfortunately, it doesn't fully unthrottle all the things.
The bug about that has been open for more than a year but, alas, nothing seems to be done on this front.
@mgol I gave a try to the disable-renderer-backgrounding flag and it does appear to solve my issue. Thx you and @filipesilva for putting me on right track 👍
So I opened a bug on karma-chrome-launcher to have this flag included and offered a PR. Let's see how it goes.
Meanwhile, for those experimenting the same issue than me, you can switch your karma.conf.js to:
customLaunchers: {
chromeBackground: {
base: 'Chrome',
flags: ['--disable-renderer-backgrounding']
}
},
browsers: ['chromeBackground'],
@filipesilva sorry for polluting this thread 🙏
@cexbrayat not at all, this is valuable info for people looking through this next time!
I wonder if we should add that renderer flag, but probably it's better to instead add a --headless flag for ng test/e2e when Chrome 59 comes out.
@filipesilva The PR I submitted to karma-chrome-launcher has been merged and a new version released: no need for us to tweak the karma.conf.js, we can just bump the version of the launcher.
I submitted a PR to do so (see #6216 )
@cexbrayat that's pretty cool, kudos!
|
2025-04-01T06:37:49.757146
| 2014-12-10T07:49:14
|
51528333
|
{
"authors": [
"dtritus",
"petebacondarwin"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3447",
"repo": "angular/angular.js",
"url": "https://github.com/angular/angular.js/issues/10397"
}
|
gharchive/issue
|
Empty hash at the end of URL causes page reload in HTML5 mode
For example, If I open http://localhost/#, page is reloaded to http://localhost/.
Looks like issue happens here: https://github.com/angular/angular.js/blob/master/src/ng/location.js#L882
When http://localhost/# loaded $location.absUrl() returns http://localhost/ and $browser.url($location.absUrl(), true) called, which causes page reloading.
@dritus - would you like to try to fix this? We need a unit test that demonstrates the issue and then the fix, which is to strip empty hashes from the initialUrl before testing.
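For illustration, the stripping step could look something like this (a sketch only, not the actual patch):
// Remove a trailing empty hash so the initial URL comparison doesn't trigger a reload.
const stripEmptyHash = (url: string): string => url.replace(/#$/, "");
console.log(stripEmptyHash("http://localhost/#")); // "http://localhost/"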
I can try to fix this, but right now I don't have time.
|
2025-04-01T06:37:49.762207
| 2014-12-28T18:25:38
|
53006385
|
{
"authors": [
"Narretz",
"jakub-g",
"maksimr",
"phdesign"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3448",
"repo": "angular/angular.js",
"url": "https://github.com/angular/angular.js/issues/10590"
}
|
gharchive/issue
|
minErr: Make optional link on errors.angularjs.org
It's useful when you write tests and output test results to the terminal.
On small displays, the query parameters for errors.angularjs.org occupy a large part of the display, and you have to switch to the screen where the tests run and scroll to the beginning of the error message.
That doesn't seem unreasonable, although I am not sure how you would tell minErr to not append the whole stack.
Running into this problem myself. As a workaround I'm piping test output through grep
grunt test | grep -v http://errors.angularjs.org
@phdesign I followed your advice, but just realized this is not a very good solution for most usages.
I put this into my package.json to have npm test:
"scripts": {
"test": "karma start --single-run | grep -v errors.angularjs.org",
}
Normally, when you run the test (karma start --single-run) and it fails, karma exits with non-zero exit code.
However, when you pipe to grep, the exit code will be 0 even if the tests failed! So suddenly all the tools, like build hooks, that relied on npm test returning zero only for passing tests (and non-zero when there are issues, to immediately stop the build) are broken
Duplicate of #14744, and the URL length will be configurable in the future: https://github.com/angular/angular.js/pull/15707
|
2025-04-01T06:37:49.765250
| 2017-03-06T16:27:44
|
212175141
|
{
"authors": [
"Narretz",
"ajsb85"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3449",
"repo": "angular/angular.js",
"url": "https://github.com/angular/angular.js/issues/15779"
}
|
gharchive/issue
|
$http.head does not return all headers
$http.head does not return all headers
This is probably a CORS problem. AngularJS doesn't modify or filter the headers that come out of the XHR call: http://stackoverflow.com/questions/8945587/xmlhttprequest-getallresponseheaders-not-returning-all-the-headers
For the future, please take note:
Hi, your issue reads like a support question. This github repo is only for bug reports and feature requests. Please try one of the listed Support Channels. Thanks!
|
2025-04-01T06:37:49.771234
| 2017-04-12T04:54:32
|
221151524
|
{
"authors": [
"gkalpak",
"ppham27"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3450",
"repo": "angular/angular.js",
"url": "https://github.com/angular/angular.js/issues/15907"
}
|
gharchive/issue
|
linky filter does not handle certain punctuation well
I'm submitting a ...
[x] bug report
[ ] feature request
[ ] other (Please do not submit support requests here (see above))
Current behavior:
Links like (www.example.com): include ): as part of the link even though it is punctuation.
Expected / new behavior:
): should not be part of the link.
Minimal reproduction of the problem with instructions:
Type (www.example.com): in the example https://docs.angularjs.org/api/ngSanitize/filter/linky
Angular version: 1.6.4
Browser: [all]
Anything else:
Closing as duplicate of #7213.
I just realized this is something different from #7213. Yet the main problem is discussed in #7213, which is that the linky filter currently disallows certain characters that it should allow (per the spec), but aligning it with the spec would break people's expectations with regard to links in plain text.
There is also #13699, which discusses a more general problem, which is that the RegExp used in linky is pretty broken anyway :grin:
I'll keep this closed. Let's move the discussion to #7213 and/or #13699.
|
2025-04-01T06:37:49.977106
| 2016-08-18T19:22:05
|
171979809
|
{
"authors": [
"lfarroco",
"paulogr"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3451",
"repo": "angular/angularfire2",
"url": "https://github.com/angular/angularfire2/issues/450"
}
|
gharchive/issue
|
Auth toPromise() didn't work.
When I subscribe to authentication it works pretty well:
this.authService.af.auth.subscribe(auth => {
console.log(auth); // works as expected
});
but if I use toPromise(), the callback is never called:
this.authService.af.auth.toPromise().then(auth => {
console.log(auth); // this line is never called;
});
What am I doing wrong?
Thanks!
I placed a breakpoint and it looks like the method has not been implemented in this object yet. If you want to perform an action just once, you might try to unsubscribe in the first response (note: unsubscribe via the subscription returned by subscribe(), not on the observable itself):
const subscription = this.af.auth.subscribe(authData => {
  // do stuff
  subscription.unsubscribe();
});
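A note on why toPromise() never fires here: toPromise() only resolves once the source observable completes, and the auth stream never completes on its own, so the promise stays pending forever. A minimal sketch of forcing completion with the first emission (RxJS 5-style patch operators assumed, matching this era of angularfire2; names are illustrative):
import { Observable } from "rxjs/Observable";
import "rxjs/add/operator/first";
// A stand-in for the never-completing auth stream (illustrative only).
const auth$ = new Observable<string>(observer => observer.next("user-123"));
// first() completes the stream after one emission, so toPromise() can resolve.
auth$.first().toPromise().then(auth => console.log(auth)); // logs "user-123"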
That said, I'm gonna close this issue.
Thank you.
|
2025-04-01T06:37:49.979982
| 2019-05-29T11:27:08
|
449751280
|
{
"authors": [
"anieshvishnu",
"jelbourn"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3452",
"repo": "angular/components",
"url": "https://github.com/angular/components/issues/16148"
}
|
gharchive/issue
|
Angular Material snack-bar Direction Type aliases is missing
Documentation Feedback
The MatSnackBarConfig property 'direction: Direction' is not explained; the type Direction is undocumented. It would be helpful if it were defined in the Type Aliases as mentioned below.
Direction
Possible values for direction on MatSnackBarConfig.
type Direction = 'ltr' | 'rtl';
Affected documentation page:
https://material.angular.io/components/snack-bar/api#type_aliases
Duplicate of #18414
|
2025-04-01T06:37:49.982175
| 2021-03-31T13:29:24
|
846670736
|
{
"authors": [
"Maximaximum",
"jelbourn"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3453",
"repo": "angular/components",
"url": "https://github.com/angular/components/issues/22377"
}
|
gharchive/issue
|
docs-bug(TYPOGRAPHY): Docs should use the 2018 typography config format
Documentation Feedback
Currently the docs mention typography levels defined by the outdated 2014 Material Design spec. However, the Angular Material scss API seems to support both formats (2014 and 2018). While the 2014 format might still be useful for ongoing legacy projects (and it would be useful to mention this option in the docs), the 2018 format should be the primary format in the docs (even though internally Angular Material might still convert the 2018 configs into 2014 format). The users creating new projects shouldn't be forced to use the outdated format, especially given that the new format is supported anyway, just the docs need to be updated.
Affected documentation page: https://material.angular.io/guide/typography#what-is-typography
Duplicate of #12974
|
2025-04-01T06:37:49.990437
| 2024-12-09T11:18:06
|
2726744120
|
{
"authors": [
"Totati",
"crisbeto",
"janek64"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3454",
"repo": "angular/components",
"url": "https://github.com/angular/components/issues/30150"
}
|
gharchive/issue
|
bug(Timepicker): Integration with Datepicker does not work with FormControl
Is this a regression?
[ ] Yes, this behavior used to work in the previous version
The previous version in which this bug was not present was
No response
Description
When a Timepicker and Datepicker are bound to the same FormControl with a Date type, selecting a new value on either input will cause the data of the other input to be reset/removed. For example, when a date is already selected and then the time is selected, the date portion of the value will be reset to the current date.
The documentation only demonstrates the integration via ngModel. Consequently, it is unclear to me whether it should also work with FormControl. Based on the description, I would have expected it. If it is explicitly not supported, I suggest that a corresponding hint should be added since using Reactive Forms is one of the most common ways for managing the values of inputs.
Reproduction
StackBlitz link: https://stackblitz.com/edit/components-issue-starter-qszwe9ry?file=src%2Fapp%2Fexample%2Fexample.component.ts
Steps to reproduce:
Select a date that is not the current date (e.g tomorrow). The displayed value will correctly display the selected date.
Select any time. The displayed value will display the current date and the selected time instead of the selected date.
Select a date that is not the current date. The displayed date will display 0:00 as the time instead of selected time.
Expected Behavior
Selections in the Datepicker change the date portion of the FormControl, but keep the selected time.
Selections in the Timepicker change the time portion of the FormControl, but keep the selected date.
Actual Behavior
Selections in the Datepicker change the date portion of the FormControl and discard the selected time.
Selections in the Timepicker change the time portion of the FormControl and discard the selected date.
Environment
Angular: 19.0.3
CDK/Material: 19.0.3
Browser(s): Firefox 133, Chrome 131
Operating System (e.g. Windows, macOS, Ubuntu): Windows 11
Duplicate of https://github.com/angular/components/issues/30135, https://github.com/angular/angular/issues/58522
Closing as a duplicate.
|
2025-04-01T06:37:50.040013
| 2018-04-26T07:19:44
|
317906032
|
{
"authors": [
"JohnnyRuiGe",
"andrewseguin",
"baihech"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3455",
"repo": "angular/material2",
"url": "https://github.com/angular/material2/issues/11022"
}
|
gharchive/issue
|
About development specification
feature request
Please see Paginator examples,
git address,
(page)="pageEvent = $event" code is not conforming to the development specification,here need a function,not an attribute。
You've changed the API again again again again.
damn! it seems like a plaything. Do as one pleases!
Please follow the issue template to help us understand your feature request. It's not clear to me what you are asking for
|
2025-04-01T06:37:50.043971
| 2018-10-25T14:00:27
|
373962770
|
{
"authors": [
"crisbeto",
"jhonToni",
"mightypenguin"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3456",
"repo": "angular/material2",
"url": "https://github.com/angular/material2/issues/13811"
}
|
gharchive/issue
|
MATERIAL SIDENAV NOT WORKING
Bug, feature request, or proposal:
I am developing a web app for different devices: desktop, tablet, and phone.
When I used the sidenav and MediaMatcher (import { MediaMatcher } from '@angular/cdk/layout';), calling sidenav.toggle() did not open the sidenav
What is the expected behavior?
What is the current behavior?
What are the steps to reproduce?
Link of the bugged app: Editor Url
What is the use-case or motivation for changing an existing behavior?
Which versions of Angular, Material, OS, TypeScript, browsers are affected?
Is there anything else we should know?
Sidenav works for me so this is more of a tech support issue.
I suggest joining the Gitter.im community for material 2 and asking for help there.
Please keep GitHub issues for bug reports / feature requests. Better avenues for troubleshooting / questions are stack overflow, gitter, mailing list, etc.
I have posted the question on the stackoverflow web site.
Stackoverflow question
|
2025-04-01T06:37:50.052100
| 2018-12-28T18:35:52
|
394695481
|
{
"authors": [
"AKlaus",
"DanielP42",
"crisbeto",
"geyuqiu",
"houh60",
"izerozlu",
"philmayfield"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3457",
"repo": "angular/material2",
"url": "https://github.com/angular/material2/issues/14668"
}
|
gharchive/issue
|
[mat-sort] causing warning when running unit tests: WARN: 'The "longpress" event cannot be bound because Hammer.JS is not loaded and no custom loader has been specified.'
What is the expected behavior?
Expect to not see a warning running a test
What is the current behavior?
Seeing a warning running a unit test
What are the steps to reproduce?
I can't seem to figure out how to run a Jasmine test with StackBlitz, but I copied the example from the sorting section and created this unit test.
Which does pass, but throws the warning:
WARN: 'The "longpress" event cannot be bound because Hammer.JS is not loaded and no custom loader has been specified.'
Which versions of Angular, Material, OS, TypeScript, browsers are affected?
Angular 7.1.4 (down to 7.0.0)
Material 7.2.0
Jasmine 3.3.0
Karma 3.1.4
Is there anything else we should know?
It's because the sort header is using some gestures from Hammer.js. You should be able to work around it by providing a dummy Hammer loader:
providers: [{
  provide: HAMMER_LOADER,
  // a promise that never resolves is enough here; new Promise() with no
  // executor would throw a TypeError
  useValue: () => new Promise(() => {})
}]
That indeed worked for me, thank you very much @crisbeto! Closing this issue.
If you use jest for testing you can import 'hammerjs' in your jest.ts file.
Adding import 'hammerjs'; into the test.ts solves the problem.
The one which appears in angular.json under:
"test": {
"builder": "@angular-devkit/build-angular:karma",
"options": {
"main": "src/test.ts",
...
Adding HammerJs to the scripts section of angular.json (as @philmayfield suggested above) didn't help.
@crisbeto adding the code in app.module.ts unfortunately did not solve the issue for me :(
That precise suggestion (from @philmayfield) resolved my problem. That's weird.
I think you have to try both ways: if it doesn't work when you import 'hammerjs' in test.ts, then try importing 'hammerjs' in app.module.ts.
The first way didn't work for me, but the second did.
|
2025-04-01T06:37:50.068038
| 2016-07-16T19:34:32
|
165944167
|
{
"authors": [
"PBM42",
"agungsb",
"mgechev",
"sathishvj",
"zoechi"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3458",
"repo": "angular/mobile-toolkit",
"url": "https://github.com/angular/mobile-toolkit/issues/87"
}
|
gharchive/issue
|
'No provider for RouterOutletMap!' error on adding "@angular/router": "3.0.0-alpha.8"
I had some working code that displayed the UI correctly. But I had all my code in the main app.component.ts (including the HTML in the template member).
Then I included routing ("@angular/router": "3.0.0-alpha.8"). I wrote pretty much the same code as in another app with routing. However, I get the error: 'No provider for RouterOutletMap!'.
On searching I found out another person had a misconfigured webpack. I'm using system.js and mine seems to be ok.
My angular2 version is: 2.0.0-rc.3
I used ng-cli to generate both the --mobile app and the components.
My main.ts looks like this:
import { bootstrap } from '@angular/platform-browser-dynamic';
import { enableProdMode } from '@angular/core';
import { AppComponent, environment } from './app/';
import { APP_SHELL_RUNTIME_PROVIDERS } from '@angular/app-shell';
import { APP_ROUTER_PROVIDERS } from './app/app.routes';
if (environment.production) {
enableProdMode();
}
bootstrap(AppComponent, [
APP_SHELL_RUNTIME_PROVIDERS,
APP_ROUTER_PROVIDERS
]);
My app.component.ts is:
import { Component } from '@angular/core';
import { APP_SHELL_DIRECTIVES } from '@angular/app-shell';
import { Router, ROUTER_DIRECTIVES } from '@angular/router';
@Component({
moduleId: module.id,
selector: 'app-root',
template: '<router-outlet></router-outlet>',
styles: [],
directives: [APP_SHELL_DIRECTIVES, ROUTER_DIRECTIVES]
})
export class AppComponent {
}
My app.routes.ts is:
import { provideRouter, RouterConfig } from '@angular/router';
import { TopicsRoutes } from './topics/topics.routes';
export const routes: RouterConfig = [
...TopicsRoutes
];
export const APP_ROUTER_PROVIDERS = [
provideRouter(routes)
];
My TopicsRoutes in topics.routes.ts is:
import { RouterConfig } from '@angular/router';
import { TopicsComponent } from './topics.component';
export const TopicsRoutes: RouterConfig = [
{ path: '', component: TopicsComponent },
{ path: 'topics', component: TopicsComponent }
];
Any reason you don't use "@angular/router": "3.0.0-beta.2"?
Triple-backticks would format your code properly. You can edit your post and fix it to make it more readable.
Updated with better formatting. The alpha.8 version was what came by default when I did a ng new project-name --mobile. So I assumed it had some dependency. Let me try with beta.2 then.
But other than that, I'm also seeing this error now for the app:
EXCEPTION: Can't resolve all parameters for RouterOutlet: (RouterOutletMap, ViewContainerRef, ?, name).
p.s. I also had to update the typings entry in the default package.json to 1.3.1 to get npm install to work properly.
There were a couple of instructions here: StackOverflow - Can't resolve all parameters for RouterOutlet. But none of them worked either.
I had the same issue, try with
<router-outlet *shellNoRender></router-outlet>
It will render the routes only when the app is fully loaded.
That doesn't work either. :-(
@sathishvj since the --mobile flag enables server-side rendering with Universal, you need to include the router providers in main-app-shell.ts as well. Make sure that your server-side bootstrap configuration looks something like:
import { provide } from '@angular/core';
import { APP_BASE_HREF } from '@angular/common';
import { APP_SHELL_BUILD_PROVIDERS } from '@angular/app-shell';
import { APP_ROUTER_PROVIDERS } from './app/app.routes';
import { AppComponent } from './app/';
import {
REQUEST_URL,
ORIGIN_URL,
NODE_LOCATION_PROVIDERS,
} from 'angular2-universal';
export const options = {
directives: [
// The component that will become the main App Shell
AppComponent
],
platformProviders: [
APP_SHELL_BUILD_PROVIDERS,
provide(ORIGIN_URL, {
useValue: ''
})
],
providers: [
// What URL should Angular be treating the app as if navigating
APP_ROUTER_PROVIDERS,
NODE_LOCATION_PROVIDERS,
provide(APP_BASE_HREF, {useValue: '/'}),
provide(REQUEST_URL, {useValue: '/'})
],
async: true,
preboot: false
};
That didn't work either. I started a new project and tried again, and I'm still having issues. Also still seeing this issue (https://github.com/angular/angular-cli/issues/1174). I'm going to try this again after a while.
@sathishvj what error do you get after you add the APP_ROUTER_PROVIDERS to the Universal config?
@mgechev I'm facing a similar situation. Your workaround did work, but it prevents the application from being rendered by the server. If I change the async property to false, I get the following error:
Error: Command failed: node /Applications/XAMPP/xamppfiles/htdocs/hello-mobile/node_modules/angular2-broccoli-prerender/dist/child_proc.js --sourceHtml=/Applications/XAMPP/xamppfiles/htdocs/hello-mobile/tmp/app_shell_plugin-input_base_path-jDgluh3c.tmp/0/index.html --optionsPath=/Applications/XAMPP/xamppfiles/htdocs/hello-mobile/tmp/app_shell_plugin-input_base_path-jDgluh3c.tmp/0/main-app-shell --outputIndexPath=/Applications/XAMPP/xamppfiles/htdocs/hello-mobile/tmp/app_shell_plugin-output_path-ZLcnh1Fq.tmp/index.html
DEPRECATION WARNING: `document` is no longer supported and will be removed in next release. Please use `template`
DEPRECATION WARNING: `Bootloader#serializeApplication` arguments has changed.Please use an `AppConfig` interface {providers: Array<any>, directives: Array<any>, template?: string}
TypeError: Cannot read property 'unsubscribe' of undefined
It seems that there's a problem between using templateUrl and template. Any thoughts?
|
2025-04-01T06:37:50.077630
| 2015-06-07T04:33:47
|
85854976
|
{
"authors": [
"mi6crazyheart"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3459",
"repo": "angular/router",
"url": "https://github.com/angular/router/issues/346"
}
|
gharchive/issue
|
Cannot read property 'path' of undefined
Where am I going wrong? My Stack Overflow link: http://stackoverflow.com/questions/30689909/angular-new-router-error-cannot-read-property-path-of-undefined
Issue fixed. It was because of an old 'router.es5.js' file.
|
2025-04-01T06:37:50.093294
| 2023-11-22T19:46:15
|
2007018973
|
{
"authors": [
"animagine"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3460",
"repo": "animagine/prework-study-guide",
"url": "https://github.com/animagine/prework-study-guide/issues/2"
}
|
gharchive/issue
|
CSS
CSS
User story
As a boot camp student
I want the prework notes to be structured on a webpage
So that I can easily find and read the information
Acceptance Criteria
GIVEN a Prework Study Guide website
WHEN I view the study guide
THEN I see a dark blue header and footer, and four boxes with a shadow
We decided that a dark blue header and footer weren't on brand, so we went with a less saturated yellow.
|
2025-04-01T06:37:50.138788
| 2024-01-17T21:01:27
|
2086973711
|
{
"authors": [
"Alex178190282",
"Sadwhy",
"Secozzi",
"abdallahmehiz",
"jmir1"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3461",
"repo": "aniyomiorg/aniyomi",
"url": "https://github.com/aniyomiorg/aniyomi/issues/1314"
}
|
gharchive/issue
|
Plus, if I want a manga extension for a specific site, that repository must have that extension. So there's no freedom at all, what's the point?
Other details
please and thank you
Acknowledgements
[X] I have searched the existing issues and this is a new ticket, NOT a duplicate or related to another open or closed issue.
[X] I have written a short but informative title.
[X] If this is an issue with an official extension, I should be opening an issue in the extensions repository.
[X] I have updated the app to version <IP_ADDRESS>.
[X] I will fill out all of the requested information in this form.
Is this what y'all have to deal with daily? @Secozzi
Unfortunately
lol didn't even know this exists
@Alex178190282 manga extensions are beyond our control; we have nothing to do with this and there is nothing we can do about it. Also, converting an arbitrary URL to an extension isn't technically possible: each extension needs to be hand-crafted for each site.
Btw @Alex178190282 please don't be rude and don't open any more issues with bad titles like "important suggestion".
It really doesn't help anyone and I'll have to block you from opening issues if you don't stop this...
It sounds like you're frustrated with the recent changes in Aniyomi regarding manga extensions and repositories. Here's a breakdown of your concerns and some possible steps to address them:
Understanding the Issue
Removed Manga from the App: It seems that Aniyomi has removed the built-in manga sources from the app.
External Repositories: Now, you need to use external repositories to get manga extensions.
Specific Site Extensions: The repositories must contain the specific manga site extensions you need.
Possible Solutions
Find and Add Repositories:
Look for trusted repositories that host the manga extensions you need. Some popular repositories might already include a wide range of extensions.
To add a repository, go to Aniyomi settings, find the "Extensions" section, and add the URL of the repository.
Creating Custom Extensions:
If the specific manga site you want isn't available in existing repositories, you might consider creating a custom extension. This requires some technical knowledge of how Aniyomi extensions work.
Aniyomi extensions are generally based on the Tachiyomi extension framework. You can refer to Tachiyomi's developer documentation to learn how to create and manage extensions.
Community Support:
Engage with the Aniyomi community. There might be other users who have faced similar issues and can offer solutions.
Join forums, Discord channels, or Reddit threads related to Aniyomi or Tachiyomi to get help from experienced users and developers.
Steps to Add a Repository in Aniyomi
Open Aniyomi: Launch the app on your device.
Go to Settings: Navigate to the settings menu.
Extensions: Find the section related to extensions.
Add Repository: Input the URL of the repository you want to add.
Resources for Help
Aniyomi GitHub: Check the official Aniyomi GitHub page for updates and documentation.
Aniyomi Discord: Join the Aniyomi Discord server for real-time help from the community.
Reddit: Visit subreddits like r/Tachiyomi or r/Aniyomi for community support and advice.
By following these steps and utilizing community resources, you should be able to find or create the manga extensions you need for Aniyomi. If you have any specific URLs or more detailed requirements, feel free to share, and I can guide you further on how to integrate them into Aniyomi.
ChatGPT ahhh reply
Reddit: Visit subreddits like r/Tachiyomi or r/Aniyomi for community support and advice.
We don't have an official subreddit
|
2025-04-01T06:37:50.160278
| 2024-08-20T19:16:34
|
2476365898
|
{
"authors": [
"ankane",
"gjtorikian",
"jkostolansky"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3462",
"repo": "ankane/neighbor",
"url": "https://github.com/ankane/neighbor/issues/23"
}
|
gharchive/issue
|
TypeError: can't quote Array (occasionally)
I'm not quite sure what's causing it, but sometimes, neighbor throws the following error:
[123, 132] in ~/.rbenv/versions/3.3.4/lib/ruby/gems/3.3.0/gems/neighbor-0.4.0/lib/neighbor/model.rb
123| column_attribute = klass.type_for_attribute(attribute_name)
124| vector = column_attribute.cast(vector)
125| Neighbor::Utils.validate(vector, dimensions: dimensions, column_info: column_info)
126| vector = Neighbor::Utils.normalize(vector, column_info: column_info) if normalize
127|
=> 128| query = connection.quote(column_attribute.serialize(vector))
129| order = "#{quoted_attribute} #{operator} #{query}"
130| if operator == "#"
131| order = "bit_count(#{order})"
132| end
=>#0 block {|attribute_name=:embedding_1536, vector=[0.011979911, -0.03421443, 8.979617e-05,..., options={:dimensions=>nil, :normalize=>nil}|} in has_neighbors (2 levels) at ~/.rbenv/versions/3.3.4/lib/ruby/gems/3.3.0/gems/neighbor-0.4.0/lib/neighbor/model.rb:128
#1 [C] BasicObject#instance_exec at ~/.rbenv/versions/3.3.4/lib/ruby/gems/3.3.0/gems/activerecord-7.2.0/lib/active_record/relation.rb:548
# and 46 frames (use `bt' command for all frames)
(ruby@bin/rails#63657) connection.quote(column_attribute.serialize(vector))
eval error: can't quote Array
/Users/gjtorikian/.rbenv/versions/3.3.4/lib/ruby/gems/3.3.0/gems/activerecord-7.2.0/lib/active_record/connection_adapters/abstract/quoting.rb:87:in `quote'
/Users/gjtorikian/.rbenv/versions/3.3.4/lib/ruby/gems/3.3.0/gems/activerecord-7.2.0/lib/active_record/connection_adapters/postgresql/quoting.rb:122:in `quote'
(rdbg)//Users/gjtorikian/.rbenv/versions/3.3.4/lib/ruby/gems/3.3.0/gems/neighbor-0.4.0/lib/neighbor/model.rb:1:in `block (2 levels) in has_neighbors'
This is despite the fact that none of the arguments in the nearest_neighbors scope change.
Ah. Sometimes column_attribute is
#<Neighbor::Type::Vector:0x000000014de95d38 @precision=nil, @scale=nil, @limit=1536>
but other times, it's:
#<ActiveModel::Type::Value:0x000000014f97d588 @limit=nil, @precision=nil, @scale=nil>
Manually setting the cast_type works: attribute :embedding_1536, Neighbor::Type::Vector.new.
Hi @gjtorikian, I'm not sure how to reproduce the issue (one guess is another gem could be interfering with it). If you can create a minimal reproducible script, happy to look into it more.
require "bundler/inline"
gemfile do
source "https://rubygems.org"
gem "activerecord", require: "active_record"
gem "neighbor", github: "ankane/neighbor"
gem "pg"
end
ActiveRecord::Base.establish_connection adapter: "postgresql", database: "neighbor_repro"
ActiveRecord::Base.logger = ActiveSupport::Logger.new(STDOUT)
ActiveRecord::Schema.define do
enable_extension "vector"
create_table :items, force: :cascade do |t|
t.column :embedding, :vector, limit: 3
end
end
class Item < ActiveRecord::Base
has_neighbors :embedding
end
Item.create!(embedding: [1,2,3])
p Item.nearest_neighbors(:embedding, [0.9, 1.3, 1.1], distance: "euclidean").first(5)
Maybe a similar issue here. After upgrading to Rails 7.2, the tests work on my Mac but fail in GitHub Actions with:
TypeError: can't cast Array
Setting the type manually fixes it:
attribute :embedding, Neighbor::Type::Vector.new # <-- Fix
has_neighbors :embedding, dimensions: 3072
Seems like it may be something with Rails 7.2, possibly related to https://github.com/rails/rails/issues/52607. Does changing config.eager_load or enabling/disabling parallel tests fix it?
Disabling parallelization in tests also seems to work.
Was able to reproduce with parallel tests and config.eager_load = true, but still trying to figure out the cause.
A temporary fix is to call reset_column_information in parallelize_setup.
class ActiveSupport::TestCase
parallelize_setup do |worker|
Item.reset_column_information
end
end
This is fixed by https://github.com/rails/rails/pull/52703.
|
2025-04-01T06:37:50.210461
| 2023-07-12T22:32:38
|
1801852070
|
{
"authors": [
"ShreyaDhiman24",
"anmode"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3463",
"repo": "anmode/grabtern-frontend",
"url": "https://github.com/anmode/grabtern-frontend/pull/506"
}
|
gharchive/pull-request
|
Homepage
closes #493
Wow Shreya. It's cool
I guess you forgot to label the level on it.
|
2025-04-01T06:37:50.283638
| 2023-11-09T04:46:01
|
1984812795
|
{
"authors": [
"bengtlofgren",
"kimkhoathao"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3465",
"repo": "anoma/namada-testnets",
"url": "https://github.com/anoma/namada-testnets/pull/2395"
}
|
gharchive/pull-request
|
Create validator.kimkhoathao.toml
Description
All previous genesis validators should name their PRs "Update {validator_alias}.toml for testnet-15" and provide links to previous PRs merged.
If this is an UPDATE for a previous genesis validator
Checks are made against your net_address. If this has changed since the previous testnet, make sure you provide links of previous prs merged from your previous git username.
Thanks in advance!
Checklist before merging
[x] Only one toml is added in this PR
[x] The file being added is indeed a .toml file
[x] The toml being added is to the correct folder, and only to the correct folder
[x] The eth_hot_key and eth_cold_key are present
[x] The email, discord, elements telegram, and twitter fields are present and valid
Filename contains more than one dot (in addition to .toml)
This PR has been automatically closed because it does not follow the correct procedure for updating the genesis file.
|
2025-04-01T06:37:50.367349
| 2022-07-09T22:05:54
|
1299775893
|
{
"authors": [
"TheMasterFX",
"quietjoy"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3466",
"repo": "ansible-community/ansible-bender",
"url": "https://github.com/ansible-community/ansible-bender/issues/275"
}
|
gharchive/issue
|
No libselinux package found when using Python 3.8 (needed by Ansible Core 2.12)
I'm using AlmaLinux 8.6 and ansible-bender 0.9.0. If I install Ansible it needs Python 3.8, since it provides ansible-core-2.12.2. But there is no python38-libselinux package for Python 3.8, and according to this bug report there never will be? ( https://bugzilla.redhat.com/show_bug.cgi?id=1932650 )
However, I was not able to run ansible-bender, since it checks that the selinux (_selinux) python module is available
There was an error during execution:
This system is using selinux(8) and selinux python module is not installed. There is a known issue in ansible that it freezes in this setup:
https://bugzilla.redhat.com/show_bug.cgi?id=1696706
Please install libselinux python bindings (on Fedora the package name is python3-libselinux).
So I just modified the function _check_selinux_iz_gud to just return. It seems to work. However, I don't know if it was just luck due to my minimalistic playbook.
I ran into this as well on fedora. My workaround was installing the selinux shim via pip/pypi in my virtualenv.
@TomasTomecek - in the bug report linked to in the error message shown, you advocated closing the bug report because, Screw these heisenbugs: I can't reproduce the freeze anymore: tried a clean VM and my dirty workstation and the freeze happened on neither. :)
I've opened #285 to address this.
|
2025-04-01T06:37:50.381951
| 2022-07-18T14:27:20
|
1308046479
|
{
"authors": [
"adityamulik"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3467",
"repo": "ansible/ansible-collection-memsource",
"url": "https://github.com/ansible/ansible-collection-memsource/issues/30"
}
|
gharchive/issue
|
Memsource Credentials issue
Memsource credentials passed from variable files or externally from the playbook (using the command line or an extra_vars file) are not working.
The issue has been fixed as per 6b3b7fee7b9702893f083c1f722ea5729bd8fc78
|
2025-04-01T06:37:50.437799
| 2022-03-01T22:37:51
|
1155894099
|
{
"authors": [
"cidrblock",
"ganeshrn"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3468",
"repo": "ansible/ansible-navigator",
"url": "https://github.com/ansible/ansible-navigator/pull/1029"
}
|
gharchive/pull-request
|
Move pull_policy to pull.policy
This moves pull policy under the pull key in the settings file.
This will result in the next release being a major as it is a breaking change.
Fixes #951
ansible-language-server followed a similar approach for pull-policy as ansible-navigator.
https://github.com/ansible/ansible-language-server#language-server-settings
Should we do a similar change in ALS as well? It will require a major version change for both ALS and extension.
I'd say if we were going to do it, we should do it fast before there is broad adoption of using EEs with both. I'm guessing it is still low.
Maybe we could time it with a navigator release and the lint 6 release?
|
2025-04-01T06:37:51.247733
| 2021-05-19T22:34:12
|
896011562
|
{
"authors": [
"pabelanger"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3469",
"repo": "ansible/python-builder-image",
"url": "https://github.com/ansible/python-builder-image/pull/40"
}
|
gharchive/pull-request
|
Add PKGMGR_OPTS support
This allows users to pass flags to our package manager.
Signed-off-by: Paul Belanger<EMAIL_ADDRESS>
Pull-request updated, HEAD is now aba3503d54f363240ff93ecaa8720f8f3472a113
|
2025-04-01T06:37:51.256163
| 2023-08-30T08:21:27
|
1873178854
|
{
"authors": [
"AionKwong",
"carlqt",
"fenbox",
"zhenruyan"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3470",
"repo": "answerdev/answer",
"url": "https://github.com/answerdev/answer/issues/502"
}
|
gharchive/issue
|
Images in the private mode can still be accessed when not logged in
Images in private mode can still be accessed without logging in.
After setting the site so that login is required for viewing, images can still be accessed directly.
Here is a question: how do we verify the permissions of images if they are stored in a third-party OSS (Object Storage Service)?
Referring to solutions such as MinIO, Qiniu Cloud, and Upyun, this is usually achieved by requesting a key from the OSS service and then issuing a 303 redirect to the real resource link that carries the key.
But this brings a series of problems, such as the logic when question content is published or modified, and the need to archive image resources in the database.
For specific reference documents, see https://help.upyun.com/knowledge-base/cdn-token-limite/
(This is usually how clients do it; I don't know if you have any clever tricks.)
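For illustration, a minimal sketch of the signed-URL pattern described above (TypeScript/Node; the HMAC-SHA1 token format and query parameter names are assumptions modeled loosely on the linked Upyun docs):
import { createHmac } from "crypto";
// Sign the resource path plus an expiry timestamp with a shared secret,
// the way many CDN/OSS vendors build time-limited access tokens.
function signedUrl(base: string, path: string, secret: string, ttlSeconds: number): string {
  const expires = Math.floor(Date.now() / 1000) + ttlSeconds;
  const token = createHmac("sha1", secret).update(`${path}${expires}`).digest("hex");
  return `${base}${path}?token=${token}&e=${expires}`;
}
// After checking the viewer's login state, the app would 303-redirect here.
console.log(signedUrl("https://cdn.example.com", "/img/a.png", "shared-secret", 300));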
This sounds like an interesting problem. I'd like to help out but since I'm very new and unfamiliar with the App, can someone provide a "steps to replicate" instruction so I can get started quickly?
Thanks @LinkinStars , I'm able to run the app in my local but was wondering if there's documentation/steps on how to replicate this exact scenario or the url/route of this specific feature?
If not, I will just explore and see if I can find it 👌
Got it. I think I will just look for other issues that I can help out then.
I'd like to help with the discussion but I'm not sure exactly what the bug is. From my understanding, the reporter has set their uploaded image to "private" but it's still viewable for unauthenticated users.
There is now a new problem. After the site is set to private, images uploaded by users cannot be previewed even after logging in.
Sorry, let me confirm. Please clear your browser cache, log out, then log back in and refresh the page.
The problem still exists. Could it be related to the installed version? On my side I installed using the latest Docker image.
I am accessing it directly via IP.
1Panel-answer-h3Zj-20231120163702.log
Fixed in v1.2.0
|
2025-04-01T06:37:51.265934
| 2024-04-12T12:54:35
|
2240099820
|
{
"authors": [
"RobPasMue",
"abhishekchitwar"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3471",
"repo": "ansys/pyansys-geometry",
"url": "https://github.com/ansys/pyansys-geometry/issues/1110"
}
|
gharchive/issue
|
Geometry Visualization Issue
🔍 Before submitting the issue
[X] I have searched among the existing issues
[X] I am using a Python virtual environment
🐞 Description of the bug
I was not able to visualize the geometry with standalone trame, PyVista trame, or the provided TrameVisualizer class:
from ansys.geometry.core.plotting.trame_gui import TrameVisualizer
view = TrameVisualizer()
view.set_scene(plotter=plotter)
view.show()
CRITICAL - - logger - handle_exception - Uncaught exception
Traceback (most recent call last):
File "d:\Research\pygeom_plotter_test\geom_plotter.py", line 86, in <module>
view.set_scene(plotter=plotter)
File "C:\Users\username\.ansys_python_venvs\.venv_pyansys\lib\site-packages\ansys\geometry\core\plotting\trame_gui.py", line 64, in set_scene
view = plotter_ui(plotter.scene)
AttributeError: 'PlotterHelper' object has no attribute 'scene'
📝 Steps to reproduce
Pass the plotter object to respective implementation, e.g.,
TrameVisualizer.set_scene(plotter)
💻 Which operating system are you using?
Windows
📀 Which ANSYS version are you using?
Ansys 24R1
🐍 Which Python version are you using?
3.12
📦 Installed packages
Installed all the pyansys meta-pacakages using Ansys Python Manager.
Additionally install following packages.
trame trame-vuetify trame-vtk
Hi @abhishekchitwar - this is not the way to use the trame visualizer.
Please do as follows
import ansys.geometry.core as pygeo
pygeo.USE_TRAME = True
# Do your stuff...
#
#
design.plot()
Our PyAnsys Geometry main objects (design, component, body) all have a "plot()" method you can use easily this way with your intended backend
Have a look at https://geometry.docs.pyansys.com/version/stable/examples/01_getting_started/05_plotter_picker.html as well
Hi @RobPasMue
I attempted the following code snippet, but the design plot still appears in a desktop window. I'm looking for a way to display this design on a server or local URL instead.
from ansys.geometry.core import launch_modeler
from ansys.geometry.core.misc import UNITS, Distance, DEFAULT_UNITS
from ansys.geometry.core.math import Point2D
from ansys.geometry.core.sketch import Sketch
from ansys.geometry.core.plotting import PlotterHelper
import pyvista as pv
#from trame_app import TrameApp
DEFAULT_UNITS.LENGTH = UNITS.mm
modeler = launch_modeler(mode='spaceclaim')
launch_modeler.USE_TRAME=True
base_sketch = Sketch()
base_sketch.segment(Point2D([0,0]), Point2D([0,10]))
base_sketch.segment(Point2D([0,10]), Point2D([10,10]))
base_sketch.segment(Point2D([10,10]), Point2D([10,0]))
base_sketch.segment(Point2D([10,0]), Point2D([0,0]))
# Start by creating the Design
design = modeler.create_design("ModelingDemo")
# Create a body directly on the design by extruding the sketch
body = design.extrude_sketch(
name="Design_Body", sketch=base_sketch, distance=Distance(10, unit=UNITS.mm)
)
# design plot
design.plot()
# plotter, this is for test
# plot_list = [body]
# plotter = PlotterHelper(use_trame=True)
# plotter.plot(plot_list)
Hi @RobPasMue,
I just updated the call to design.plot(use_trame=True), and I am getting the following issue:
raise TypeError(
TypeError: Server using client_type='vue3' while we expect 'vue2'
It seems it requires an update to TrameVisualizer.
Hi @RobPasMue,
I just updated TrameVisualizer to use self.server = get_server(client_type='vue2'), and it worked. Thanks!
Thanks!
Oh interesting! So you needed to pass the client type to the TrameVisualizer... We might have to expose this somehow. Could you share your final script? I guess you had to modify your local version of PyAnsys Geometry, right?
Hi @RobPasMue,
yeah, I modified the local pygeometry installation.
It would be great if TrameVisualizer exposed several methods in the context of remote visualization, e.g.:
def get_url(self):
return f"http://{self.address}:{self.port}"
This would make it easy for developers to configure the address and port as needed.
Here is the final code! Example test case developed by @agvarghe
from ansys.geometry.core import launch_modeler
from ansys.geometry.core.misc import UNITS, Distance, DEFAULT_UNITS
from ansys.geometry.core.math import Point2D
from ansys.geometry.core.sketch import Sketch
from ansys.geometry.core.plotting import PlotterHelper
import pyvista as pv
DEFAULT_UNITS.LENGTH = UNITS.mm
modeler = launch_modeler(mode='spaceclaim')
# launch_modeler.USE_TRAME=True
base_sketch = Sketch()
base_sketch.segment(Point2D([0,0]), Point2D([0,10]))
base_sketch.segment(Point2D([0,10]), Point2D([10,10]))
base_sketch.segment(Point2D([10,10]), Point2D([10,0]))
base_sketch.segment(Point2D([10,0]), Point2D([0,0]))
# Start by creating the Design
design = modeler.create_design("ModelingDemo")
# Create a body directly on the design by extruding the sketch
body = design.extrude_sketch(
name="Design_Body", sketch=base_sketch, distance=Distance(10, unit=UNITS.mm)
)
# design plot
design.plot(use_trame=True)
Thanks!
We will delegate the actual remote visualization to a different project we are working on. On this one we will just fix the inputs needed to use vue3 or vue2 =)
This issue will be closed by #1148 - thanks @AlejandroFernandezLuces
|
2025-04-01T06:37:51.270854
| 2024-01-24T18:14:39
|
2098829075
|
{
"authors": [
"germa89"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3472",
"repo": "ansys/pymapdl",
"url": "https://github.com/ansys/pymapdl/issues/2703"
}
|
gharchive/issue
|
Implement a function which helps debugging
Something like:
debug_pymapdl checks
It should use the CLI.
It should check the MAPDL starting point...
I think we should just use the report? And append things there...
I think the whole CLI could be revamped:
Launcher
Replace launch_mapdl https://mapdl.docs.pyansys.com/version/dev/user_guide/cli.html#pymapdl-command-line-interface with
pymapdl launch
pymapdl stop
converter
https://mapdl.docs.pyansys.com/version/dev/user_guide/convert.html
pymapdl convert
for debugging
we could have something like:
pymapdl debug should print the report?
Help
pymapdl help open the website for help and/or print useful information to the stdout.
|
2025-04-01T06:37:51.389392
| 2023-01-25T06:44:40
|
1556135686
|
{
"authors": [
"ForNeVeR",
"rstolpe"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3473",
"repo": "ant-druha/intellij-powershell",
"url": "https://github.com/ant-druha/intellij-powershell/issues/98"
}
|
gharchive/issue
|
Write inside <# #> will cause extra #>
When you're writing comments inside <# #> and press Enter, it will add an extra #> by itself.
Duplicate of #35.
|
2025-04-01T06:37:51.400516
| 2023-02-26T15:21:54
|
1600095129
|
{
"authors": [
"MaGaroo",
"XiaoYang943",
"biancangming",
"callmesoul",
"drbarto",
"nonzzz"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3474",
"repo": "antfu/vite-plugin-md",
"url": "https://github.com/antfu/vite-plugin-md/issues/133"
}
|
gharchive/issue
|
Error [ERR_PACKAGE_PATH_NOT_EXPORTED]
Describe the bug
failed to load config from ...\vite.config.ts
error when starting dev server:
Error [ERR_PACKAGE_PATH_NOT_EXPORTED]: No "exports" main defined in <EMAIL_ADDRESS>. Help! Thanks!
Reproduction
npm
System Info
system
Used Package Manager
npm
Validations
[X] Follow our Code of Conduct
[X] Read the Contributing Guide.
[X] Check that there isn't already an issue that reports the same bug to avoid creating a duplicate.
[X] Check that this is a concrete bug. For Q&A, please open a GitHub Discussion instead.
[X] The provided reproduction is a minimal reproducible of the bug.
I added "type": "module" to my package.json file, and the project started normally; maybe the dependent package abandoned the previous mode.
I am using the Node 16 version
npm 8+
Thanks!
Having the same problem in a brand-new Vite/Vue project.
Any plans to fix this? Seems to make the plugin unusable with the default Vite setup. Adding "type": "module" to package.json doesn't work for me (and even if it did, I wouldn't see this as a long-term solution).
Another way: happy-wrapper is a pure ESM package, so we can use a dynamic import to load this plugin. This works because pure-ESM dependencies cannot be require()d from a CommonJS vite.config, but they can still be loaded asynchronously via import().
Here is an example:
import { defineConfig } from 'vite';
export default (async () => {
const Markdown = (await import('vite-plugin-md')).default;
return defineConfig({
plugins: [/* other plugins */, Markdown(/* your options */)]
})
})();
Downgrading my vite-plugin-md version to 0.10.0 fixed it.
|
2025-04-01T06:37:51.403084
| 2024-01-04T01:35:48
|
2064861046
|
{
"authors": [
"Unfield"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3475",
"repo": "anthdm/ffaas",
"url": "https://github.com/anthdm/ffaas/pull/11"
}
|
gharchive/pull-request
|
added CORS to the API Server and WASM Server
Added CORS Support for WASM and API Server
This PR introduces Cross-Origin Resource Sharing (CORS) support for both the WebAssembly (WASM) module and the API server.
Changes:
Added CORS middleware to the API server: The server.go file now includes a middleware function that sets appropriate CORS headers in the HTTP response. This allows client-side web applications to interact with the API server.
Added CORS middleware to the WASM server: The wasm.go file now includes a middleware function that sets appropriate CORS headers in the HTTP response. This allows client-side web applications to interact with the WASM server.
Sorry for all the PRs. I'm relatively new to projects where multiple people make changes. But this should be the last one... at least for this function.
|
2025-04-01T06:37:51.419851
| 2016-11-21T07:17:37
|
190637064
|
{
"authors": [
"alex-birch",
"antingshen",
"clayshieh",
"hocherie",
"uzbhutta"
],
"license": "bsd-2-clause",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3476",
"repo": "antingshen/BeaverDam",
"url": "https://github.com/antingshen/BeaverDam/issues/84"
}
|
gharchive/issue
|
Connecting to MechanicalTurk
Hi,
How do we connect this to mechanical turk? Is there some missing documentation?
Here's what we've done so far:
Created an AWS account
Created an AWS user with full turk access only
Created environment variables with the AWS_ID and AWS_KEY set
Created a fullvideotask in the admin site: /admin/mturk/fullvideotask/
I can see that the Task class in mturk/models.py actually calls publish and creates the HIT with AWS, but I can't see where the publish method is invoked.
Can you please provide some more instructions?
Thanks for your help.
As of right now the task of publishing HITs is done manually via the Django shell. To run the Django shell, run python manage.py shell, then import FullVideoTask (fvt), query the fvt model that you want, and run its publish method. Similarly you can also use the class function batch_create_and_publish() to create multiple task objects and publish them all in one go. Admin dashboard HIT status/management functionality is in the works, but feel free to build one!
Thanks for this - that worked. For those people who aren't that familiar with Django shell (like me) you'll need to do this:
python manage.py shell < script.py
where script.py is a script something along the lines of:
from mturk.models import FullVideoTask

tasks = [x for x in FullVideoTask.objects.all() if x.hit_id != ""]
for task in tasks:
    task.publish()
@alex-birch take a look at the new branch, mturk. Its a dashboard like functionality for users to publish tasks as HITs
Hi @clayshieh , where can I find the dashboard to publish HIT tasks?
There's a dropdown in /admin under Video objects to create the task and publish it in one step. I forget if there's another dashboard that just publishes HITs
@hocherie to expand on that, go to /admin, click Videos, select the check boxes next to the videos that you want to publish, and in the action menu drop down select "Publish to Turk" and then press the Go button.
@clayshieh @antingshen when selecting "Publish to Turk", I get the error:
Request Method: POST
Request URL: http://ec2-35-182-160-204.ca-central-1.compute.amazonaws.com:5000/admin/annotator/video/
Django Version: 1.10
Exception Type: CommunicationError
Exception Value: XML malformed
Exception Location: ./mturk/mturk_api.py in validate, line 315
Python Executable: /home/ubuntu/beaverdam/venv/bin/uwsgi
Python Version: 3.5.2
Python Path: ['.', '', '/home/ubuntu/beaverdam/venv/lib/python35.zip', '/home/ubuntu/beaverdam/venv/lib/python3.5', '/home/ubuntu/beaverdam/venv/lib/python3.5/plat-x86_64-linux-gnu', '/home/ubuntu/beaverdam/venv/lib/python3.5/lib-dynload', '/usr/lib/python3.5', '/usr/lib/python3.5/plat-x86_64-linux-gnu', '/home/ubuntu/beaverdam/venv/lib/python3.5/site-packages']
Server time: Thu, 1 Nov 2018 13:13:50 -0700
I have set the AWS key and ID via:
export AWS_KEY="key"
export AWS_ID="id"
and have tried this with both sandbox set to true and false
Thanks!
It's hard to say. There's many reasons why AWS could be rejecting your request and they don't give you a clear error. See https://github.com/cvondrick/vatic/issues/12
In particular, the error I am seeing is:
<CreateHITResponse><OperationRequest><RequestId>1817f6d5-518e-4acb-830e-7d4111e7f4b6</RequestId><Errors><Error><Code>AWS.NotAuthorized</Code><Message>The identity contained in the request is not authorized to use this AWSAccessKeyId<PHONE_NUMBER>636)</Message></Error></Errors></OperationRequest></CreateHITResponse>'
I am exporting the correct key, ID, have all appropriate permissions for my IAM, and have tried placing the key and ID in my bashrc as well to no avail. At one point, I put in the wrong key and ID and changed it back to the correct one and it started being able to post videos to sandbox, but with sandbox set False, I am running into this issue.
Another note for future users, when BeaverDam was made, boto didn't support mturk in Python 3, so BeaverDam makes custom requests. Now, using boto3 might provide additional functionality and is the official way to connect to mturk.
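For anyone taking that route, a minimal sketch of connecting with boto3; it assumes credentials come from the environment or AWS config, and uses the standard sandbox endpoint:

# Minimal sketch of connecting to MTurk via boto3 instead of the
# custom requests; credentials are read from the environment/AWS config.
import boto3

mturk = boto3.client(
    "mturk",
    region_name="us-east-1",
    # Drop endpoint_url to hit production instead of the sandbox.
    endpoint_url="https://mturk-requester-sandbox.us-east-1.amazonaws.com",
)

# A quick sanity check that the credentials are accepted.
print(mturk.get_account_balance()["AvailableBalance"])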
@antingshen thanks for your note. Before I consider refactoring the code to use boto3 to connect, I have a follow up question:
As aforementioned, when trying to "Publish to Turk", when I have this config:
web client <-> nginx <-> socket <-> uWSGI <-> beaverdam
Along with the DEBUG=True error I showed above about XML malformed, I get the following error in my uwsgi logs when running sudo uwsgi --ini /etc/uwsgi/sites/beaverdam.ini:
'<?xml version="1.0"?>\n<CreateHITResponse><OperationRequest><RequestId>b17faffb-544c-44d4-a946-c0278f627623</RequestId><Errors><Error><Code>AWS.NotAuthorized</Code><Message>The identity contained in the request is not authorized to use this AWSAccessKeyId<PHONE_NUMBER>840)</Message></Error></Errors></OperationRequest></CreateHITResponse>'
HTTP Response = b'<?xml version="1.0"?>\n<CreateHITResponse><OperationRequest><RequestId>b17faffb-544c-44d4-a946-c0278f627623</RequestId><Errors><Error><Code>AWS.NotAuthorized</Code><Message>The identity contained in the request is not authorized to use this AWSAccessKeyId<PHONE_NUMBER>840)</Message></Error></Errors></OperationRequest></CreateHITResponse>'
However, publishing works fine and the AWS key and ID are accepted when going directly from browser to uwsgi, that is:
web client <-> uWSGI <-> beaverdam
Any help would be great. Thanks.
Hmm I haven't seen that error before. Did you check what AWS says that error means?
|
2025-04-01T06:37:51.427501
| 2016-08-02T21:40:13
|
168995422
|
{
"authors": [
"GabeArr",
"allenywang",
"szhu"
],
"license": "bsd-2-clause",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3477",
"repo": "antingshen/BeaverDam",
"url": "https://github.com/antingshen/BeaverDam/pull/55"
}
|
gharchive/pull-request
|
added dragging shortcuts #44
Sean and I added the new shortcuts for dragging. I also had to reassign some shortcuts for rewinding, forwarding, and deleting. Wasn't too sure what the new assignments should be though.
Good initial choices!
I would avoid using arrow keys actually
People use arrow keys to scroll the page (even though our page ideally doesn't need to scroll, people will get confused when they switch between our app and another app where they need to scroll).
The arrow keys are used to edit text in the "time" box.
I think we should use the comma and period keys for forward and rewind because the keys have < and > on them, which look like forward and rewind. We can have / be the normal play/pause button bc it's right next to them.
Now that Chrome is not longer using Backspace for "back", we can use the backspace key for deleting keyframes.
Well, my rationale for arrow keys comes from most video sites (e.g. YouTube), so I don't really think that should be confusing. Not quite sure what you mean by editing text in the time box, since if you edit it, it'd change the time.
I do like the idea of < and > though; Gabe and I were actually discussing that earlier.
I guess backspace is fine then if we only support the more recent versions of Chrome. However, one thing that I was looking to ask was: should we remove ECMA6 destructuring in favor of normal extraction? (i.e. getting rid of {...} = ...). The reason I say this is that it would open up BeaverDam to more people with older browsers (ones that support ECMA5). (In fact, even my work laptop hadn't had Chrome updated in a while, which is where I first came across the browser requirement.)
Arrow keys are used on most video sites (e.g. YouTube), so I don't really think that should be confusing (or, at the least, instead of play/rewind, we could skip a few frames in either direction).
This is true. However, I actually dislike the YouTube shortcut because sometimes when I'm trying to scroll the page or do something else I accidentally move the playhead for this exact reason.
Not quite sure what you mean by editing text in the time box since if you edit it, it'd change the time.
I meant if you are trying to type in the text box and realize you need to move the cursor one char to the left, you press the left key and oops, you accidentally rewound your video.
I guess backspace is fine then if we only the more recent version of Chrome. However, one thing that I was looking to ask was should we remove ECMA6 destructuring in place of normal extraction? (i.e. getting rid of {...} = ...). The reason I say this is because that would open up Beaverdam to more people with older browsers (ones that support ECMA5). (In fact even my work laptop didn't have chrome updated in a while which is where I first came across the browser necessity).
That's an easy one. When we want to support older browsers, we'll use Babel to convert it to ES5 automatically. We use jQuery and Raphael instead of pure JS for similar reasons – they also act as compatibility layers.
|
2025-04-01T06:37:51.445479
| 2022-10-22T22:20:58
|
1419530657
|
{
"authors": [
"culler"
],
"license": "bsd-2-clause",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3478",
"repo": "antirez/linenoise",
"url": "https://github.com/antirez/linenoise/issues/208"
}
|
gharchive/issue
|
Cannot paste multiple lines into the example program.
Highlight and copy the 10 lines below:
This is line 1.
This is line 2.
This is line 3.
This is line 4.
This is line 5.
This is line 6.
This is line 7.
This is line 8.
This is line 9.
This is line 10.
Now run the example program and paste these 10 lines into it. This is what you get:
$ ./linenoise_example
hello> This is line 1.
echo: 'This is line 1.'
hello>
Only the first line is received; lines 2 - 10 are ignored. Each line should be printed with a prompt and a response. Pasting short code snippets into a REPL is a very common thing to do.
I think I found the explanation for this problem in the linux termios man page where it explains the TCSAFLUSH flag to tcsetattr:
the change occurs after all output written to the object
referred by fd has been transmitted, and all input that
has been received but not read will be discarded before
the change is made.
That flag is used in disableRawMode. So everything in the input buffer after the first newline gets discarded when disableRawMode is called after processing the first line.
The fix for this appears to be to use TCSADRAIN instead of TCSAFLUSH. That drains the output but does not discard the input.
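For context, a minimal sketch of the distinction using Python's termios bindings (which wrap the same tcsetattr call); this assumes a POSIX terminal on stdin and is only an illustration, not linenoise code:

# Minimal sketch of the TCSAFLUSH vs TCSADRAIN distinction using
# Python's termios bindings; assumes a POSIX terminal on stdin.
import sys
import termios

fd = sys.stdin.fileno()
attrs = termios.tcgetattr(fd)

# TCSAFLUSH: waits for output to drain, then DISCARDS unread input --
# this is what eats the pasted lines after the first newline.
termios.tcsetattr(fd, termios.TCSAFLUSH, attrs)

# TCSADRAIN: waits for output to drain but KEEPS unread input, so the
# remaining pasted lines survive the raw-mode toggle.
termios.tcsetattr(fd, termios.TCSADRAIN, attrs)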
|
2025-04-01T06:37:51.461373
| 2024-07-29T10:41:35
|
2435069227
|
{
"authors": [
"antonWetzel",
"m-haug"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3479",
"repo": "antonWetzel/prettypst",
"url": "https://github.com/antonWetzel/prettypst/issues/6"
}
|
gharchive/issue
|
Labels in argument lists
When formatting a label in an argument list, the label is attached to the comma following the previous argument or open brace, e.g.,
#set heading(numbering: "1.")
= Heading <label>
#ref( <label>
,
)
Instead, I would expect labels to be treated like other values in argument lists:
#set heading(numbering: "1.")
= Heading <label>
#ref(
<label>,
)
With b8870c069ee1c5b579a08e40ac4dac1c47915b4b the special handling for labels is only used in markup mode. If some issues persist, please reopen the issue.
|
2025-04-01T06:37:51.472877
| 2021-04-15T20:00:46
|
859206493
|
{
"authors": [
"Pkaps25",
"antonisdim"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3480",
"repo": "antonisdim/haystac",
"url": "https://github.com/antonisdim/haystac/issues/3"
}
|
gharchive/issue
|
Analyse --mode reads error - missing script
Hello,
I am able to run haystac analyse --mode abundances, but haystac analyse --mode reads fails. I've included the snakemake log. My goal is to recover the reads that are being assigned to each category, i.e. dark and grey matter, but it looks like this code may have been refactored.
haystaclog.txt
I installed haystac through conda. Thank you for your help and please let me know if I can provide more information.
Best,
Peter
Hello Peter,
I hope you are doing great and apologies for not getting back to you earlier !
Thank you for using haystac and for raising this issue.
Indeed the code for haystac analyse --mode reads has been recently refactored and optimised as a new version will be released soon on conda.
I went through the log file and I was wondering if you could upload the likelihood matrix that haystac analyse --mode abundances has produced. From your log file it should be here /local/workdir/pk445/haystac/both_notrim_reads_.05/probabilities/both_sample/both_sample_likelihood_ts_tv_matrix.csv.
I will make sure I get back to you ASAP.
Thank you for your help and patience !
Best,
Antony
Hello Antony,
Thanks for the response. In the meantime, is there a workaround for using this feature?
|
2025-04-01T06:37:51.478751
| 2022-09-12T20:01:16
|
1370450064
|
{
"authors": [
"antonyrc6",
"bushku"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3481",
"repo": "antonyrc6/Antony-s-fast-page",
"url": "https://github.com/antonyrc6/Antony-s-fast-page/issues/6"
}
|
gharchive/issue
|
Creative Development
Quiz using app lab in code.org: https://studio.code.org/projects/applab/jau_YPo5u3K2kjagwJ9Ow50YORprzrWIiwIWRQOfrSw
Quiz plan post on my fast page: https://antonyrc6.github.io/Antony-s-fast-page/markdown/2022/09/11/quiz-plan.html
I bet Anthony deserves a 2.8. His quiz is completely organized. He even put up notifications to figure out if an answer is right or wrong. When the quiz was finished, the message said that all answers were correct. He did really well on the College Board Task as well.
|
2025-04-01T06:37:51.482699
| 2024-07-23T18:08:54
|
2425813524
|
{
"authors": [
"KMAnju-2021",
"XinShuYang",
"antoninbas",
"rajnkamr"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3482",
"repo": "antrea-io/antrea",
"url": "https://github.com/antrea-io/antrea/pull/6543"
}
|
gharchive/pull-request
|
Add logs for the kind tests in jenkins
Closes: #6538
To prevent log files from piling up and consuming too much storage, you can implement a cleanup strategy. This can be done by periodically deleting old log files or by keeping only a limited number of the most recent log files.
Files in the job's workspace will be cleaned at the beginning of the job. The retention period for job history (including saved log files) is configured by Jenkins, so there is no need to build another cleanup script.
@XinShuYang could you take another look and let me know if I can merge this?
Yes we can merge it.
/test-kind-all
It will trigger only IPv4 jobs; let me confirm all the other IPv6 and dual-stack jobs once.
/test-kind-ipv6-conformance
/test-kind-ipv6-all
|
2025-04-01T06:37:51.490397
| 2022-03-08T21:20:18
|
1163139680
|
{
"authors": [
"antsouchlos",
"philsegeler"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3483",
"repo": "antsouchlos/OxygenEngine2",
"url": "https://github.com/antsouchlos/OxygenEngine2/issues/23"
}
|
gharchive/issue
|
Obj file parser
Do we actually even really need our own file format?
Is there any additional information we put inside the *.csl files, or could we theoretically just write an *.obj parser and not have to use a custom blender plugin?
Yes we really do need our own file format.
.obj is a generic format that supports many things we do not support (for example quads, whereas .csl supports only triangles; quads and NGons are converted into triangles automatically inside the Blender exporter). Do you want to handle all of that inside the Interpreter, with much higher performance requirements as well?
We would not have flexibility in how we store scenes, materials, shaders and textures.
How would we store animations and rigging, or properties for the physics engine, or .obj files linking to other .obj files to make open-world loading straightforward, for example? I could list more but this is just off the top of my head. Or even packing textures inside the .csl. The possibilities are endless with our own format.
One would have to translate all the different .obj types to our own engine structure anyways, so why not do it through Blender and let Blender handle all the complexity of different file formats and have the exporter work for all of them? I have successfully imported a random .obj model from the internet into Blender and after a few adjustments of materials exported it to .csl and displayed it correctly in the renderer. Its name is some_car.csl and can be found in the assets.
Every game engine has its own format and for good reason, why not we? :) The closest to a generic 3D game engine format would be COLLADA, but we would still lose flexibility and functionality and we cannot expect to support everything in COLLADA. Blender can still import COLLADA.
Assuming we adopted for example COLLADA, which would in theory be a better fit. What would we do with .obj files? Import them into Blender, export them into COLLADA and interpret them into our own structure? Adopt the COLLADA structure? Write yet another interpreter for .obj? For .fbx as well?
Blender supports custom object properties. We could use those to store various parameters (for example physics, storage or open-world related) and have them picked up by the exporter.
You could still make a limited direct .obj to .csl translator if you like, but I personally would not consider that high priority.
Compressed binary format when/how?
Summary: It is IMHO much simpler to write ONE exporter and ONE fast parser and interpreter for a flexible file format suited to our engine capabilities than limiting ourselves to any specific generic asset format. We would need to convert to our internal format anyways unless we completely rewrite everything yet again.
Do you need any more arguments or are these enough?
Ok ok, I get it. No obj parser :D
|
2025-04-01T06:37:51.496065
| 2022-05-05T10:18:12
|
1226479950
|
{
"authors": [
"xienas",
"zengyue"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3484",
"repo": "antvis/F2",
"url": "https://github.com/antvis/F2/issues/1464"
}
|
gharchive/issue
|
ScrollBar: scroll horizontally to the end
[ ] I have searched the issues of this repository and believe that this is not a duplicate.
What problem does this feature solve?
ScrollBar horizontal scrolling cannot be initialized to scroll to the end.
What does the proposed API look like?
A boolean to toggle whether to scroll to the end
You can use 4.x; just set range and it works:
https://f2.antv.vision/zh/examples/line/line#pan
<ScrollBar mode="x" range={[0.8, 1]} />
Thanks, that works!
|
2025-04-01T06:37:51.514224
| 2021-02-15T18:20:08
|
808739329
|
{
"authors": [
"konstantinjdobler"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3485",
"repo": "antvis/G6",
"url": "https://github.com/antvis/G6/issues/2657"
}
|
gharchive/issue
|
Smooth animation for data updates
[x] I have searched the issues of this repository and believe that this is not a duplicate.
What problem does this feature solve?
Is there a possibility to make a smooth transition between data updates in graph so that there is no "hard cut" but instead an animation? This would make it much easier for users to follow and understand updates to a graph (e.g. when filtering the data)
What does the proposed API look like?
A new option in GraphOptions
It is already possible: set animate: true and change the data using changeData.
|
2025-04-01T06:37:51.550383
| 2024-10-26T18:17:50
|
2616046791
|
{
"authors": [
"Pratikpawar13",
"abhi03ruchi",
"sailaja-adapa"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3486",
"repo": "anuragverma108/SwapReads",
"url": "https://github.com/anuragverma108/SwapReads/pull/4062"
}
|
gharchive/pull-request
|
Notification page added
#3861
New Notification Page added
Fixes: #3861
Description
A new Notification page is added. This page makes it very easy to navigate notifications; it sorts them into 4 categories: All, Unread, Read, and Archived. The interface is easy to use as well as to understand.
Type of PR
[x] Feature enhancement
Screenshots / videos (if applicable)
Checklist:
[x] I have made this change from my own.
[x] I have taken help from some online resources.
[x] My code follows the style guidelines of this project.
[x] I have performed a self-review of my own code.
[x] I have commented my code, particularly in hard-to-understand areas.
[x] My changes generate no new warnings.
[x] I have tested the changes thoroughly before submitting this pull request.
[x] I have provided relevant issue numbers and screenshots after making the changes.
Please record a video from the home page, end to end @Pratikpawar13
Are you working on this issue? GSSoC Ext is about to end.
Work with the updated branch and raise a new PR @Pratikpawar13
|
2025-04-01T06:37:51.580437
| 2018-11-16T18:18:32
|
381708384
|
{
"authors": [
"johnmcclean"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3487",
"repo": "aol/cyclops",
"url": "https://github.com/aol/cyclops/issues/948"
}
|
gharchive/issue
|
Managed::with operator closes resource too early
Need to pass both the current instance and the transforming function down the chain rather than calling map.
To replicate, assert that the session is still open in HibernateTest (in 10.0.3 this will fail):
@Test
public void hibernate(){
Try<String, Throwable> res = Managed.of(factory::openSession)
.with(Session::beginTransaction)
.map((session, tx) ->{
try {
verify(session,never()).close();
}catch(Exception e) {
e.printStackTrace();
}
return deleteFromMyTable(session)
.bipeek(success -> tx.commit(),error -> tx.rollback());
} ).foldRun(Try::flatten);
assertThat(res,equalTo(Try.success("deleted")));
}
Merged
|
2025-04-01T06:37:51.585883
| 2023-05-13T05:35:57
|
1708444792
|
{
"authors": [
"Mghrabi"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3488",
"repo": "apache/age-viewer",
"url": "https://github.com/apache/age-viewer/pull/143"
}
|
gharchive/pull-request
|
fixing scrolling issue mentioned in Issue #142
I approached the safer solution:
Made a border around the window screen to make it obvious that it's a screen
There were many solutions I could think of, but I went with the safest one, taking into consideration the future development of this feature: I simply drew a border line around the screen to make it obvious to users that it's a screen, not normal white space. Other solutions I considered:
I could have set the height of the screen to auto, and that should have solved the problem FOR NOW, as the feature isn't in use yet (from my understanding).
I could have added a simple check: if the number of elements is 0, remove that white space (screen) until the number of elements becomes at least 1, then show the screen. But this solution would change the code structure a bit and could conflict with future plans, as this screen's functionality is still not 100% clear to me.
How it looks after putting the borderline:
|
2025-04-01T06:37:51.641819
| 2018-07-19T15:36:46
|
342782337
|
{
"authors": [
"asfgit",
"atkach"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3489",
"repo": "apache/ambari",
"url": "https://github.com/apache/ambari/pull/1813"
}
|
gharchive/pull-request
|
AMBARI-24316 Inconsistent Ambari warnings
How was this patch tested?
21825 passing (31s)
48 pending
retest this please
Refer to this link for build results (access rights to CI server needed):
https://builds.apache.org/job/Ambari-Github-PullRequest-Builder/3190/
Test PASSed.
|
2025-04-01T06:37:51.679076
| 2022-07-26T06:24:14
|
1317756878
|
{
"authors": [
"AlinsRan",
"tao12345666333"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3490",
"repo": "apache/apisix-ingress-controller",
"url": "https://github.com/apache/apisix-ingress-controller/issues/1186"
}
|
gharchive/issue
|
proposal: need an interface or function to translate old resource objects
background
At present, for ApisixRoute, Ingress, ... resource updates, we use the TranslateUpstream method to translate old objects. This method requires parsing the latest services to build objects, resulting in data inconsistency.
Due to the wrong construction of upstream objects, upstream cannot be deleted, resulting in object redundancy.
proposal
Implement an interface or function for related resources
Get route object from cache.
Using route to construct Upstream and PluginConfig.
Compare and write APISIX.
dependencies
[ ] ApisixRoute #1177
[ ] Ingress
[ ] Gateway
[ ] HTTPRoute
[ ] ...
#1050 Or we can realize the function of regularly cleaning up redundant objects in upstream.
Using route to construct Upstream and PluginConfig.
why we need this one?
Because the route object contains upstream_id and plugin_id. For delete events, we only need these fields.
|
2025-04-01T06:37:51.691374
| 2021-05-17T17:03:49
|
893529927
|
{
"authors": [
"codecov-commenter",
"tao12345666333"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3491",
"repo": "apache/apisix-ingress-controller",
"url": "https://github.com/apache/apisix-ingress-controller/pull/459"
}
|
gharchive/pull-request
|
docs: fix APISIX helm installation
This change was introduced from
https://github.com/apache/apisix-helm-chart/pull/74
Signed-off-by: Jintao Zhang<EMAIL_ADDRESS>Please answer these questions before submitting a pull request
Why submit this pull request?
[x] Bugfix
[ ] New feature provided
[ ] Improve performance
[ ] Backport patches
Related issues
Bugfix
Description
How to fix?
New feature or improvement
Describe the details and related test reports.
Backport patches
Why need to backport?
Source branch
Related commits and pull requests
Target branch
Codecov Report
Merging #459 (56aefa5) into master (5d479ae) will decrease coverage by 0.01%.
The diff coverage is n/a.
@@ Coverage Diff @@
## master #459 +/- ##
==========================================
- Coverage 37.04% 37.03% -0.02%
==========================================
Files 47 46 -1
Lines 3841 3840 -1
==========================================
- Hits 1423 1422 -1
Misses 2233 2233
Partials 185 185
Impacted Files
Coverage Δ
test/e2e/e2e.go
Continue to review full report at Codecov.
Legend - Click here to learn more
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 4a55307...56aefa5. Read the comment docs.
|
2025-04-01T06:37:51.694222
| 2021-01-16T13:38:32
|
787468217
|
{
"authors": [
"imjoey"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3492",
"repo": "apache/apisix-website",
"url": "https://github.com/apache/apisix-website/pull/144"
}
|
gharchive/pull-request
|
chore: Fix incorrect links
Signed-off-by: imjoey<EMAIL_ADDRESS>Changes:
Fix the broken links for Contributor Guide and Committer Guide in team.md
Add missing link in committer-guide.md
ping @juzhiyuan @liuxiran @nic-chen for reviewing. Thanks.
|
2025-04-01T06:37:51.724991
| 2024-08-27T10:38:36
|
2488951572
|
{
"authors": [
"alexandreyc",
"lidavidm",
"rz-vastdata"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3493",
"repo": "apache/arrow-adbc",
"url": "https://github.com/apache/arrow-adbc/issues/2104"
}
|
gharchive/issue
|
dev/release: publish adbc_core crate on https://crates.io
What would you like help with?
Would it please be possible to publish the latest release of adbc_core on https://crates.io?
[ ] Update release docs
[ ] Update release scripts
[ ] Check if we can do a prerelease
[ ] Update verification script
CC: @alexandreyc
I'm planning a release this week. I'll give it a shot.
Many thanks @lidavidm!
Thanks @lidavidm
If you need some help don't hesitate to reach out!
Thanks!
|
2025-04-01T06:37:51.729766
| 2023-12-27T08:38:31
|
2057083996
|
{
"authors": [
"VungleTienan",
"marvinlanhenke"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3494",
"repo": "apache/arrow-datafusion",
"url": "https://github.com/apache/arrow-datafusion/issues/8660"
}
|
gharchive/issue
|
Cannot find the fields when making an aggregation on parquet file.
Describe the bug
Hey there. It seems that datafusion cannot recognize the field name existence when making an aggregation on a parquet file. The code fail to run with following error:
Error: SchemaError(FieldNotFound { field: Column { relation: None, name: "fl_date" }, valid_fields: [Column { relation: Some(Bare { table: "?table?" }), name: "FL_DATE" }, Column { relation: Some(Bare { table: "?table?" }), name: "DEP_DELAY" }, Column { relation: Some(Bare { table: "?table?" }), name: "FL_DATE" }, Column { relation: Some(Bare { table: "?table?" }), name: "DEP_DELAY" }] })
Maybe I was making some mistakes?
To Reproduce
Download the flights 1m data:
https://www.tablab.app/datasets/sample/parquet
Run the code below:
use datafusion::{
arrow::datatypes::{DataType, Field, Schema},
prelude::*,
};
#[tokio::main]
async fn main() -> datafusion::error::Result<()> {
let ctx: SessionContext = SessionContext::new();
let schema = Schema::new(vec![
Field::new("FL_DATE", DataType::Utf8, true),
Field::new("DEP_DELAY", DataType::Int32, true),
]);
let df = ctx
.read_parquet(
"../../dataset/flights.parquet",
ParquetReadOptions::default().schema(&schema),
)
.await?;
let df = df
.select_columns(&["FL_DATE", "DEP_DELAY"])?
.aggregate(vec![col("FL_DATE")], vec![sum(col("DEP_DELAY"))])?;
df.show().await?;
Ok(())
}
Expected behavior
The aggregated data is displayed.
Additional context
cargo.toml
[package]
name = "data_engines"
version = "0.1.0"
edition = "2021"
# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html
[dependencies]
datafusion = "34"
tokio = { version = "1.35.1", features = ["full"] }
@VungleTienan
You simply need to use raw strings in order to prevent the normalizing of the column names.
let df = df
.select_columns(&["FL_DATE", "DEP_DELAY"])?
.aggregate(vec![col(r#""FL_DATE""#)], vec![sum(col(r#""DEP_DELAY""#))])?;
Thanks so much @marvinlanhenke
@VungleTienan I think you can close the issue now? Best regards
|
2025-04-01T06:37:51.736104
| 2021-04-26T12:43:45
|
867662415
|
{
"authors": [
"alamb"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3495",
"repo": "apache/arrow-rs",
"url": "https://github.com/apache/arrow-rs/issues/161"
}
|
gharchive/issue
|
Array sum result is wrong with remainder fields when simd is on
Note: migrated from original JIRA: https://issues.apache.org/jira/browse/ARROW-11051
Minimal example:

use arrow::{array::PrimitiveArray, datatypes::Int64Type};

fn main() {
    let mut s = vec![];
    for _ in 0..32 {
        s.push(Some(1i64));
        s.push(None);
    }
    let v: PrimitiveArray<Int64Type> = s.into();
    dbg!(arrow::compute::sum(&v));
}

dependency:

arrow = { version = "2", features = ["simd"] }

The following code in compute::sum is wrong. The bit mask is checked reversed.

remainder.iter().enumerate().for_each(|(i, value)| {
    if remainder_bits & (1 << i) != 0 {
        remainder_sum = remainder_sum + *value;
    }
});
Comment from Jörn Horstmann(jhorstmann) @ 2021-01-01T10:11:26.248+0000:
Hi [~niuzr], could you try the same with the latest master branch? There were some changes and also a bugfix in how the vector masking is calculated after the 2.0 release in ARROW-10216.
|
2025-04-01T06:37:51.739760
| 2024-09-20T16:39:43
|
2539221667
|
{
"authors": [
"alamb",
"progval"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3496",
"repo": "apache/arrow-rs",
"url": "https://github.com/apache/arrow-rs/issues/6428"
}
|
gharchive/issue
|
Add a method to return the number of skipped rows in a RowSelection
Is your feature request related to a problem or challenge? Please describe what you are trying to do.
RowSelection has a row_count method that returns the number of selected rows, but is missing a way to count the number of de-selected rows without iterating on the selectors
Describe the solution you'd like
Implement it as RowSelection::skipped_row_count
Describe alternatives you've considered
current state, which is that users have to reimplement it themselves
Additional context
Datafusion had to implement it here:
https://github.com/apache/datafusion/blob/f2159e6cae658a0a3f561ec2d15ea948213fd0f8/datafusion/core/src/datasource/physical_plan/parquet/page_filter.rs#L271-L277
Suggested by @alamb here: https://github.com/apache/datafusion/pull/12545#discussion_r1768748882
label_issue.py automatically added labels {'parquet'} from #6429
|
2025-04-01T06:37:51.742762
| 2024-08-07T18:24:39
|
2454071672
|
{
"authors": [
"alamb",
"gstvg"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3497",
"repo": "apache/arrow-rs",
"url": "https://github.com/apache/arrow-rs/pull/6209"
}
|
gharchive/pull-request
|
fix: Correctly handle take on dense union of a single selected type
Which issue does this PR close?
Closes #6206.
What changes are included in this PR?
At #5873, I naively called filter_primitive instead of filter to avoid arcing and downcasting, but this bypasses the check for when all values match the predicate, which filter_primitive expects to have happened, leading to unreachable!() being hit.
This PR calls filter instead of filter_primitive, and removes the added pub(crate) from filter_primitive to avoid future misuse.
Are there any user-facing changes?
No
Thank you @gstvg and @tustvold
|
2025-04-01T06:37:51.747682
| 2023-04-15T18:15:30
|
1669520000
|
{
"authors": [
"joshbode",
"sometimesabird",
"westonpace"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3498",
"repo": "apache/arrow",
"url": "https://github.com/apache/arrow/issues/35156"
}
|
gharchive/issue
|
write_dataset freezes
Describe the bug, including details regarding any error messages, version, and platform.
An otherwise perfectly functioning arrow dataset does not finish the command write_dataset when passing on a hive structure, and I have to interrupt R. Looking at the folder structure, it seems to be writing files perfectly well until some point after which no new files are written -- but the job isn't finished.
The dataset also writes well into a single file (write_dataset without partitioning or grouping). It also writes well when I create fewer groups than I would like to. I haven't seen anyone complain about this, so I suspect that I am doing something so silly that no one has attempted it before. Am I creating too many groups?
Grouping that works: A, B, C, D, E, where all groups are binary.
Grouping that doesn't work: A, B, C, D, X, where X has 90+ values (and not all values exist for each level of other variable. So, say, a combination A=1, B=1, C=1, D=1 might not have X=67.
Grouping that crashes: X, A, B, C, D.
In fact, just splitting into X crashes (not freezes).
I am on Garuda Linux (Arch-based) with R version 4.2.3.
Component(s)
R
Are you able to capture a core dump or create a small script that reproduces this? Which version of Arrow are you using?
I can confirm similar behaviour with Python using pyarrow==12.0.1 with both write_dataset and the older write_to_dataset with a large number of partitions (over 5000 in my case). I'll post more details and try to dig in a bit deeper, but for now, this is mostly just to say "you're not alone" :)
Another thing to check is to monitor memory. write_dataset, if it runs long enough, will fill up the OS's disk cache. This can often lead to swapping / etc which can cause the entire system to freeze and run slowly.
Also, if you can create any kind of reproducible example we can take a look further.
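For anyone trying to reproduce, here is a minimal sketch of the kind of hive-partitioned write described above; the column names, sizes, and distributions are synthetic stand-ins, not the reporter's actual data:

# Minimal sketch of a hive-partitioned write with one binary key and
# one high-cardinality key, the pattern described above; all data here
# is synthetic and only illustrative.
import numpy as np
import pyarrow as pa
import pyarrow.dataset as ds

n = 1_000_000
table = pa.table({
    "A": np.random.randint(0, 2, n),    # binary group
    "X": np.random.randint(0, 95, n),   # 90+ distinct values
    "value": np.random.rand(n),
})

ds.write_dataset(
    table,
    "out_dataset",
    format="parquet",
    partitioning=ds.partitioning(
        pa.schema([("A", pa.int64()), ("X", pa.int64())]), flavor="hive"
    ),
)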
|
2025-04-01T06:37:51.775546
| 2024-10-23T15:59:43
|
2609115906
|
{
"authors": [
"kolfild26",
"zanmato1984"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3499",
"repo": "apache/arrow",
"url": "https://github.com/apache/arrow/issues/44513"
}
|
gharchive/issue
|
Pyarrow.Table.join() breaks on large tables v.18.0.0.dev486
Describe the bug, including details regarding any error messages, version, and platform.
Hi,
In my task I need to join two tables. One of 18m rows and another of 487m rows.
t_18m.join(t_487m, keys=['col1', 'col2', 'col3'], join_type="left outer")
I was using the latest pyarrow version, which is 17 at the moment. While performing the join it breaks with Segmentation fault (core dumped).
I tried to investigate and found the most recent version without such behaviour is v13. But instead of a segmentation fault, it:
either silently produces wrong result
or breaks with ! Invalid: There are more than 2^32 bytes of key data. Acero cannot process a join of this magnitude
Next I searched the GitHub issues and found there are many similar user cases around. That's why I didn't include too many details in this report. You probably know these issues well.
There was #43495, the enhancement request, which as far as I understand has been included in v18. I installed the v18.0.0.dev486 package in order to test, but unfortunately it still throws the segmentation fault error in my case.
So even if the enhancement is already merged into v18.0.0, it still does not fix the problem.
Component(s)
Python
I also tried to reduce the size of the right table, and the working limit actually varies for me; I'm not able to find the exact number. I get either the segfault or an incorrect join result.
My system has 4 TB of memory in total, so it's not connected to an out-of-memory issue.
Here is the other system specs:
Oracle Linux Server 7.8
ulimit -a
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 16511255
max locked memory (kbytes, -l) unlimited
max memory size (kbytes, -m) unlimited
open files (-n) 4096
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) unlimited
cpu time (seconds, -t) unlimited
max user processes (-u) 4096
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
Python 3.10.15
import pyarrow as pa
pa.version
'18.0.0.dev486'
Hi @kolfild26 , thanks for reporting this.
There are lots of solved issues from v13 to v18 that may cause silent wrong answer or segfault in hash join, and possibly more unrevealed ones as well. So it is not too surprising that different versions behave differently.
Could you please provide us the complete schemas and the estimated sizes of both tables? And better yet, could you give a more-or-less working limit for your case? This is essential information for investigating this issue.
Also, there might be a workaround that's worth a try: change t_18m.join(t_487m, keys=['col1', 'col2', 'col3'], join_type="left outer") to t_487m.join(t_18m, keys=['col1', 'col2', 'col3'], join_type="right outer"). (I assume t_18m is much smaller than t_487m, and this will make our hash join use the small table to build the hash table.)
Thanks.
Hi @kolfild26 , thank you for the feedback and further information. I'll try to reproduce the issue. However it will be helpful if you can supply the following information as well:
Any stacktrace of the segfault;
The join cardinality, or equally, the number of rows of the (left/right) join result.
My first attempt to reproduce the issue, using non-null, arbitrarily distributed random columns at the same schema and scale, failed (that is, my test passed without a segfault). So I also need information about the distribution of each key column: null probability, min/max, any high-cardinality values. Thank you.
@zanmato1984
Stacktrace:
Dec 16 01:07:44 kernel: python[37938]: segfault at 7f3004626050 ip 00007f3fc25441cd sp 00007f3f10b09018 error 4 in libarrow.so.1801[7f3fc1670000+2269000]
Dec 16 01:07:44 kernel: python[37971]: segfault at 7f3004626050 ip 00007f3fc25441db sp 00007f3f002b0018 error 4
Dec 16 01:07:44 kernel: python[37961]: segfault at 7f3004626050 ip 00007f3fc25441cd sp 00007f3f052d0018 error 4 in libarrow.so.1801[7f3fc1670000+2269000]
Dec 16 01:07:44 kernel: in libarrow.so.1801[7f3fc1670000+2269000]
Dec 16 01:07:44 kernel: python[37957]: segfault at 7f3004626050 ip 00007f3fc25441db sp 00007f3f072d8018 error 4
Dec 16 01:07:44 kernel: python[37940]: segfault at 7f3004626050 ip 00007f3fc25441cd sp 00007f3f0fb07018 error 4
Dec 16 01:07:44 kernel: in libarrow.so.1801[7f3fc1670000+2269000]
Dec 16 01:07:44 kernel: in libarrow.so.1801[7f3fc1670000+2269000]
Dec 16 01:07:44 kernel: python[37974]: segfault at 7f3004626050 ip 00007f3fc25441cd sp 00007f3d18f6d018 error 4 in libarrow.so.1801[7f3fc1670000+2269000]
Dec 16 01:07:44 kernel: python[37966]: segfault at 7f3004626050 ip 00007f3fc25441db sp 00007f3f02abf018 error 4
Dec 16 01:07:44 kernel: python[37951]: segfault at 7f3004626050 ip 00007f3fc25441db sp 00007f3f0a2ec018 error 4
Dec 16 01:07:44 kernel: python[37973]: segfault at 7f3004626050 ip 00007f3fc25441cd sp 00007f3efb7fe018 error 4
Dec 16 01:07:44 kernel: in libarrow.so.1801[7f3fc1670000+2269000]
Dec 16 01:07:44 kernel: in libarrow.so.1801[7f3fc1670000+2269000]
Dec 16 01:07:44 kernel: python[37953]: segfault at 7f3004626050 ip 00007f3fc25441db sp 00007f3f092e6018 error 4
Dec 16 01:07:44 kernel: in libarrow.so.1801[7f3fc1670000+2269000]
Dec 16 01:07:44 kernel: in libarrow.so.1801[7f3fc1670000+2269000]
Dec 16 01:07:44 abrt-hook-ccpp: Process 35963 (python3.10) of user 1000 killed by SIGSEGV - dumping core
Here are the tables' statistics:
Script to get stats
import pyarrow as pa
import pyarrow.compute as pc
import pandas as pd
import pyarrow.types as patypes
def get_column_distributions(table):
distributions = {}
total_rows = table.num_rows
for column in table.schema.names:
col_data = table[column]
null_count = pc.sum(pc.is_null(col_data)).as_py()
null_percentage = (null_count / total_rows) * 100 if total_rows > 0 else 0
# Compute the cardinality (unique count / total count)
unique_count = pc.count_distinct(col_data.filter(pc.is_valid(col_data))).as_py()
cardinality_percentage = round((unique_count / total_rows)*100,3) if total_rows > 0 else 0
if patypes.is_integer(col_data.type) or patypes.is_floating(col_data.type):
stats = {
"count": pc.count(col_data).as_py(),
"nulls": null_count,
"null_percentage": null_percentage,
"cardinality_percentage": cardinality_percentage,
"min": pc.min(col_data).as_py(),
"max": pc.max(col_data).as_py(),
}
elif patypes.is_string(col_data.type) or patypes.is_binary(col_data.type):
value_counts = pc.value_counts(col_data.filter(pc.is_valid(col_data)))
stats = {
"nulls": null_count,
"null_percentage": null_percentage,
"cardinality_percentage": cardinality_percentage,
"value_counts": value_counts.to_pandas().to_dict("records"),
}
else:
stats = {
"nulls": null_count,
"null_percentage": null_percentage,
"cardinality_percentage": cardinality_percentage,
"message": f"Statistics not supported for type: {col_data.type}"
}
distributions[column] = stats
return distributions
small
large
Would it be easier if I attached the tables here?
@kolfild26 yeah please, that's even more useful.
The join cardinality, or equally, the number of rows of the (left/right) join result.
And also, do you have this one?
Cardinality can refer to different things. In a database context, cardinality usually refers to the number of unique values in a relational table column relative to the total number of rows in the table. So, if we are both talking about the same thing, cardinality is presented in the report above: cardinality_percentage = (unique_count / total_rows) * 100
But "cardinality" can also represent the size of the join result which is what I originally asked about. Do you have that? (You can just run the right join and count the number of rows).
And thank you for the source files. I'll try to reproduce the issue using these files in my local.
Hi @kolfild26 , I've successfully run the case in my local (M1 MBP with 32GB memory, arrow 18.1.0) but didn't reproduce the issue.
My python script:
import pandas
import pickle
import pyarrow
def main():
print("pandas: {0}, pyarrow: {1}".format(pandas.__version__, pyarrow.__version__))
with open('small.pkl', 'rb') as f: small = pickle.load(f)
with open('large.pkl', 'rb') as f: large = pickle.load(f)
print("small size: {0}, large size: {1}".format(small.num_rows, large.num_rows))
join = small.join(large, keys=['ID_DEV_STYLECOLOR_SIZE', 'ID_DEPARTMENT', 'ID_COLLECTION'], join_type='left outer')
print("join size: {0}".format(join.num_rows))
if __name__ == "__main__":
main()
Result:
python test.py
pandas: 2.2.3, pyarrow: 18.1.0
small size: 18201475, large size: 360449051
join size: 18201475
Did I miss something?
The resulting join size looks correct.
Could you please check:
apply filter ID_DEV_STYLECOLOR_SIZE =<PHONE_NUMBER>9 and ID_DEPARTMENT =<PHONE_NUMBER>9. It should return 2 in PL_VALUE column.
Apply sum(PL_VALUE) and it should return 58360744
That's just to eliminate 'false positive'. I mentioned that I tested on different versions and it sometimes caused a silent wrong answer even though there were no seg.fault.
If all above is correct, might the segfault error be caused by any system/os settings?
my setup
Oracle Linux Server 7.8
ulimit -a
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 16511255
max locked memory (kbytes, -l) unlimited
max memory size (kbytes, -m) unlimited
open files (-n) 4096
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) unlimited
cpu time (seconds, -t) unlimited
max user processes (-u) 4096
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
Python 3.10.15
import pyarrow as pa
pa.version
'18.1.0'
apply filter ID_DEV_STYLECOLOR_SIZE =<PHONE_NUMBER>9 and ID_DEPARTMENT =<PHONE_NUMBER>9. It should return 2 in PL_VALUE column.
Correct:
>>> cond = pc.and_(pc.equal(large['ID_DEV_STYLECOLOR_SIZE'],<PHONE_NUMBER>9), pc.equal(large['ID_DEPARTMENT'],<PHONE_NUMBER>9))
>>> filtered = large.filter(cond)
>>> print(filtered)
pyarrow.Table
ID_DEV_STYLECOLOR_SIZE: int64
ID_DEPARTMENT: int64
ID_COLLECTION: int64
PL_VALUE: int64
----
ID_DEV_STYLECOLOR_SIZE: [[88506230299]]
ID_DEPARTMENT: [[16556030299]]
ID_COLLECTION: [[11240299]]
PL_VALUE: [[2]]
>
Apply sum(PL_VALUE) and it should return 58360744
No:
>>> sum = pc.sum(large['PL_VALUE'])
>>> print(sum)
461379027
That's just to eliminate 'false positive'. I mentioned that I tested on different versions and it sometimes caused a silent wrong answer even though there were no seg.fault.
Hmm, I think we should only focus on v18.1.0. As I mentioned, there are a lot of fixes ever since, so the behavior in prior versions will vary for sure, and I think most of the issues (if not all) are already addressed.
If all above is correct, might the segfault error be caused by any system/os settings?
I also verified on my Intel MBP (I just realized that we have x86-specialized SIMD code path for hash join so I wanted to see if the issue was there), but still unable to reproduce. And your setup doesn't seem to have any particular thing to do with this issue.
To proceed with the debugging:
Did you run my python script on your env to see if it runs into segfault? (And in case it doesn't, would you kindly help to fix it to make the segfault happen?) I think this is quite essential, because we need to agree on a minimal reproducible case (at least on either env of us). Then I can ask some other people to help verifying on broader environments.
Would you help to confirm the difference of sum(PL_VALUE) in my run (461379027) against yours (58360744)?
What is your CPU model?
In your original run of segfault (again, on v18.1.0), is it always reproducible or by chance?
Debugging this kind of issue is tricky and takes time and communication. I really appreciate your patience @kolfild26 , thank you!
2️⃣ I meant filter() and sum() to be applied to the resulting table, i.e. join, while you applied them to large.
3️⃣ Intel(R) Xeon(R) Gold 6246 CPU @ 3.30GHz. 4 sockets * 12 cores = 48 logical cpus
1️⃣ 4️⃣ Yes, the segfault always occurs, given the fixed size of the input tables. All recent tests I refer to are on v18.1.0.
I can now confirm that the problem does exist.
By applying filter and sum on the join result, I found my previous non-segfault runs were false positives:
join = small.join(large, keys=['ID_DEV_STYLECOLOR_SIZE', 'ID_DEPARTMENT', 'ID_COLLECTION'], join_type='left outer')
print("join size: {0}".format(join.num_rows))
cond = pc.and_(pc.equal(join['ID_DEV_STYLECOLOR_SIZE'],<PHONE_NUMBER>9), pc.equal(join['ID_DEPARTMENT'],<PHONE_NUMBER>9))
filtered = join.filter(cond)
print("filtered")
print(filtered)
sum = pc.sum(join['PL_VALUE'])
print("sum")
print(sum)
Result:
filtered: PL_VALUE: [[null]]
...
sum: 33609597 # Another run emits 33609997
And I also happen to have access to a x86 Ubuntu desktop, on which I reproduced the segfault.
I'm now digging into it.
Also, considering the silent wrong answer on some platforms, I'm marking this issue critical.
Thanks a lot @kolfild26 for helping me reproduce the issue!
|
2025-04-01T06:37:51.789818
| 2023-11-14T07:55:57
|
1992182622
|
{
"authors": [
"Tom-Newton",
"kou"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3500",
"repo": "apache/arrow",
"url": "https://github.com/apache/arrow/pull/38708"
}
|
gharchive/pull-request
|
GH-38699: [C++][FS][Azure] Implement CreateDir()
Rationale for this change
It seems that we can't create a directory explicitly without hierarchical namespace support.
It seems that Azure Blob Storage supports only virtual directories. There are no real directories. If a file (blob) name has "/", it's treated as if the file (blob) exists under a virtual directory.
It seems that Azure Data Lake Storage Gen2 supports a real directory.
See also:
https://learn.microsoft.com/en-us/azure/storage/blobs/storage-blobs-introduction
What changes are included in this PR?
This change chooses the following behavior:
Container can be created with/without hierarchical namespace support.
Directory can be created with hierarchical namespace support.
Directory can't be created without hierarchical namespace support. So do nothing without hierarchical namespace support. (arrow::Status::OK() is just returned.)
Are these changes tested?
Azurite doesn't support hierarchical namespace yet. So I can't test the implementation for hierarchical namespace yet. Sorry.
Are there any user-facing changes?
Yes.
Closes: #38699
@Tom-Newton What do you think about this behavior?
This change chooses the following behavior:
Container can be created with/without hierarchical namespace support.
Directory can be created with hierarchical namespace support.
Directory can't be created without hierarchical namespace support. (arrow::Status::NotImplemented is returned for this case.)
Do you have any (simple) document how to setup an account for AzureHierarchicalNamespaceFileSystemTest?
What do you think about this behavior?
The only part I'm questioning is whether to return arrow::Status::NotImplemented on flat blob storage accounts. Possibly it would be better just to return a success status without doing anything.
I think the right choice will depend on how much CreateDir() is used. For example if CreateDir() is used every time arrow writes a partitioned parquet table, then returning an error status could be a bit of a problem.
Do you have any (simple) document how to setup an account for AzureHierarchicalNamespaceFileSystemTest?
You will need an Azure account. You should be able to create a free account at https://azure.microsoft.com/en-gb/free/. You should then be able to create a storage account through the portal web UI.
I can probably write a more specific doc if needed but this is Azure's doc https://learn.microsoft.com/en-us/azure/storage/blobs/create-data-lake-storage-account
A few suggestions on configuration:
Use Standard general-purpose v2 not premium
Use LRS redundancy
Obviously you will want to enable hierarchical namespace.
Set the default access tier to hot
SFTP, NFS and file shares are not required.
The only part I'm questioning is whether to return arrow::Status::NotImplemented on flat blob storage accounts. Possibly it would be better just to return a success status without doing anything.
I think the right choice will depend on how much CreateDir() is used. For example if CreateDir() is used every time arrow writes a partitioned parquet table, then returning an error status could be a bit of a problem.
Good point! I'll change the behavior to just return arrow::Status::OK().
Do you have any (simple) document how to setup an account for AzureHierarchicalNamespaceFileSystemTest?
You will need an azure account. You should be able to create a free account at https://azure.microsoft.com/en-gb/free/. You should the. Be able to create a storage account through the portal web UI.
I can probably write a more specific doc if needed but this is Azure's doc https://learn.microsoft.com/en-us/azure/storage/blobs/create-data-lake-storage-account
A few suggestions on configuration:
...
Thanks! I could create an account and confirm that the implementation passes the added tests.
I'll add the provided information as a comment for other developers.
I've updated.
I'll merge this.
|
2025-04-01T06:37:51.793298
| 2024-01-09T15:56:58
|
2072664306
|
{
"authors": [
"jorisvandenbossche",
"raulcd"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3501",
"repo": "apache/arrow",
"url": "https://github.com/apache/arrow/pull/39538"
}
|
gharchive/pull-request
|
GH-39537: [Packaging][Python] Add a numpy<2 pin to the install requirements for the 15.x release branch
Rationale for this change
PyArrow wheels for the 15.0.0 release will not be compatible with future numpy 2.0 packages, therefore it is recommended to add this upper pin now for releases. We will keep the more flexible pin on the development branch (by reverting this commit on main, but so it can be cherry-picked in the release branch)
Closes: #39537
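For readers unfamiliar with how such a pin is expressed, here is a hedged setuptools sketch; the package name and lower bound are placeholders, not the actual Arrow packaging code.

from setuptools import setup

setup(
    name="pyarrow",  # placeholder; illustration only
    install_requires=[
        # Wheels are built against numpy 1.x, so exclude numpy 2.0 for now.
        "numpy >= 1.16.6, < 2",
    ],
)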
@github-actions crossbow submit -g wheel -g python
I think the macOS wheel failures are unrelated and started happening yesterday, probably due to: https://github.com/apache/arrow/pull/39065 being merged. I'll investigate further and open an issue.
We can remove this pin once we build our wheels with NumPy 2, right?
Yes, indeed.
|
2025-04-01T06:37:51.802239
| 2022-06-24T10:34:33
|
1283581668
|
{
"authors": [
"chiransiriwardhana",
"manuzhang"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3502",
"repo": "apache/beam",
"url": "https://github.com/apache/beam/issues/22040"
}
|
gharchive/issue
|
What is the version of beam-runners-flink-1.11 which supports JDK 17
What needs to happen?
Need to know the version of beam-runners-flink-1.11 that supports JDK 17.
Issue Priority
Priority: 1
Issue Component
Component: beam-community
Like #22041, please ask on <EMAIL_ADDRESS>
|
2025-04-01T06:37:51.809636
| 2024-03-07T20:20:00
|
2174715374
|
{
"authors": [
"Abacn",
"AdalbertMemSQL"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3503",
"repo": "apache/beam",
"url": "https://github.com/apache/beam/issues/30564"
}
|
gharchive/issue
|
[Failing Test]: PostCommit Java SingleStoreIO IT failing
What happened?
Since Jan 31, 2024
The "Install Singlestore cluster" step fails:
Run kubectl apply -f /runner/_work/beam/beam/.test-infra/kubernetes/singlestore/sdb-cluster.yaml
memsqlcluster.memsql.com/sdb-cluster created
error: timed out waiting for the condition on memsqlclusters/sdb-cluster
Error: Process completed with exit code 1.
Performance test also failing
Issue Failure
Failure: Test is continually failing
Issue Priority
Priority: 2 (backlog / disabled test but we think the product is healthy)
Issue Components
[ ] Component: Python SDK
[ ] Component: Java SDK
[ ] Component: Go SDK
[ ] Component: Typescript SDK
[ ] Component: IO connector
[ ] Component: Beam YAML
[ ] Component: Beam examples
[ ] Component: Beam playground
[ ] Component: Beam katas
[ ] Component: Website
[ ] Component: Spark Runner
[ ] Component: Flink Runner
[ ] Component: Samza Runner
[ ] Component: Twister2 Runner
[ ] Component: Hazelcast Jet Runner
[ ] Component: Google Cloud Dataflow Runner
workload logs:
Status: Pods have warnings. node-sdb-cluster-master
2024-03-07 15:19:31.496 EST
✓ Created node with node ID A40217C2599E6693E3D37C2BCB195DA378E230AA
2024-03-07 15:19:31.908 EST
memsqlctl will perform the following actions:
2024-03-07 15:19:31.908 EST
· Update configuration setting on node with node ID A40217C2599E6693E3D37C2BCB195DA378E230AA on port 3306
2024-03-07 15:19:31.908 EST
- Update node config file with setting minimum_core_count=0
2024-03-07 15:19:31.908 EST
{}
2024-03-07 15:19:31.908 EST
Would you like to continue? [Y/n]:
2024-03-07 15:19:31.908 EST
Automatically selected yes, non-interactive mode enabled
...
2024-03-07 15:19:38.746 EST
2024-03-07 20:19:38.746 INFO: Thread 115121 (ntid 225, conn id -1): memsqld_main: Flavor: 'production'
2024-03-07 15:19:38.756 EST
2024-03-07 20:19:38.756 ERROR: Thread 115104 (ntid 361, conn id -1): Run: Error getting cluster database
2024-03-07 15:19:38.756 EST
2024-03-07 20:19:38.756 ERROR: Thread 115104 (ntid 361, conn id -1): Run: Error getting cluster database
2024-03-07 15:19:38.756 EST
2024-03-07 20:19:38.756 ERROR: Thread 115104 (ntid 361, conn id -1): Run: Error getting cluster database
2024-03-07 15:19:38.756 EST
2024-03-07 20:19:38.756 ERROR: Thread 115104 (ntid 361, conn id -1): Run: Error getting cluster database
2024-03-07 15:19:38.757 EST
2024-03-07 20:19:38.757 INFO: Thread 115121 (ntid 225, conn id -1): CreateDatabase: CREATE DATABASE `memsql` with sync durability / sync input durability, 0 partitions, 0 sub partitions, 0 logical partitions, log file size 16777216.
...
2024-03-07 15:19:40.691 EST
Started singlestore (199)
2024-03-07 15:19:40.694 EST
Ensuring the root password is setup
2024-03-07 15:19:40.787 EST
Error 2277: This node is not part of the cluster.
2024-03-07 15:19:40.845 EST
2024-03-07 20:19:40.845 INFO: Thread 115120 (ntid 344, conn id -1): OnAsyncCompileCompleted: Query information_schema.'SELECT 1' submitted 177 milliseconds ago, queued for 17 milliseconds, compiled asynchronously in 160 milliseconds
2024-03-07 15:19:40.847 EST
2024-03-07 20:19:40.847 ERROR: [0 messages suppressed] ProcessHandshakeResponsePacket() failed. Sending back 1045: Access denied for user 'root'@'localhost' (using password: NO)
...
2024-03-07 15:19:41.181 EST
2024-03-07 20:19:41.181 INFO: Thread 115120 (ntid 344, conn id -1): OnAsyncCompileCompleted: Query (null).'SELECT @@MEMSQL_VERSION' submitted 133 milliseconds ago, queued for 17 milliseconds, compiled asynchronously in 116 milliseconds
2024-03-07 15:19:50.497 EST
Error 2277: This node is not part of the cluster.
2024-03-07 15:20:00.494 EST
Error 2277: This node is not part of the cluster.
2024-03-07 15:20:10.569 EST
Error 2277: This node is not part of the cluster.
2024-03-07 15:20:20.496 EST
Error 2277: This node is not part of the cluster.
2024-03-07 15:20:30.496 EST
Error 2277: This node is not part of the cluster.
Status: Pods have warnings. node-sdb-cluster-leaf-ag1
2024-03-07 15:19:49.683 EST
Initializing OpenSSL 1.0.2u-fips 20 Dec 2019
2024-03-07 15:19:49.688 EST
ERROR 2277 (HY000) at line 1: This node is not part of the cluster.
2024-03-07 15:19:49.704 EST
[2024-03-07 20:19:49 startup-probe] Aborting due to query failure
2024-03-07 15:19:49.808 EST
2024-03-07 20:19:49.808 INFO: Thread 115120 (ntid 388, conn id -1): OnAsyncCompileCompleted: Query (null).'select @@version_comment limit 1' submitted 142 milliseconds ago, queued for 17 milliseconds, compiled asynchronously in 125 milliseconds
2024-03-07 15:19:54.664 EST
ERROR 2277 (HY000) at line 1: This node is not part of the cluster.
2024-03-07 15:19:54.669 EST
[2024-03-07 20:19:54 startup-probe] Aborting due to query failure
...
2024-03-07 15:24:34.665 EST
ERROR 2277 (HY000) at line 1: This node is not part of the cluster.
2024-03-07 15:24:34.671 EST
[2024-03-07 20:24:34 startup-probe] Aborting due to query failure
2024-03-07 15:24:39.661 EST
ERROR 2277 (HY000) at line 1: This node is not part of the cluster.
2024-03-07 15:24:39.667 EST
[2024-03-07 20:24:39 startup-probe] Aborting due to query failure
2024-03-07 15:24:42.829 EST
2024-03-07 20:24:42.829 ERROR: Thread 115101 (ntid 408, conn id -1): Run: Error getting cluster database
2024-03-07 15:24:42.829 EST
2024-03-07 20:24:42.829 ERROR: Thread 115103 (ntid 406, conn id -1): Run: Error getting cluster database
2024-03-07 15:24:42.829 EST
2024-03-07 20:24:42.829 ERROR: Thread 115104 (ntid 405, conn id -1): Run: Error getting cluster database
The k8s configuration hasn't been changed for months, but the cluster has suddenly been failing to create since Jan 31. CC: @AdalbertMemSQL, the author
Hey @Abacn
Is it possible to somehow retrieve full workload logs?
Fixed by #30725
|
2025-04-01T06:37:51.817490
| 2024-11-04T21:22:45
|
2633874265
|
{
"authors": [
"damccorm"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3504",
"repo": "apache/beam",
"url": "https://github.com/apache/beam/issues/33012"
}
|
gharchive/issue
|
[Task]: Improve Enrichment docs
What needs to happen?
There are a few targeted fixes needed for the Enrichment docs (a usage sketch follows the list):
https://beam.apache.org/documentation/transforms/python/elementwise/enrichment/ should mention BigQuery
https://beam.apache.org/documentation/transforms/python/elementwise/enrichment/ should describe how we do batching and how we do caching (with_redis_cache)
https://github.com/apache/beam/blob/master/examples/notebooks/beam-ml/bigtable_enrichment_transform.ipynb and https://github.com/apache/beam/blob/master/examples/notebooks/beam-ml/vertex_ai_feature_store_enrichment.ipynb should elaborate a little more on what a cross-join means in this context (maybe a picture would be nice?)
In https://github.com/apache/beam/blob/master/examples/notebooks/beam-ml/bigtable_enrichment_transform.ipynb, the link to the example handler with composite row key support is dead
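A hedged sketch of the usage the docs should cover, showing the handler, the transform, and the with_redis_cache hook; the Bigtable identifiers and Redis endpoint below are placeholders, not real resources.

import apache_beam as beam
from apache_beam.transforms.enrichment import Enrichment
from apache_beam.transforms.enrichment_handlers.bigtable import BigTableEnrichmentHandler

# Placeholders: point these at a real project/instance/table and Redis host.
handler = BigTableEnrichmentHandler(
    project_id='my-project',
    instance_id='my-instance',
    table_id='my-table',
    row_key='product_id',  # input field used as the Bigtable row key
)

with beam.Pipeline() as p:
    enriched = (
        p
        | beam.Create([beam.Row(product_id='p1', quantity=2)])
        # The transform batches lookups internally; with_redis_cache adds caching.
        | Enrichment(handler).with_redis_cache('localhost', 6379)
    )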
Issue Priority
Priority: 2 (default / most normal work should be filed as P2)
Issue Components
[X] Component: Python SDK
[ ] Component: Java SDK
[ ] Component: Go SDK
[ ] Component: Typescript SDK
[ ] Component: IO connector
[ ] Component: Beam YAML
[ ] Component: Beam examples
[ ] Component: Beam playground
[ ] Component: Beam katas
[X] Component: Website
[ ] Component: Infrastructure
[ ] Component: Spark Runner
[ ] Component: Flink Runner
[ ] Component: Samza Runner
[ ] Component: Twister2 Runner
[ ] Component: Hazelcast Jet Runner
[ ] Component: Google Cloud Dataflow Runner
@claudevdm this would be good to pick up at some point when you have space (don't drop other things, just when this fits in nicely)
|
2025-04-01T06:37:51.855415
| 2020-03-31T01:30:19
|
590715267
|
{
"authors": [
"TheNeuralBit",
"apilloud",
"robinyqiu"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3505",
"repo": "apache/beam",
"url": "https://github.com/apache/beam/pull/11272"
}
|
gharchive/pull-request
|
[BEAM-9641] Support ZetaSQL DATE type as a Beam LogicalType
This PR adds support of all ZetaSQL (BigQuery Standard SQL) DATE functions to BeamSQL:
CURRENT_DATE
EXTRACT
DATE (constructing DATE from DATETIME not supported)
DATE_ADD
DATE_SUB
DATE_DIFF
DATE_TRUNC
FORMAT_DATE
PARSE_DATE
UNIX_DATE
DATE_FROM_UNIX_DATE
WEEK part not supported
r: @apilloud
cc: @TheNeuralBit @kennknowles
Thank you for your contribution! Follow this checklist to help us incorporate your contribution quickly and easily:
[ ] Choose reviewer(s) and mention them in a comment (R: @username).
[ ] Format the pull request title like [BEAM-XXX] Fixes bug in ApproximateQuantiles, where you replace BEAM-XXX with the appropriate JIRA issue, if applicable. This will automatically link the pull request to the issue.
[ ] Update CHANGES.md with noteworthy changes.
[ ] If this contribution is large, please file an Apache Individual Contributor License Agreement.
See the Contributor Guide for more tips on how to make review process smoother.
What do you think about going ahead and defining the date logical type in org.apache.beam.sdk.schemas.logicaltypes? It would be useful in other contexts - for example it would give us something to map Avro's logical date type to (currently it is just overloaded with millis-instant onto DATETIME)
cc: @reuvenlax
What do you think about going ahead and defining the date logical type in org.apache.beam.sdk.schemas.logicaltypes? It would be useful in other contexts - for example it would give us something to map Avro's logical date type to (currently it is just overloaded with millis-instant onto DATETIME)
Done. Thanks for the suggestion. I made the Date type a public logical type in org.apache.beam.sdk.schemas.logicaltypes and added a layer of indirection by letting SqlTypes.DATE reference it.
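To make the logical-type idea concrete: a DATE logical type maps a calendar date onto an integer base representation, days since the Unix epoch. A minimal Python sketch of that round trip follows; it is illustrative only, since the Beam type itself is Java and works on java.time.LocalDate.

import datetime

EPOCH = datetime.date(1970, 1, 1)

def to_base(d: datetime.date) -> int:
    # Logical value (a date) -> base representation (days since epoch).
    return (d - EPOCH).days

def to_logical(days: int) -> datetime.date:
    # Base representation -> logical value.
    return EPOCH + datetime.timedelta(days=days)

assert to_logical(to_base(datetime.date(2020, 3, 31))) == datetime.date(2020, 3, 31)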
Oops, forgot to include in my comments: ZetaSQL's range is much smaller than the underlying type, can you add a test or two for that? How do out of range values fail? (Also worth asking, do we need any special treatment for boundary conditions (LocalDate.MIN, LocalDate.MAX)? Probably not for now.)
Ah, just realized that the previous comments were not sent out.
Could you help trigger the tests again?
For the comment on range: Thanks for pointing it out. I overlooked this problem. I would like to create a separate PR to address it, along with range testing for other types as well.
retest this please
The failing test SparkPortableExecutionTest.testExecution should be unrelated to this change.
Run Java PreCommit
Run SQL Postcommit
Rebased against master. Please run precommit tests again.
retest this please
retest this please
Java PreCommit failed due to a build failure. Please help run it again.
Run Java PreCommit
Run Java PreCommit
Run Java PreCommit
Something just occurred to me - are there any tests that use the DATE Type in an aggregation (e.g. MAX)?
I'd think that would run into the same issue I have in #11456
Interesting question. You should probably add a test for JOIN as well, which will have a similar class of problems.
are there any tests that use the DATE Type in an aggregation (e.g. MAX)?
No. Thanks for bringing this up. I think it is likely to run into the problem.
|
2025-04-01T06:37:51.899921
| 2020-07-24T06:32:54
|
664949338
|
{
"authors": [
"aromanenko-dev",
"ibzib",
"jayendra13",
"kennknowles",
"tysonjh"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3506",
"repo": "apache/beam",
"url": "https://github.com/apache/beam/pull/12366"
}
|
gharchive/pull-request
|
[BEAM-10572] Eliminate nullability errors from :sdks:java:extensions:sql:datacatalog
Fixing the nullability issues for sub-module sdks:java:extensions:sql:datacatalog
Thank you for your contribution! Follow this checklist to help us incorporate your contribution quickly and easily:
[x] Choose reviewer(s) and mention them in a comment (R: @username).
[ ] Format the pull request title like [BEAM-XXX] Fixes bug in ApproximateQuantiles, where you replace BEAM-XXX with the appropriate JIRA issue, if applicable. This will automatically link the pull request to the issue.
[ ] Update CHANGES.md with noteworthy changes.
[ ] If this contribution is large, please file an Apache Individual Contributor License Agreement.
See the Contributor Guide for more tips on how to make review process smoother.
Run SQL_Java11 PreCommit
Run SQL_Java11 PreCommit
Hi Jayendra, thanks for volunteering to fix these issues.
The SQL_Java11 precommit failure doesn't look like a flake, it seems like this change introduced an error. Please take a look.
05:09:22 > Task :sdks:java:extensions:sql:datacatalog:compileTestJava FAILED
05:09:22 error: option -Xbootclasspath/p: cannot be used together with --release
05:09:22 Usage: javac <options> <source files>
05:09:22 use --help for a list of possible options
Hi Jayendra, thanks for volunteering to fix these issues.
The SQL_Java11 precommit failure doesn't look like a flake, it seems like this change introduced an error. Please take a look.
05:09:22 > Task :sdks:java:extensions:sql:datacatalog:compileTestJava FAILED
05:09:22 error: option -Xbootclasspath/p: cannot be used together with --release
05:09:22 Usage: javac <options> <source files>
05:09:22 use --help for a list of possible options
When I run that task locally I don't get any error.
jayendra@alienware:~/beam$ ./gradlew :sdks:java:extensions:sql:datacatalog:compileTestJava
To honour the JVM settings for this build a new JVM will be forked. Please consider using the daemon: https://docs.gradle.org/5.2.1/userguide/gradle_daemon.html.
Daemon will be stopped at the end of the build stopping after processing
Configuration on demand is an incubating feature.
Deprecated Gradle features were used in this build, making it incompatible with Gradle 6.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/5.2.1/userguide/command_line_interface.html#sec:command_line_warnings
BUILD SUCCESSFUL in 42s
73 actionable tasks: 37 executed, 1 from cache, 35 up-to-date
Is it something specific to the Java version?
Ah, yes. To run the same commands as the Java11 builds you must set some properties. For example on Mac:
git checkout github/pr/12366 # I have fetch spec set up like this
./gradlew \
-PcompileAndRunTestsWithJava11 \
-Pjava11home=/Library/Java/JavaVirtualMachines/jdk-11-latest/Contents/Home \
:sdks:java:extensions:sql:datacatalog:compileTestJava
I actually got a different failure: https://gradle.com/s/gegzdydsbu2si
Execution failed for task ':sdks:java:extensions:sql:datacatalog:compileTestJava'.
> release version 11 not supported
That seems a bit odd. I think I did not reproduce the problem properly or there are differences between my JDK11 and the one on Jenkins.
CC @tysonjh as we talked the other day about whether it mattered to make these unique Gradle targets versus just configuring these properties in the Jenkins job.
I expect this is a conflict between checkerframework and Java 11. Java 9+ has been supported since 3.x, but it seems the plugin might rely on the -Xbootclasspath/p option, which has been removed. It may be that there is something specific to Gradle and how it uses a forked JVM in order to compile and run with Java 11.
Since checker is only needed for static analysis of the source, it can be disabled for Java 11 builds.
According to https://github.com/kelloggm/checkerframework-gradle-plugin/issues/43#issuecomment-551104939 it should be OK. I will submit a report or ask on the user list.
Ah, yes. To run the same commands as the Java11 builds you must set some properties. For example on mac:
git checkout github/pr/12366 # I have fetch spec set up like this
./gradlew \
-PcompileAndRunTestsWithJava11 \
-Pjava11home=/Library/Java/JavaVirtualMachines/jdk-11-latest/Contents/Home \
:sdks:java:extensions:sql:datacatalog:compileTestJava
I actually got a different failure: https://gradle.com/s/gegzdydsbu2si
Execution failed for task ':sdks:java:extensions:sql:datacatalog:compileTestJava'.
> release version 11 not supported
That seems a bit odd. I think I did not reproduce the problem properly or there are differences between my JDK11 and the one on Jenkins.
CC @tysonjh as we talked the other day about whether it mattered to make these unique Gradle targets versus just configuring these properties in the Jenkins job.
When I run with Java 11 I use the following (example from a different task):
./gradlew clean
./gradlew -Dorg.gradle.java.home=/usr/local/buildtools/java/jdk11 :runners:direct-java:validatesRunner --scan
Tyson's command will use JDK11 also for the main gradle task, while still having source and target Java version 8. Since checker does support Java 11 and is aware of the removal of -Xbootclasspath/p perhaps that approach causes the plugin to configure flags appropriately.
Confirmed that Tyson's command worked.
Yup: https://github.com/kelloggm/checkerframework-gradle-plugin/blob/6739a86cf030ab35634a2b0ab6ac8859fe835473/src/main/groovy/org/checkerframework/gradle/plugin/CheckerFrameworkPlugin.groovy#L373
Filed kelloggm/checkerframework-gradle-plugin#117. We don't have to wait for a fix, though. We can just disable checker for Java 11 for now, and we will probably at some point switch to Tyson's invocation for most tests - compiling and running on JRE11 but with Java 8 settings.
Filed kelloggm/checkerframework-gradle-plugin#117. We don't have to wait for a fix, though. We can just disable checker for Java 11 for now, and we will probably at some point switch to Tyson's invocation for most tests - compiling and running on JRE11 but with Java 8 settings.
Disable the checker (for Java 11) for the whole project or just for this package?
Filed kelloggm/checkerframework-gradle-plugin#117. We don't have to wait for a fix, though. We can just disable checker for Java 11 for now, and we will probably at some point switch to Tyson's invocation for most tests - compiling and running on JRE11 but with Java 8 settings.
Disable the checker (for Java 11) for the whole project or just for this package?
Disable the checker for the whole project.
Can someone point me to how to disable that? I know we have to modify some Groovy script under .test-infra/jenkins, but I don't know exactly where or how.
Here: https://github.com/apache/beam/blob/de8ff705145cbbc41bea7750a0a5d3553924ab3a/buildSrc/src/main/groovy/org/apache/beam/gradle/BeamModulePlugin.groovy#L763
This block should be skipped if compileAndRunTestsWithJava11 is set and Gradle's configuration phase runs using JDK8. I think it would be fine to just skip that block whenever compileAndRunTestsWithJava11 is set.
It also looks like -PskipCheckerFramework is supported by kelloggm/checkerframework-gradle-plugin so you can just add that flag to the Jenkins job. That is probably best.
Run SQL_Java11 PreCommit
It also looks like -PskipCheckerFramework is supported by kelloggm/checkerframework-gradle-plugin so you can just add that flag to the Jenkins job. That is probably best.
It was already there, so I just triggered the job.
|
2025-04-01T06:37:51.937759
| 2021-01-07T19:30:21
|
781554351
|
{
"authors": [
"TheNeuralBit",
"kennknowles"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3507",
"repo": "apache/beam",
"url": "https://github.com/apache/beam/pull/13694"
}
|
gharchive/pull-request
|
Remove redundant or inappropriate pieces of the capability matrix
Each commit is an independent change that removes something that should not be in the capability matrix:
runners that are on branches and not released
features that are not designed and may even require inventing new things, or may be impossible
redundant columns
Thank you for your contribution! Follow this checklist to help us incorporate your contribution quickly and easily:
[x] Choose reviewer(s) and mention them in a comment (R: @username).
[x] Format the pull request title like [BEAM-XXX] Fixes bug in ApproximateQuantiles, where you replace BEAM-XXX with the appropriate JIRA issue, if applicable. This will automatically link the pull request to the issue.
[x] Update CHANGES.md with noteworthy changes.
[x] If this contribution is large, please file an Apache Individual Contributor License Agreement.
See the Contributor Guide for more tips on how to make review process smoother.
GitHub Actions Tests Status (on master branch)
See CI.md for more information about GitHub Actions CI.
http://apache-beam-website-pull-requests.storage.googleapis.com/13694/documentation/runners/capability-matrix/index.html
http://apache-beam-website-pull-requests.storage.googleapis.com/13694/documentation/runners/capability-matrix/index.html
@griscz wdyt?
@griscz wdyt?
Let me know when it is a good time to rebase and fix this one up.
the website revamp is done, you should be fine to rebase now.
Done. PTAL. Incidentally, the hack to add /index.html to all the URLs seems to be gone (needed to browse seamlessly on GCS staging).
Incidentally, the hack to add /index.html to all the URLs seems to be gone (needed to browse seemlessly on GCS staging).
I think this only happened on the runner detail links for some reason. I filed BEAM-11860 for it
|
2025-04-01T06:37:51.945543
| 2022-09-28T22:47:49
|
1390027142
|
{
"authors": [
"rohdesamuel"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3508",
"repo": "apache/beam",
"url": "https://github.com/apache/beam/pull/23409"
}
|
gharchive/pull-request
|
Add documentation link to the interactive environment
Part of a documentation audit to add relevant gcloud docs links.
Thank you for your contribution! Follow this checklist to help us incorporate your contribution quickly and easily:
[ ] Choose reviewer(s) and mention them in a comment (R: @username).
[ ] Mention the appropriate issue in your description (for example: addresses #123), if applicable. This will automatically add a link to the pull request in the issue. If you would like the issue to automatically close on merging the pull request, comment fixes #<ISSUE NUMBER> instead.
[ ] Update CHANGES.md with noteworthy changes.
[ ] If this contribution is large, please file an Apache Individual Contributor License Agreement.
See the Contributor Guide for more tips on how to make review process smoother.
To check the build health, please visit https://github.com/apache/beam/blob/master/.test-infra/BUILD_STATUS.md
GitHub Actions Tests Status (on master branch)
See CI.md for more information about GitHub Actions CI.
R: @KevinGG
|
2025-04-01T06:37:51.955345
| 2022-10-17T15:05:57
|
1411749349
|
{
"authors": [
"eantyshev",
"olehborysevych"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3509",
"repo": "apache/beam",
"url": "https://github.com/apache/beam/pull/23664"
}
|
gharchive/pull-request
|
[Playground] Examples CD
Examples CD optimization
The main feature compared to the old workflow is running the examples on the production backend. This ensures that examples really work after deployment, and saves resources on building a temporary runner.
Introduces the BEAM_USE_WEBGRPC variable in grpc_client.py. It enables the WEBGRPC protocol instead of GRPC, to access public production endpoints (see the sketch below).
Reusable workflow to optimize GH workflows
addresses #23463
addresses #23464
addresses #23465
See an example CD run of this job with the necessary amendments (private GCP credentials, push GH triggers)
Python passes and Go doesn't: known issue #23600
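A hedged sketch of how such a toggle can be wired up, assuming the third-party sonora library for gRPC-Web; the function and endpoint names are illustrative, not the actual grpc_client.py code.

import os
import grpc
import sonora.client  # third-party gRPC-Web client, assumed here

def make_channel(endpoint: str):
    if os.environ.get('BEAM_USE_WEBGRPC'):
        # gRPC-Web over HTTPS, suitable for the public production endpoints.
        return sonora.client.insecure_web_channel(f'https://{endpoint}')
    return grpc.insecure_channel(endpoint)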
Thank you for your contribution! Follow this checklist to help us incorporate your contribution quickly and easily:
[ ] Choose reviewer(s) and mention them in a comment (R: @username).
[ ] Mention the appropriate issue in your description (for example: addresses #123), if applicable. This will automatically add a link to the pull request in the issue. If you would like the issue to automatically close on merging the pull request, comment fixes #<ISSUE NUMBER> instead.
[ ] Update CHANGES.md with noteworthy changes.
[ ] If this contribution is large, please file an Apache Individual Contributor License Agreement.
See the Contributor Guide for more tips on how to make review process smoother.
To check the build health, please visit https://github.com/apache/beam/blob/master/.test-infra/BUILD_STATUS.md
GitHub Actions Tests Status (on master branch)
See CI.md for more information about GitHub Actions CI.
lgtm
|
2025-04-01T06:37:51.963279
| 2023-12-19T16:49:48
|
2049051281
|
{
"authors": [
"damccorm"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3510",
"repo": "apache/beam",
"url": "https://github.com/apache/beam/pull/29821"
}
|
gharchive/pull-request
|
Build and publish multi-arch wheels separately from main wheels
Right now, building wheels is failing because it tries to upload an artifact for the multi-arch wheels with the same name as the regular wheels. Previously this worked silently and one was dropped. This fixes that issue by appending -aarch64 to the multi-arch wheel artifact name and makes sure that it gets uploaded to GCS as well.
Example succeeding now - https://github.com/apache/beam/actions/runs/7266076148
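The naming fix itself is tiny; a sketch of the idea follows (the function name and arch labels are illustrative, not the actual workflow code).

def wheel_artifact_name(base: str, arch: str) -> str:
    # Give aarch64 wheels a distinct artifact name so they no longer
    # collide with (and silently drop) the x86_64 artifact.
    return f"{base}-aarch64" if arch == "aarch64" else base

assert wheel_artifact_name("wheelhouse", "x86_64") == "wheelhouse"
assert wheel_artifact_name("wheelhouse", "aarch64") == "wheelhouse-aarch64"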
Thank you for your contribution! Follow this checklist to help us incorporate your contribution quickly and easily:
[ ] Mention the appropriate issue in your description (for example: addresses #123), if applicable. This will automatically add a link to the pull request in the issue. If you would like the issue to automatically close on merging the pull request, comment fixes #<ISSUE NUMBER> instead.
[ ] Update CHANGES.md with noteworthy changes.
[ ] If this contribution is large, please file an Apache Individual Contributor License Agreement.
See the Contributor Guide for more tips on how to make review process smoother.
To check the build health, please visit https://github.com/apache/beam/blob/master/.test-infra/BUILD_STATUS.md
GitHub Actions Tests Status (on master branch)
See CI.md for more information about GitHub Actions CI or the workflows README to see a list of phrases to trigger workflows.
R: @jrmccluskey
|