| added (string, 2025-04-01 04:05:38 to 2025-04-01 07:14:06) | created (timestamp[us], 2001-10-09 16:19:16 to 2025-01-01 03:51:31) | id (string, length 4 to 10) | metadata (dict) | source (string, 2 classes) | text (string, length 0 to 1.61M) |
|---|---|---|---|---|---|
2025-04-01T04:35:39.087815
| 2016-03-22T14:22:32
|
142665069
|
{
"authors": [
"taeguk"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11250",
"repo": "taeguk/vim-taeguk",
"url": "https://github.com/taeguk/vim-taeguk/issues/3"
}
|
gharchive/issue
|
Apply "conversion of spacebars from tabs"
Current, tabs in document are stored intactly.
But I think that conversion of spacebars from tabs is better than now.
So I will modify .vimrc
fixed.
|
2025-04-01T04:35:39.097055
| 2018-01-11T04:45:53
|
287662573
|
{
"authors": [
"tahnik",
"undef314"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11251",
"repo": "tahnik/devRantron",
"url": "https://github.com/tahnik/devRantron/pull/239"
}
|
gharchive/pull-request
|
Added snap support
Hi! I put together this PR to add a snap package. I've tested it on Ubuntu 16.04, but it should work just as well on Ubuntu 17.10, Ubuntu 14.04, Linux Mint, Manjaro, Debian, OpenSUSE, Solus, etc.
If you merge and npm run dist it will create dist/devrantron_1.5.0_amd64.snap.
Copy this to a Linux system, enable snap support, then run:
sudo snap install devrantron_1.5.0_amd64.snap --dangerous
Run with devrantron or find it in the launcher.
If you create a developer account and push this to the Snap Store, it can be discovered and installed through GNOME Software and https://snapcraft.io/discover. To create the developer account, sign up here, then register the "devrantron" name.
You'll need the snapcraft command to push the snap file to the store.
If you're on a Mac, you can brew install snapcraft
If you're on Linux, it's sudo snap install --classic snapcraft
Then you can push it out with:
snapcraft push dist/devrantron_1.5.0_amd64.snap --release stable
@undef314 this has been added, thanks :)
|
2025-04-01T04:35:39.102241
| 2024-08-07T17:28:56
|
2453982766
|
{
"authors": [
"1001v",
"nsbarsukov"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11252",
"repo": "taiga-family/maskito",
"url": "https://github.com/taiga-family/maskito/issues/1437"
}
|
gharchive/issue
|
🚀 - add plugins option to maskitoPhoneOptionsGenerator
Which package(s) are relevant/related to the feature request?
@maskito/phone
Description
Hello. I would like to have the plugins field for maskitoPhoneOptionsGenerator options for passing extra plugins.
For instance, I need maskitoInitialCalibrationPlugin alongside maskitoPhoneOptionsGenerator. Of course I could just push a plugin to the MaskitoOptions object, but it's not very convenient since MaskitoOptions['plugins'] is a readonly array.
Thanks.
@1001v Hello!
Adding additional plugins to any built-in mask is easy and does not require adding additional properties.
const phoneOptions = maskitoPhoneOptionsGenerator({
    metadata,
    countryIsoCode: 'US',
});
const upgradedPhoneOptions: MaskitoOptions = {
    ...phoneOptions,
    plugins: [
        ...phoneOptions.plugins,
        maskitoInitialCalibrationPlugin()
    ],
};
See another example:
https://maskito.dev/addons/phone#focus-blur
@nsbarsukov, Hello. Thanks for pointing this out.
|
2025-04-01T04:35:39.110996
| 2023-09-13T15:06:47
|
1894738848
|
{
"authors": [
"adaki2004",
"dantaik"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11253",
"repo": "taikoxyz/taiko-mono",
"url": "https://github.com/taikoxyz/taiko-mono/pull/14689"
}
|
gharchive/pull-request
|
feat(protocol): Initial draft of hybrid rollup
Currently a draft that needs a LOT of refinement; just opened to show progress. Needs a code walk-through and a meeting with Daniel to align more.
The whole idea is about abstraction: in the code, we now have different proofs at different tiers, which may be optimistic or ZK, but in the core protocol we don't use OP or ZK. To configure each tier, we implement methods such as getXXXForTier(...)
|
2025-04-01T04:35:39.116124
| 2024-03-10T12:58:53
|
2177707470
|
{
"authors": [
"ssddOnTop",
"tusharmath"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11254",
"repo": "tailcallhq/tailcall",
"url": "https://github.com/tailcallhq/tailcall/pull/1362"
}
|
gharchive/pull-request
|
chore: drop compose command
Summary:
Briefly describe the changes made in this PR.
Issue Reference(s):
Fixes #1357
/claim 1357
Build & Testing:
[ ] I ran cargo test successfully.
[ ] I have run ./lint.sh --mode=fix to fix all linting issues raised by ./lint.sh --mode=check.
Checklist:
[ ] I have added relevant unit & integration tests.
[ ] I have updated the documentation accordingly.
[ ] I have performed a self-review of my code.
[ ] PR follows the naming convention of <type>(<optional scope>): <title>
Please update the docs by dropping the command from here https://tailcall.run/docs/guides/cli/#compose
Please update the docs by dropping the command from here https://tailcall.run/docs/guides/cli/#compose
https://github.com/tailcallhq/tailcallhq.github.io/pull/144
|
2025-04-01T04:35:39.122838
| 2024-06-04T17:35:02
|
2334045629
|
{
"authors": [
"aaomidi",
"ebarriosjr",
"ericpollmann",
"henworth",
"khernandezrt",
"talha5389-teraception"
],
"license": "BSD-3-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11255",
"repo": "tailscale/github-action",
"url": "https://github.com/tailscale/github-action/issues/130"
}
|
gharchive/issue
|
Tailscale step runs successfully but subsequent steps to connect to DB fail
We created the correct tags and set the scope to device.
The Tailscale step runs (I don't see any confirmation that we are connected), but the step to run my tests fails with
ERROR tests/mycode/code/test_my_code.py - sqlalchemy.exc.OperationalError: (pymysql.err.OperationalError) (2003, "Can't connect to MySQL server on 'mysqlserver.us-east-1.rds.amazonaws.com' (timed out)")
We also see the node being created in the Tailscale UI, but I keep getting a timeout when I run pytest.
name: Python application
on:
  push:
    branches: [ "feature/github-actions" ]
  pull_request:
    branches: [ "feature/github-actions" ]
env:
  AWS_CONFIG_FILE: .github/workflows/aws_config
  DB_NAME: "mydbname"
  DB_READ_SERVER: "mysqlserver.us-east-1.rds.amazonaws.com"
  DB_USERNAME: "root"
  DB_PASSWORD: ${{secrets.DB_PASSWORD}}
  AWS_PROFILE: "dev"
  API_VERSION: "v1"
  FRONT_END_KEY: ${{secrets.FRONT_END_KEY}}
  LOG_LEVEL: "INFO"
  DB_USER_ID: 32
  SENTRY_SAMPLE_RATE: 1
  NUMEXPR_MAX_THREADS: "8"
  LOG_LEVEL_CONSOLE: True
  LOG_LEVEL_ALGORITHM: "INFO"
  LOG_LEVEL_DB: "WARNING"
permissions:
  contents: read
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - name: Tailscale
        uses: tailscale/github-action@v2
        with:
          oauth-client-id: ${{ secrets.TS_OAUTH_CLIENT_ID }}
          oauth-secret: ${{ secrets.TS_OAUTH_SECRET }}
          tags: tag:cicd
      - uses: actions/checkout@v4
      - name: Set up Python 3.12
        uses: actions/setup-python@v3
        with:
          python-version: "3.12"
      - name: Install dependencies
        run: |
          pip install -r requirements-dev.txt
      - name: Test with pytest
        env:
          PYTHONPATH: ${{github.workspace}}/src
        run: |
          pytest
Switching the URL to a direct IP did the trick. Looks like a DNS issue.
I will leave this issue open as I'd prefer not to use a direct IP.
I'm encountering a similar timeout error, although doesn't seem to be DNS in my case as the IP is resolved properly:
Error: Error connecting to PostgreSQL server database.us-east-1.rds.amazonaws.com (scheme: awspostgres): dial tcp correct.ip.address:5432: connect: connection timed out
@henworth Have you setup your security policies correctly for your Tailscale instance?
@henworth Have you setup your security policies correctly for your Tailscale instance?
Yep, I've done all this. It was working fine and now I'm not sure what's wrong.
I also started having issues 2 weeks ago. I have also verified that things work fine outside of GitHub Actions using the same configuration.
I am having the same issue. It has been working perfectly so far but today I get random i/o timeouts.
Same here! I had random failures, especially on the first connection to our RDS instance (running in AWS) from a GitHub Actions worker (running in Azure). I did some debugging and found that the connection was going through DERP despite having an inbound WireGuard port open for IPv4/v6 on the AWS side.
I changed our workflow to first run a single ping to the subnet router's DNS hostname after bringing up Tailscale, and that seemed to dramatically improve reliability, though it still had 1 failure in 10 (that time it was the ping itself failing).
I then set up Split DNS and haven't had a failure since, though I've only had 10 or so runs since then.
I wonder if there's a propagation delay here? E.g. a new node comes up but doesn't propagate fast enough.
The stateful filtering is interesting, but it's disabled by default it seems.
@henworth can you describe what flags you changed? I think I'm seeing something similar to this but in the helm world this time.
At the time I wrote that comment the default was true, it has since been changed to false in a subsequent release.
|
2025-04-01T04:35:39.130907
| 2019-02-05T18:23:19
|
406913897
|
{
"authors": [
"adamwathan",
"hacknug"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11256",
"repo": "tailwindcss/tailwindcss",
"url": "https://github.com/tailwindcss/tailwindcss/pull/643"
}
|
gharchive/pull-request
|
Inherit background, text, and border colors from theme colors at merge level
This is an alternative to #639 which I think may be a better solution (or maybe not!)
Instead of each plugin internally knowing how to find the right values it should use from the theme, this PR pushes that logic to the existing configuration merging layer, so that there is always a "complete" config at the end of the day, instead of a config that is completely missing the backgroundColors, borderColors, and textColors keys.
Essentially this flattens/freezes any sort of inherited value at the merge layer, so that each plugin simply receives its configuration directly.
There are pros and cons to both approaches annoyingly, but at least today I am convinced this is the better approach.
My main argument for it internally is that I foresee a future where one day I split out all of the "framework generation" code from Tailwind into another project, like maybe tailwindcss/engine, and that project is a PostCSS plugin that only deals with plugins and has no concept of a default theme or default styles.
If that project existed, I would want to be able to use the utility plugins from Tailwind as plugins to the engine without there ever being weird errors about things like "key 'theme' not found" because the plugins are reaching up to the config looking for their values. If every plugin is configured explicitly from the outside in, they would all be straightforward to use in the engine context.
The only real con to this approach is that this merge layer could continue to grow and get more complicated if we introduce other "magic" shared keys for things like spacing or sizing. There is something admittedly nice about keeping all of the fallback/inheritance logic for a specific plugin localized within that plugin. Hard call.
spacing and sizing 🎉
Closing in favor of #645.
|
2025-04-01T04:35:39.139055
| 2023-10-23T23:24:34
|
1958192443
|
{
"authors": [
"mikowals",
"tairov"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11257",
"repo": "tairov/llama2.mojo",
"url": "https://github.com/tairov/llama2.mojo/issues/60"
}
|
gharchive/issue
|
Turn on discussions
It might be worth turning on discussions. It would be helpful to discuss performance improvements so there is a history of what people have tried and any benchmarks run.
Done!
|
2025-04-01T04:35:39.154415
| 2024-10-29T13:40:31
|
2621297149
|
{
"authors": [
"ArcaNO93",
"takahirom"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11258",
"repo": "takahirom/roborazzi",
"url": "https://github.com/takahirom/roborazzi/issues/522"
}
|
gharchive/issue
|
Different captured image format
Hello there
Does Roborazzi support capturing/comparing in other formats rather than PNG (I have a particular interest in WebP since it is lighter)?
Thanks
We might need to use the lossless format of WebP if we implement it. We should investigate whether it's compatible with platforms like Mac, Windows, and Linux.
For now, you can try using RGB565 for size reduction.
https://github.com/takahirom/roborazzi/blob/db6faacbcfbff7954cdd43741606bbc6e5bf53d3/include-build/roborazzi-core/src/commonJvmMain/kotlin/com/github/takahirom/roborazzi/RoborazziOptions.kt#L202
OK, thanks.
I guess for now it's possible just to convert everything from PNG to WebP before comparing, and after recording, via a custom Gradle task.
I am working on this.
Yeah, I'd say lossless would do.
I'm considering releasing a feature that allows users to save images in their preferred format. However, there are some challenges. Roborazzi compares a new image with a previously saved one, which means that if the saved image is compressed (e.g., in a format like WebP), image differences might occur, as I mentioned. You can suppress these differences by using high maxDistance values, along with hShift and vShift, to manage this. If you prefer a lossless format, you can provide your own image writer through IIORegistry, a feature in Java, although this approach might be slightly complex.
Another way to address this issue is by offering save BufferedImage and load BufferedImage listeners to users. However, this is a JVM-specific approach, which could be implemented as a platform-specific class within the platform source set like this.
For now, I’m leaning towards implementing the feature that allows users to save images in their preferred format, as I initially mentioned.
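As a rough, non-authoritative illustration of the IIORegistry route mentioned above (this is not Roborazzi's code; the custom SPI class named in the comments is hypothetical):
```java
import javax.imageio.ImageIO;
import javax.imageio.spi.IIORegistry;

public class CustomWriterRegistration {
    public static void main(String[] args) {
        // ImageIO discovers writers through the IIORegistry service registry.
        // A custom lossless-WebP writer could be made available roughly like this,
        // where MyLosslessWebpWriterSpi is a hypothetical ImageWriterSpi implementation:
        // IIORegistry.getDefaultInstance().registerServiceProvider(new MyLosslessWebpWriterSpi());

        // Once such a provider is registered, ImageIO can look it up by format name.
        System.out.println(ImageIO.getImageWritersByFormatName("webp").hasNext());
    }
}
```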
I apologize for frequently changing my opinion 😅. I will support lossless WebP, and you can customize both the image writer and loader as previously mentioned.
https://github.com/takahirom/roborazzi/pull/529
@ArcaNO93 If you have time, could you please review this PR?
https://github.com/takahirom/roborazzi/pull/529
oh, ofc
left one comment @takahirom
|
2025-04-01T04:35:39.235852
| 2015-06-11T01:37:05
|
87169193
|
{
"authors": [
"AlexKnauth",
"takikawa"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11259",
"repo": "takikawa/racket-clojure",
"url": "https://github.com/takikawa/racket-clojure/pull/12"
}
|
gharchive/pull-request
|
make [] read as a vector
and make sure [(+ 1 2)] evaluates equal to (vector-immutable 3), and
clean up no-longer-necessary handling of [ 'paren-shape and unquote in
#%app, quote, let, etc.
fixes https://github.com/takikawa/racket-clojure/issues/1
Thanks. I'm happy to merge this, but it says it can't auto-merge. I just added you as a collaborator so feel free to merge yourself.
Does this look good to you?
Actually I just realized that it is broken because it would read [[]] as a vector containing a list, same goes for any square brackets within vectors so I'll fix that first.
Okay, now does this look good to you?
|
2025-04-01T04:35:39.248481
| 2019-06-05T11:54:38
|
452460262
|
{
"authors": [
"mallik961",
"talalmajali"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11260",
"repo": "talalmajali/react-native-countdown-component",
"url": "https://github.com/talalmajali/react-native-countdown-component/issues/46"
}
|
gharchive/issue
|
react native countdown
The seconds and minutes boxes show only a single digit and not the second digit.
@mallik961 what's the device name? and are you using custom styling for the component?
@mallik961 check this issue please, https://github.com/talalmajali/react-native-countdown-component/issues/49
Hi Talalmajali/React-Native-Countdown-Component,
I need to use a countdown in my application. I am using lifecycle methods and complex coding for a simple use case: I need a timer counter, and I need to change its speed and reset it when needed, but the component doesn't do all these things.
|
2025-04-01T04:35:39.271161
| 2021-04-22T01:45:20
|
864462308
|
{
"authors": [
"keeganwitt",
"tambapps"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11261",
"repo": "tambapps/groovy-shell-user-manual",
"url": "https://github.com/tambapps/groovy-shell-user-manual/issues/1"
}
|
gharchive/issue
|
Should specify the license
It's unclear what the license of this source is. As a general practice, add a file called LICENSE with the license text to the repository root directory.
It's a best practice, yes. I think there are two areas you are covering here. First, what (if any) permissions are needed to quote the text. Second, if someone wants to use part of a code sample as part of something else, what are the rules around that?
Here are some examples of other documentation repositories that have a LICENSE (or COPYING) file. Some of them also have a CONTRIBUTING and/or README file to describe how people can contribute to the documentation and how they can view/test rendered text changes (if applicable).
https://github.com/dotnet/docs
https://github.com/dotnet/AspNetCore.Docs
https://github.com/raspberrypi/documentation
https://github.com/tensorflow/docs
https://github.com/cypress-io/cypress-documentation
https://github.com/kubernetes/website
https://github.com/purescript/documentation
https://github.com/DataDog/documentation
https://github.com/auth0/docs
https://github.com/nextcloud/documentation
https://github.com/opencollective/documentation
Oh, ok I see. I've just added a LICENSE.
Thanks for the explanation!
|
2025-04-01T04:35:39.288154
| 2017-05-03T19:10:05
|
226085321
|
{
"authors": [
"alejan2x",
"neliobnjr",
"tananaev"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11262",
"repo": "tananaev/traccar",
"url": "https://github.com/tananaev/traccar/issues/3145"
}
|
gharchive/issue
|
TK103 not working with Python
I have a TK103 GPS that I configured via SMS to work over GPRS. It actually works perfectly with different tracker service pages such as Orange, gpswebtracker, or gpstrackerxy. I also have my own server with a public static IP and an open port, and I have installed the software called "GPS Tracker" on it, which works perfectly.
The problem is that I am developing my own script in Python to communicate between my server and the GPS. I followed these steps:
1) Change the IP and port on my GPS so it can connect to my server.
2) Open the connection between the GPS and the server.
3) The GPS sends me: ##,imei:123456789012345,A;
4) Answer from my server: LOAD (text format)
Here the problem begins.
5) The GPS sends me: imei:123456789012345,tracker,,,L,,,97f,,77c7,,,;
6) Answer from my server: **,imei:123456789012345,B; (text format)
7) The GPS sends me: imei:123456789012345,tracker,,,L,,,97f,,77c7,,,;
I repeat the steps but with hex:
The GPS sends me: imei:123456789012345,tracker,,,L,,,97f,,77c7,,,;
Answer from my server: 2a2a2c696d65693a3132333435363738393031323334352c42 (hex format)
The GPS sends me: imei:123456789012345,tracker,,,L,,,97f,,77c7,,,;
Also with:
The GPS sends me: imei:123456789012345,tracker,,,L,,,97f,,77c7,,,;
Answer from my server: **,imei:123456789012345,42; (text and hex format)
The GPS sends me: imei:123456789012345,tracker,,,L,,,97f,,77c7,,,;
And the Traccar manual says: use the 12-digit device identifier instead of the IMEI. Usually it consists of the last 11 digits of the IMEI plus a leading zero. Why??
So I really have no idea why my script doesn't work or why it can't capture the coordinate data from my GPS. I'm really desperate.
Regards!
Here is my script in Python:
#!/usr/bin/python
__author__ = 'Alejandro hernandez'
import socket
import sys
import time
import re
import os
os.system('cls')
# Create a TCP/IP socket
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
# Connect the socket to the port where the server is listening
localhost = "<IP_ADDRESS>"
port = int("6000")
server_address = (localhost, port)
def echo_server():
    imei = []
    print >>sys.stderr, 'starting up on %s port %s' % server_address
    sock.bind(server_address)
    sock.listen(1)
    print >>sys.stderr, 'waiting for a connection'
    connection, client_address = sock.accept()
    print >>sys.stderr, 'client connected:', client_address
    data = connection.recv(1024)
    if data.find('##') > -1:
        imei.append(data)
    print >>sys.stderr, 'received %s' % data
    for line in imei:
        imeidevice = re.sub("\D", '', line)
        id = '0%s' % imeidevice[4:15]
    print >>sys.stderr, 'sending msg to client: LOAD'
    message = 'LOAD'
    connection.sendall(message)
    while True:
        ansdev = connection.recv(1024)
        print >>sys.stderr, 'received %s' % ansdev
        if len(ansdev) == 16:
            #message = '**,imei:%s,B;' %id
            message = 'ON'
            print >>sys.stderr, 'sending msg to client: %s' % message
            connection.sendall(message)
            ansdev = connection.recv(1024)
            print >>sys.stderr, 'received %s' % ansdev
    connection.close()
def main():
    echo_server()
if __name__ == '__main__':
    main()
Attached are images of the data traffic.
The problem is that your device doesn't seem to have a GPS fix. Are you testing it indoors?
No, the GPS is in the street and there is no building or anything else that interferes with the signal. But I don't understand: with the same GPS setup, the web services (gpstracker or Orange) work perfectly, and even my own server with the 'GPS Tracker' application gets good results, but when I try with Python I cannot. I have the same traffic on the TCP ports, both input and output; the images and case descriptions are attached.
All I can tell from your long post is that your device doesn't send any GPS data. The problem must be either lack of GPS signal, or some device configuration issue.
I checked the connection the technician made to the GPS on the motorcycle and, puff, it was installed badly: the GPS antenna was connected where the GSM one should be and vice versa. I changed the wires and now I receive the coordinates perfectly. Sorry for the inconvenience.
imei:0123456789012345,tracker,170504035029,,F,225027.000,A,2039.2726,N,10324.1718,W,13.45,73.95;
imei:0123456789012345,tracker,170504035032,,F,225030.000,A,2039.2807,N,10324.1637,W,13.93,30.57;
imei:0123456789012345,tracker,170504035032,,F,225030.000,A,2039.2807,N,10324.1637,W,13.93,30.57;
imei:0123456789012345,tracker,,,L,,,97f,,77c7,,,;
Hey Alejan, I'm trying to use your script but it is not working on Python 3.7.
Which version were you working with?
|
2025-04-01T04:35:39.295225
| 2017-11-08T11:08:35
|
272159548
|
{
"authors": [
"Abyss777",
"renaudallard",
"tananaev"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11263",
"repo": "tananaev/traccar",
"url": "https://github.com/tananaev/traccar/pull/3632"
}
|
gharchive/pull-request
|
Retry geocoding for trips/stops
In some cases it is not necessary to reverse geocode every position. It might be enough to find out the addresses of trip endpoints for reports and allow the user to check an address on demand from the web client.
Moved AddressFormat inside Geocoder
Implemented synchronous geocoding
Implemented retry geocoding for trips/stops reports
Implemented an API for reverse geocoding
If geocoder.enable is true, it will try for all positions; if report.retryGeocoding is true, it will retry finding the addresses of trips and stops. If both are false, the geocoder is disabled.
I have an initial implementation for web client
more comments in code...
I've experimented with locales in GeocoderTest
It fails with Locale.ENGLISH but passes with Locale.UK and Locale.US
My developer machine is:
$ cat /etc/lsb-release
DISTRIB_ID=neon
DISTRIB_RELEASE=16.04
DISTRIB_CODENAME=xenial
DISTRIB_DESCRIPTION="KDE neon User Edition 5.11"
$ java -version
openjdk version "1.8.0_151"
OpenJDK Runtime Environment (build 1.8.0_151-8u151-b12-0ubuntu<IP_ADDRESS>-b12)
OpenJDK 64-Bit Server VM (build 25.151-b12, mixed mode)
I think we should use Locale.US.
I've combined the getAddress logic into one function; if the callback is null we make a synchronous request.
Changed the switches logic as we discussed.
Merged, thanks.
Thanks.
I'm also thinking of implementing retry geocoding for every position related to events.
It would allow switching off per-position geocoding while still getting addresses in notifications.
What do you think?
Sounds good to me. What about the web part?
I'll send PR on Monday
From an end-user perspective, I think it would be better if it queried the address directly when the user focuses on the device (or a point in the route/trips) instead of putting a link requesting the address. That way you still save most of the requests, but you can have addresses in follow mode and you are not required to click anywhere to show the address. So the end user doesn't see any difference in the web interface, but you still save queries.
@renaudallard, please create a new thread instead of commenting on closed pull request.
|
2025-04-01T04:35:39.297206
| 2024-11-27T01:01:26
|
2696596190
|
{
"authors": [
"cbolinius",
"tanaya2026"
],
"license": "CC0-1.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11264",
"repo": "tanaya2026/GROUP_175_207_PROJECT",
"url": "https://github.com/tanaya2026/GROUP_175_207_PROJECT/pull/43"
}
|
gharchive/pull-request
|
Rectified Dependency Inversion Principle violation in User entity with dependency injection of the SlotifyServiceInterface. DataAccessObject now implements SlotifyServiceInterface and overrides the API call methods. Cleaned up DataAccessObject order and moved huge list of constants related to Slotify into SlotifyServiceInterface. Deleted several files we no longer need, including use case factories and DONOTUSEDAO. Deleted Availability class - it was never used since it was easier and more adherent to CA to use dependency injection to call the fetchAvailability method from the DAO.
I also added comments in several places to clarify and provide better technical documentation.
Tanaya - with the refactored User constructor taking the extra SlotifyServiceInteractor parameter, you likely have to modify any User instance creation in the Signup use case.
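A minimal sketch of the constructor-injection shape this PR describes (the method names and signatures below are assumptions for illustration, not the project's exact code):
```java
import java.util.List;

// Abstraction the User entity depends on (Dependency Inversion Principle).
// fetchAvailability is mentioned in the PR description; its signature here is assumed.
interface SlotifyServiceInterface {
    List<String> fetchAvailability(String userId);
}

// The DAO implements the interface and owns the actual Slotify API calls.
class DataAccessObject implements SlotifyServiceInterface {
    @Override
    public List<String> fetchAvailability(String userId) {
        // The real implementation would call the Slotify API here.
        return List.of();
    }
}

// User receives the service through its constructor instead of constructing it itself,
// so any User creation site (e.g. the Signup use case) must now pass the dependency in.
class User {
    private final String name;
    private final SlotifyServiceInterface slotifyService;

    User(String name, SlotifyServiceInterface slotifyService) {
        this.name = name;
        this.slotifyService = slotifyService;
    }

    List<String> availability() {
        return slotifyService.fetchAvailability(name);
    }
}
```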
Great work Cooper! Great job on implementing SOLID principles and CA architecture.
Will pull your work, and make changes in the SignUpUseCase, now that you have modified the User entity.
|
2025-04-01T04:35:39.357566
| 2012-01-25T11:27:55
|
2963708
|
{
"authors": [
"PavelVavruska",
"codeinthehole"
],
"license": "BSD-3-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11265",
"repo": "tangentlabs/django-oscar",
"url": "https://github.com/tangentlabs/django-oscar/issues/92"
}
|
gharchive/issue
|
Added an anonymous message sender
Please merge this version.
It is not tested.
modified: oscar/apps/checkout/views.py
modified: oscar/apps/customer/utils.py
modified: oscar/apps/order/abstract_models.py
Thank you,
Pavel
Ignoring this one as it has lots of changes that shouldn't be there.
|
2025-04-01T04:35:39.379734
| 2017-01-13T00:16:13
|
200516423
|
{
"authors": [
"Guillaume227",
"matteblair"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11266",
"repo": "tangrams/tangram-es",
"url": "https://github.com/tangrams/tangram-es/issues/1227"
}
|
gharchive/issue
|
Crash in rendering thread when adding multiple markers
I have been playing with code that adds a few dozen markers in a loop in one function call, and I am getting very frequent crashes in the GL thread, which is attempting to render partially set markers
(I am not talking about different threads concurrently adding markers).
I haven't dug very deep yet, but based on the Android example code (and the C++ interface code below) I haven't seen any mutex I should lock when creating markers.
Is that a known limitation?
I have seen this @matteblair comment in the PR in which markers were introduced back in September, which might allude to what's causing my problem:
I've kind of punted on synchronization for markers at present. If we care about thread safety here then there's a lot we'll need to guard - so let's make that a follow-up task.
Can you share the code that's producing these crashes (particularly the c++ interface)?
I understood what's going on in my code:
I am adding a few dozen markers, then wiping them out and adding a new batch.
It's the wiping out that's causing the trouble with the render thread: as that thread iterates over the std::unique_ptrs, some have been removed by my application thread and become null.
I see there is no mutex for markers, like there is for tiles, to protect access to markerManager.markers().
So it's crashing in places like this (tangram.cpp, around line 490)
```
for (const auto& marker : impl->markerManager.markers()) {
    if (marker)
        style->draw(impl->renderState, *marker);
}
```
And in places where we check the ease etc.
Ok I think I understand. In our Android interface, most application logic happens on the UI thread while rendering happens on the GL thread. To guard against concurrent access, most of our JNI methods are designated synchronized. For other platform targets, there's no need to incur those synchronization costs so we don't perform synchronization inside the core library. Tile sets, however, must be synchronized in the core library because we access them from worker threads.
For Android applications that don't go through the Java interface we provide, you would need to provide your own synchronization mechanism to prevent concurrent access from the UI thread and GL thread.
Does this answer your questions?
Makes sense, thanks! I am going to add synchronization on my end then.
I feel this should really be pushed down to the c++ API though as it's a cross platform concern.
Maybe it will get introduced as part of #1034 then.
|
2025-04-01T04:35:39.387734
| 2016-11-15T04:51:54
|
189299517
|
{
"authors": [
"bcamper",
"blair1618",
"hjanetzek"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11267",
"repo": "tangrams/tangram-es",
"url": "https://github.com/tangrams/tangram-es/pull/1089"
}
|
gharchive/pull-request
|
Prevent 'global' references from being resolved as URLs
Currently, if a global. reference is used in a scene node that represents a URL, that reference will be resolved as a relative URL and not replaced with the appropriate global value.
This change makes global. a reserved prefix that will prevent values from being treated like URLs. This means that global values will now behave as documented for values that expect URLs.
This also implies that you cannot use global.png or similar file names/paths. If you really need to call your file "global.png" then you can use the equivalent path ./global.png
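The described behavior, sketched in Java rather than the project's actual C++ (the names here are illustrative assumptions, not tangram-es code):
```java
class SceneValueResolver {
    // Values that expect URLs are normally resolved relative to the scene file,
    // but anything starting with the reserved "global." prefix is left untouched
    // so it can later be substituted with the corresponding global value.
    static String resolveUrlValue(String value, String sceneBaseUrl) {
        if (value.startsWith("global.")) {
            return value; // reserved prefix: keep as a global reference
        }
        return sceneBaseUrl + "/" + value; // simplified relative-URL resolution
    }
}
```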
What about scene import?
@bcamper Gooood question - I'm not sure how to handle that. I don't think we can allow global. references in import statements because imported paths are used before global.s can be replaced.
Oh hehe right, makes sense. Good policy to stick to then.
@hjanetzek Does this look alright to you?
@blair1618 can't global.png be a path unless global: {png: ... } is defined? This would be the same logic as for global textures.
@hjanetzek The trick is that we don't know the total set of values in global: {} until the merging is done, so we can't assume that a value won't be defined by a "parent" scene.
I see - so one would have to make a first pass to get all global blocks before merging imports (and resolving URLs).
I'm ok with merging this. We can consider some special cases or the import logic later.
Oh yeah that's possible - we can iterate over the DAG of imports and generate the total set of global: values before actually merging the scenes, then check potential URLs against that set before resolving them. Let's keep that in mind in case this becomes a problem in practice!
|
2025-04-01T04:35:39.417929
| 2024-02-26T15:54:52
|
2154516114
|
{
"authors": [
"Orlando-c",
"tanishapatil1234"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11268",
"repo": "tanishapatil1234/tri2",
"url": "https://github.com/tanishapatil1234/tri2/issues/7"
}
|
gharchive/issue
|
Incorporation into Project - 2015 FRQS
I focused on Question 1 part A here because it was especially relevant to the part of the project I tackled: collisions, mapping, and animations in Java.
FRQ 1)
Write a static method arraySum that calculates and returns the sum of the entries in a specified one-dimensional array. The following example shows an array arr1 and the value returned by a call to arraySum.
In our game's backend, we have created an array, collisions, detailing the presence of collisions on the game map. Summing these collisions and then dividing by 1025 gives us the number of collision areas at one time. To do this, we have to write a method arraySum that returns the sum of the entries of a one-dimensional array, arr.
Integration into the project is seen in this class:
In the repo itself, but in JavaScript (because the collision code has to be converted for a web game, not an app).
FRQ 2)
FRQ 2 focused on classes and instance variable initialization. For this I chose to focus on the NPC/PLAYER classes that I created in our project where I initialized variables similar to the ones prompted in the frq.
Class Definition:
The class Npc extends Entity and implements MouseListener.
This means that Npc inherits from Entity and also defines behavior for handling mouse events through MouseListener.
Instance Variables:
npcImage: An Image object representing the image of the NPC.
speechText: A String representing the text that the NPC will speak.
displaySpeech: A Boolean variable indicating whether the NPC's speech should be displayed.
gamePanel: A reference to a GamePanel object. This is likely the panel where the NPC will be displayed and interacted with.
The constructor initializes the state of an Npc object.
Parameters:
x and y: Initial coordinates of the NPC.
speechText: The text that the NPC will speak.
gamePanel: A reference to the GamePanel where the NPC will be displayed.
We have setters which set the initial values for x, y, speechText, and displaySpeech.
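A condensed sketch of the Npc class as described above; it assumes the project's Entity base class and GamePanel class (not shown), and the method bodies are illustrative guesses rather than the repo's actual code:
```java
import java.awt.Image;
import java.awt.event.MouseEvent;
import java.awt.event.MouseListener;

public class Npc extends Entity implements MouseListener {
    private Image npcImage;        // image drawn for this NPC
    private String speechText;     // what the NPC says
    private boolean displaySpeech; // whether the speech bubble is shown
    private GamePanel gamePanel;   // panel the NPC is displayed in (project class)

    public Npc(int x, int y, String speechText, GamePanel gamePanel) {
        this.x = x;                // x and y are inherited from Entity
        this.y = y;
        this.speechText = speechText;
        this.gamePanel = gamePanel;
        this.displaySpeech = false;
    }

    @Override
    public void mouseClicked(MouseEvent e) {
        displaySpeech = !displaySpeech; // toggle dialogue on click (illustrative)
    }

    // Remaining MouseListener methods left empty for brevity.
    @Override public void mousePressed(MouseEvent e) { }
    @Override public void mouseReleased(MouseEvent e) { }
    @Override public void mouseEntered(MouseEvent e) { }
    @Override public void mouseExited(MouseEvent e) { }
}
```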
FRQ 3)
This FRQ focused on manipulating and accessing 2D arrays
In this code example from my part of the project, I used 2D arrays to detail attack types. From there I wrote functions, for testing purposes, that iterated through the array to print each attack type and its corresponding information, and I wrote a changeAttackType function. When I was doing this FRQ I was able to come up with the solution because of the work I did in the gameWindow files regarding changing attack types.
FRQ 4
This FRQ was about interfaces, and implementing it in separate classes. This had definite applications in our project. When I was creating the individual characters in the game, I started with a baseline class called Entity.java.
As you can see, Entity has the basic instance variables common to all characters: location on the map (x, y) and speed.
Then I went on to create the main Player class by extending Entity. I added some unique variables such as the current sprite frame. This is because the player is meant to move along the screen and use the key handler methods I wrote, so I needed to keep track of the sprite movements to do smooth animations.
Finally, the Npc class, another extension of Entity, has its own variable, displaySpeech. This is because NPCs have dialogue in our game, whereas no other characters do.
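A minimal sketch of that hierarchy; the fields come from the description above, and anything beyond them is an assumption rather than the repo's actual code:
```java
// Base class: state shared by every character on the map.
public class Entity {
    protected int x;     // map position
    protected int y;
    protected int speed; // movement speed
}

// Player adds animation state on top of the shared Entity fields.
class Player extends Entity {
    private int spriteFrame; // current sprite frame, advanced as the player moves
}

// Npc adds dialogue state; other characters have none.
class Npc extends Entity {
    private String speechText;
    private boolean displaySpeech;
}
```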
Reflection
FRQ 1:
Implementing the arraySum method was relatively straightforward. It involves iterating through the one-dimensional array and summing up all its elements. Writing the rowSums method was simple as it was just iterating through each row of the two-dimensional array and calculating the sum of each row. I forgot to save them in a 1d array at the end though, so next time I will read the instructions more clearly. I had trouble with c because I wrote the for loop condition itself incorrectly (syntax wise). I will review manipulation of 2d arrays for the future.
FRQ 2:
This one was my personal favorite. It took a while for me to fully understand what the problem meant, but the visual was helpful.
First, I created the logic for generating hints based on the comparison between the hidden word and the guess. I knew a for loop was needed to iterate through each letter. I immediately identified that 3 conditionals were required within the for loop (if, else if, else) to address the three conditions of : letter present, letter present in wrong place, and letter not present at all.
I designed the loop, allowing players to make guesses and receive hints until they either correctly guess the hidden word or reach a maximum number of attempts.
Overall, this frq was really fun to do. It was just challenging to understand at first. Sort of like wordle!
FRQ 3:
Writing the SparseArrayEntry class was straightforward. Implementing the constructor and instance variables was also relatively simple. But I also had to ensure that the object cannot be modified after construction, which I didn't see the first time I did this FRQ.
The removeColumn part took the longest time for me to create. This is because I forgot the .size() method, so I was trying to figure out a way to limit the loop without it. Eventually I remembered .size(), but it was a reminder to me to review some of the built-ins.
I was a little confused on the second bullet of the last part, the else if. Are we replacing the new column with the contents of the one before it? Are we deleting the one before it, so the new column just replaces the old one? Or will duplicates exist?
Overall, working on this question provided a good opportunity to apply data structure concepts to a practical problem.
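Since removeColumn is the part discussed above, here is a hedged sketch of one way it can be written, assuming the FRQ's immutable SparseArrayEntry with getRow()/getCol()/getValue(); the exact signatures are reconstructed from memory, so treat the details as assumptions:
```java
import java.util.ArrayList;
import java.util.List;

class SparseArrayEntry {
    private final int row, col, value;
    SparseArrayEntry(int row, int col, int value) {
        this.row = row;
        this.col = col;
        this.value = value;
    }
    int getRow()   { return row; }
    int getCol()   { return col; }
    int getValue() { return value; }
}

class SparseArray {
    private List<SparseArrayEntry> entries = new ArrayList<>();
    private int numCols;

    public void removeColumn(int col) {
        List<SparseArrayEntry> kept = new ArrayList<>();
        for (int i = 0; i < entries.size(); i++) {      // .size() bounds the loop
            SparseArrayEntry e = entries.get(i);
            if (e.getCol() == col) {
                continue;                               // entries in the removed column are dropped
            } else if (e.getCol() > col) {
                // Entries to the right shift left by one column; a new entry is built
                // because SparseArrayEntry is immutable, so no duplicates are left behind.
                kept.add(new SparseArrayEntry(e.getRow(), e.getCol() - 1, e.getValue()));
            } else {
                kept.add(e);                            // columns before the removed one are unchanged
            }
        }
        entries = kept;
        numCols--;
    }
}
```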
FRQ 4:
Creating the NumberGroup interface was easy enough. Once I wrapped my head around what the question was actually asking, designing the contains method afterwards also felt relatively straightforward. The interface is just the groundwork.
Implementing the Range class was a bit more challenging. When ensuring that the range includes all integers within the specified bounds, I got the constructor and instance variables sorted out. Additionally I wrote the contains method here but I do not know if I should have.
Writing the contains method for MultipleGroups was easy as well. This was just the implementation of an enhanced for loop and the method I wrote previously. Additionally, this line (within the for loop) is important to understand:
for (NumberGroup group : groupList)
Overall, this FRQ went pretty well. This is because I had experience with interfaces, and class extension with my character classes.
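A compact sketch of the pieces this FRQ describes: the NumberGroup interface, a Range over inclusive bounds, and MultipleGroups delegating through the enhanced for loop quoted above (the exact constructor shapes are assumptions):
```java
import java.util.List;

interface NumberGroup {
    boolean contains(int num);
}

// A contiguous range of integers, inclusive of both bounds.
class Range implements NumberGroup {
    private final int min, max;

    Range(int min, int max) {
        this.min = min;
        this.max = max;
    }

    @Override
    public boolean contains(int num) {
        return num >= min && num <= max;
    }
}

// A group made up of several groups; contains() checks each one in turn.
class MultipleGroups implements NumberGroup {
    private final List<NumberGroup> groupList;

    MultipleGroups(List<NumberGroup> groupList) {
        this.groupList = groupList;
    }

    @Override
    public boolean contains(int num) {
        for (NumberGroup group : groupList) {
            if (group.contains(num)) {
                return true;
            }
        }
        return false;
    }
}
```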
FRQ
FRQ 1
General: FRQ 1 is well done, and examples demonstrate a good understanding of java arrays/arraylists and 2D arrays
(a): Code correctly creates the arraySum method and shows knowledge of for loops, which can be seen here:
public static int arraySum(int[] arr) {
    int sum = 0;
    for (int i : arr) {
        sum += i;
    }
    return sum;
}
(b): Represents a good understanding of using the prior written methods in the part (a) and a good understanding of the dynamics of for loops and indexing arrays and their interactions with each other
public static int[] rowSums(int[][] arr2D) {
    int[] sums = new int[arr2D.length]; // One entry for each row
    for (int i = 0; i < arr2D.length; i++) {
        sums[i] = arraySum(arr2D[i]); // Utilizing the arraySum method
    }
    return sums;
}
(c): Great showcase of nested loops with being able to compare with the next elements in the list and properly return a value for whether or not the array was diverse or not
public static boolean isDiverse(int[][] arr2D) {
    int[] rowSums = rowSums(arr2D);
    for (int i = 0; i < rowSums.length; i++) {
        for (int j = i + 1; j < rowSums.length; j++) {
            if (rowSums[i] == rowSums[j]) {
                return false;
            }
        }
    }
    return true;
}
Reflection: In Tanisha's reflection, she provides a good example from her project: a collisions array that determines the number of collision areas. This demonstrates a proficient understanding of 2D arrays and ArrayLists, as well as the connection between the 2D FRQ and its importance in Java problem-based and personal learning.
FRQ Grade: .9/.9
Reflection Grade: .9/.9
FRQ 2
General: FRQ 2 is very well done and displays the process in which the user checks the word to see if it's correct. It shows a great understanding of classes and for loops
Question: The FRQ is asking to make a guessing game similar to wordle or hangman where the goal is to guess the correct word. The output should display your guess and the correct or incorrect letters in the word you guessed until you guess the word.
public String getHint(String guess) {
    // Initialize String for hint
    String hint = "";
    for (int i = 0; i < guess.length(); i++) {
        if (guess.substring(i, i+1).equals(hiddenWord.substring(i, i+1))) {
            hint += guess.substring(i, i+1);
        } else if (hiddenWord.indexOf(guess.substring(i, i+1)) != -1) {
            hint += "+";
        } else {
            hint += "*";
        }
    }
    return hint;
}
Reflection: I like how Tanisha provides a great example as to how their project correlates to this FRQ and
FRQ Grade: .9/.9
Reflection Grade: .9/.9
FRQ 3
General
Reflection
FRQ Grade: TBD/.9
Reflection Grade: TBD/.9
FRQ 4
General
Reflection
FRQ Grade: TBD/.9
Reflection Grade: TBD/.9
Total: TBD/3.6
|
2025-04-01T04:35:39.431125
| 2016-05-21T20:25:33
|
156119972
|
{
"authors": [
"femtotrader",
"tanmaykm"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11269",
"repo": "tanmaykm/JuliaTS.jl",
"url": "https://github.com/tanmaykm/JuliaTS.jl/issues/4"
}
|
gharchive/issue
|
TArray constructor from DataFrame (DataFrames.jl) and from TimeArray (TimeSeries.jl)
Hello,
it would be nice if you could tell me how to construct a TArray from a DataFrame (DataFrames.jl) and from a TimeArray (TimeSeries.jl), because I'm considering adding JuliaTS.jl support to https://github.com/femtotrader/TALib.jl/issues/6
Here is some code to get a sample DataFrame
Download sample data https://github.com/femtotrader/TALib.jl/blob/master/test/ford_2012.csv
using DataFrames
filename = "test/ford_2012.csv"
dfOHLCV = readtable(filename)
dfOHLCV[:Date] = Date(dfOHLCV[:Date])
and for a sample TimeArray
using TimeSeries
taOHLCV = readtimearray(filename)
Maybe such constructors could be added to JuliaTS.jl (without adding these packages as dependencies)?
Kind regards
Constructing TArray from DataFrame or TimeArray would be something like:
using DataFrames
using JuliaTS
using TimeSeries
# read as dataframe
dfOHLCV = readtable("ford_2012.csv");
dfOHLCV[:Date] = Date(dfOHLCV[:Date]);
# read as timeseries
tsOHLCV = readtimearray("ford_2012.csv");
# dataframe to TArray
ta = TArray((:Date,), [n=>dfOHLCV[n] for n in names(dfOHLCV)]...)
# timeseries to TArray
ta = TArray((:Date,), :Date=>tsOHLCV.timestamp, [symbol(n)=>tsOHLCV[n].values for n in colnames(tsOHLCV)]...)
Maybe Requires.jl will help add such conversion functions without explicit package dependencies.
@femtotrader, what do you think of an alternate interface for timeseries as in this notebook here: https://github.com/tanmaykm/notebooks/blob/master/stocks/demo2.ipynb ?
It is somewhat similar to python xarray. The backing array can be made to support NDSparseData. Is this a more convenient way for exploring data?
The implementation is in my fork here: https://github.com/tanmaykm/AxisArrays.jl/tree/tan
Thanks for the Requires.jl package suggestion. I didn't know about it.
I don't feel comfortable enough with JuliaTS / AxisArrays yet, so I can't help with API usage for now, but I will when I have a better understanding of it.
Python xarray (formerly xray) is a very interesting package, and having a Julia alternative would be a great feature.
A 3D (Panel-like) data structure is a great feature to have. I will use it in https://github.com/femtotrader/DataReaders.jl (to store, for example, OHLCV values for several stocks). https://github.com/femtotrader/TALib.jl might also be able to support this kind of structure and apply the same indicator to several stocks at once.
Maybe a function to read CSV (and XLS, XLSX) files should be added?
Because for now I don't see any other method than first reading into a DataFrame (or a TimeArray) and converting to a TArray.
julia> ta
TArray 250x6 Tuple{Date} => Tuple{Float64,Float64,Int64,Float64,Float64}
(:Date,) => (:Close,:High,:Volume,:Low,:Open)
(2012-01-03,) => (11.13,11.25,45709900,10.99,11.0)
(2012-01-04,) => (11.3,11.53,79725200,11.07,11.15)
(2012-01-05,) => (11.59,11.63,67877500,11.24,11.33)
(2012-01-06,) => (11.71,11.8,59840700,11.52,11.74)
(2012-01-09,) => (11.8,11.95,53981500,11.7,11.83)
(2012-01-10,) => (11.8,12.05,121750600,11.63,12.0)
(2012-01-11,) => (12.07,12.18,63806000,11.65,11.74)
(2012-01-12,) => (12.14,12.18,48687700,11.89,12.16)
(2012-01-13,) => (12.04,12.08,46366700,11.84,12.01)
(2012-01-17,) => (12.02,12.26,44398400,11.96,12.2)
⋮
(2012-12-17,) => (11.39,11.41,46983300,11.14,11.16)
(2012-12-18,) => (11.67,11.68,61810400,11.4,11.48)
(2012-12-19,) => (11.73,11.85,54884700,11.62,11.79)
(2012-12-20,) => (11.77,11.8,47750100,11.58,11.74)
(2012-12-21,) => (11.86,11.86,94489300,11.47,11.55)
(2012-12-24,) => (12.4,12.4,91734900,11.67,11.67)
(2012-12-26,) => (12.79,12.79,140331900,12.31,12.31)
(2012-12-27,) => (12.76,12.81,108315100,12.36,12.79)
(2012-12-28,) => (12.87,12.88,95668600,12.52,12.55)
(2012-12-31,) => (12.95,13.08,106908900,12.76,12.88)
julia> ta = TArray((:Date,), :Date=>tsOHLCV.timestamp, [symbol(n)=>tsOHLCV[n].values for n in colnames(tsOHLCV)]...)
TArray 250x6 Tuple{Date} => Tuple{Float64,Float64,Float64,Float64,Float64}
(:Date,) => (:Close,:High,:Volume,:Low,:Open)
(2012-01-03,) => (11.13,11.25,4.57099e7,10.99,11.0)
(2012-01-04,) => (11.3,11.53,7.97252e7,11.07,11.15)
(2012-01-05,) => (11.59,11.63,6.78775e7,11.24,11.33)
(2012-01-06,) => (11.71,11.8,5.98407e7,11.52,11.74)
(2012-01-09,) => (11.8,11.95,5.39815e7,11.7,11.83)
(2012-01-10,) => (11.8,12.05,1.217506e8,11.63,12.0)
(2012-01-11,) => (12.07,12.18,6.3806e7,11.65,11.74)
(2012-01-12,) => (12.14,12.18,4.86877e7,11.89,12.16)
(2012-01-13,) => (12.04,12.08,4.63667e7,11.84,12.01)
(2012-01-17,) => (12.02,12.26,4.43984e7,11.96,12.2)
⋮
(2012-12-17,) => (11.39,11.41,4.69833e7,11.14,11.16)
(2012-12-18,) => (11.67,11.68,6.18104e7,11.4,11.48)
(2012-12-19,) => (11.73,11.85,5.48847e7,11.62,11.79)
(2012-12-20,) => (11.77,11.8,4.77501e7,11.58,11.74)
(2012-12-21,) => (11.86,11.86,9.44893e7,11.47,11.55)
(2012-12-24,) => (12.4,12.4,9.17349e7,11.67,11.67)
(2012-12-26,) => (12.79,12.79,1.403319e8,12.31,12.31)
(2012-12-27,) => (12.76,12.81,1.083151e8,12.36,12.79)
(2012-12-28,) => (12.87,12.88,9.56686e7,12.52,12.55)
(2012-12-31,) => (12.95,13.08,1.069089e8,12.76,12.88)
Column order is not preserved. Maybe an OrderedDict might be used.
See a similar issue here: https://github.com/JuliaStats/DataFrames.jl/issues/950
I also noticed that the Volume type (Int64) is not preserved. Volume seems to be converted to Float64 when converting a TimeArray (from TimeSeries.jl) to a TArray.
TimeArray stores all columns in the same array, so it promoted the Int64 volume column to Float64. DataFrame can handle differently typed columns, though.
I think column order is not preserved because of the use of setdiff here: https://github.com/tanmaykm/JuliaTS.jl/blob/3595d41404c984209bfb0d11dad154f09a5a1e3a/src/ts.jl#L29. Will push a fix. Thanks for pointing it out.
Thanks for your help
|
2025-04-01T04:35:39.449185
| 2020-12-16T16:51:36
|
769092921
|
{
"authors": [
"cherniavskii",
"tannerlinsley"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11270",
"repo": "tannerlinsley/react-query",
"url": "https://github.com/tannerlinsley/react-query/issues/1456"
}
|
gharchive/issue
|
Tests are not running on PR
Hey there 👋
I've noticed that tests are not running on the latest PRs. Is this intentional?
Here's an example PR: https://github.com/tannerlinsley/react-query/pull/1449
And error from GitHub Actions:
They are now. We had a small typo for a bit.
|
2025-04-01T04:35:39.454156
| 2021-02-27T19:15:41
|
817997775
|
{
"authors": [
"TkDodo",
"acSpock",
"dominictwlee"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11271",
"repo": "tannerlinsley/react-query",
"url": "https://github.com/tannerlinsley/react-query/issues/1872"
}
|
gharchive/issue
|
Query data not fetched initially when refetchOnMount is false in 3.8.3
Describe the bug
This seems to still be an issue: https://github.com/tannerlinsley/react-query/issues/896
As the title suggests, when refetchOnMount is false, data is not initially fetched.
To Reproduce
Open this codesandbox: https://codesandbox.io/s/reverent-cohen-97pgh?file=/src/App.js
You can see that the username is not showing up. The service call is not fired.
If you now refocus the window it fetches the data as expected.
Expected behavior
The query should fetch data on its first execution.
Browser
OS: macOS Catalina 10.15.7
Browser: Chrome Version 88.0.4324.192 (Official Build) (x86_64)
From the looks of it, you're still using v2.7.0 in your codesandbox example. Maybe try updating to the latest v2 (if you don't want a breaking change), or better yet, bump up to v3?
Upgrading the codesandbox to 2.26.4, which is the latest 2.x version, solves the issue:
https://codesandbox.io/s/friendly-kapitsa-k6kd8?file=/src/App.js
|
2025-04-01T04:35:39.458128
| 2020-12-08T20:07:01
|
759742186
|
{
"authors": [
"aaronjensen",
"tannerlinsley"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11272",
"repo": "tannerlinsley/react-query",
"url": "https://github.com/tannerlinsley/react-query/pull/1375"
}
|
gharchive/pull-request
|
feat: reset query utils
[x] Documentation
[x] Tests
@tannerlinsley thank you for your work on this. I'll see if I can help out a bit.
@tannerlinsley I added a few tests and some documentation. Let me know if you'd like me to make any tweaks. I'm also testing it out here:
https://codesandbox.io/s/react-query-3-refetch-on-error-ux360?file=/src/App.js
One thing that seems odd about it is that it does not automatically trigger a refetch. This is especially odd when using suspense because you expect the data to be present unless you set enabled: false.
Should it refetch automatically if enabled is not false?
:tada: This PR is included in version 3.2.0-beta.37 :tada:
The release is available on:
npm package (@beta dist-tag)
GitHub release
Your semantic-release bot :package::rocket:
|
2025-04-01T04:35:39.482992
| 2021-11-19T02:18:56
|
1058055916
|
{
"authors": [
"isaacs",
"lukekarrys"
],
"license": "isc",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11273",
"repo": "tapjs/node-tap",
"url": "https://github.com/tapjs/node-tap/issues/790"
}
|
gharchive/issue
|
[feat proposal] t.mockAll
Problem
t.mock replaces all requires with the passed in mock object. This creates a problem when two files both require the same module, and want to assert the results of that module.
a.js
const log = require('proc-log')
module.exports = (fn) => {
  log.notice('a', 1)
  log.notice('a', 2)
  fn()
}
b.js
const log = require('proc-log')
module.exports = () => {
  log.notice('b', 1)
  log.notice('b', 2)
}
test/index.js
t.test('without mock all', async t => {
  const notices = []
  const a = t.mock('../a.js', {
    'proc-log': {
      notice: (...args) => notices.push(args)
    }
  })
  // whoops i forgot to mock `proc-log` for `b` also
  a(require('../b.js'))
  // this will fail
  t.strictSame(notices, [
    ['a', 1],
    ['a', 2],
    ['b', 1],
    ['b', 2],
  ])
})
Proposal
This is solvable in a lot of ways without needing anything from tap, but I've found that in non-trivial application I'm doing a lot of creating a mock object and passing it around between tests and files in order to keep the same object reference to assert.
I'm proposing a t.mockAll function that would take the same key/value object of paths/value that t.mock does. When called t.mockAll would mock all requires to those paths for the lifetime of t.
t.skip('with mock all', async t => {
  const notices = []
  t.mockAll({
    'proc-log': {
      notice: (...args) => notices.push(args)
    }
  })
  const a = t.mock('../a.js')
  a(t.mock('../b.js'))
  t.strictSame(notices, [
    ['a', 1],
    ['a', 2],
    ['b', 1],
    ['b', 2],
  ])
})
Questions
Should t.mock win against conflicts with t.mockAll? Or throw an error similar to if t.mock cant find a path?
t.skip('conflicting mocks', async t => {
  const notices = []
  t.mockAll({
    'proc-log': {
      notice: (...args) => notices.push(args)
    }
  })
  // Would this overwrite mockAll?
  const a = t.mock('../a.js', {
    'proc-log': {
      notices: () => {}
    }
  })
  a(t.mock('../b.js'))
  t.strictSame(notices, [
    ['a', 1],
    ['a', 2],
    ['b', 1],
    ['b', 2],
  ])
})
In my proposal example above, I'm using t.mock for the require of ../b.js. Would it be a bad idea if it worked the same when you used require instead? It wouldn't work if you did the require at the top of the file and then called t.mockAll later in the file, so that feels a little unexpected.
t.skip('just requires', async t => {
  const notices = []
  t.mockAll({
    'proc-log': {
      notice: (...args) => notices.push(args)
    }
  })
  // would this work also?
  const a = require('../a.js')
  a(require('../b.js'))
  t.strictSame(notices, [
    ['a', 1],
    ['a', 2],
    ['b', 1],
    ['b', 2],
  ])
})
So, the behavior would be something like:
// foo module
console.log([require('a'), require('b')])
t.mockAll({ a: '1' })
t.mock('foo', { b: '2' }) // logs [ '1', '2' ]
I think t.mock() explicit mocks should always override the object set in mockAll, since that's more local to the module loading.
For subsequent calls to mockAll should they merge in or completely replace?
t.mockAll({ a: '1' })
t.mockAll({ b: '2' })
t.mock('foo') // logs ['1', '2']? or [<actual a value>, '2']?
In my proposal example above, I'm using t.mock for the require of ../b.js. Would it be a bad idea if it worked the same when you used require instead? It wouldn't work if you did the require at the top of the file and then called t.mockAll later in the file, so that feels a little unexpected.
It's also pretty much impossible to do with import, since you need to be able to get at the loader in order to play those sorts of games. And it's possible to do with require, but... ew, gross.
It's not in tap 18 yet, but I think it's a good idea, and won't be too hard to do. Keeping the issue open for now.
|
2025-04-01T04:35:39.489055
| 2019-04-17T21:50:54
|
434501697
|
{
"authors": [
"ASverdlov",
"vpotseluyko"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11274",
"repo": "tarantool/metrics",
"url": "https://github.com/tarantool/metrics/pull/4"
}
|
gharchive/pull-request
|
add collectors from stats
add collectors from stats
add api for enabling default metrics collection
Merged. Thanks for contribution!
|
2025-04-01T04:35:39.520679
| 2024-06-06T02:40:14
|
2337196128
|
{
"authors": [
"menghif",
"tarasglek"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11275",
"repo": "tarasglek/chatcraft.org",
"url": "https://github.com/tarasglek/chatcraft.org/pull/648"
}
|
gharchive/pull-request
|
Fix for mobile Ask menu and z-index fixes
After the change in https://github.com/tarasglek/chatcraft.org/pull/642, on iOS the keyboard covers the search results in the Ask Menu.
I fixed this by moving the search bar to the bottom. Bonus: it's now easier to reach!
I left the Desktop version as is, but let me know if I should change it to match the mobile version.
I also fixed these two visual bugs:
This fixes the issue for me. thanks for the quick turnaround!
Is this good to be merged?
|
2025-04-01T04:35:39.524669
| 2024-10-24T12:00:13
|
2611355421
|
{
"authors": [
"fasilmarshooq",
"tareqimbasher"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11276",
"repo": "tareqimbasher/NetPad",
"url": "https://github.com/tareqimbasher/NetPad/issues/280"
}
|
gharchive/issue
|
result window is not sticky to the query window session during continuous execution.
When executing two different things it would be great if the result window stayed sticky, just like we had in LinqPad.
Refer to this video ->
https://github.com/user-attachments/assets/2f76403b-d15b-4f77-8850-0b39ba42058e
It worked fine when the execution is not continuous, for example just printing something out. But it is not working when one of them is a continuous execution. For example, in my case, one window is a Kafka consumer and the other is the producer.
Hmm this might be a regression actually. I'll get this fixed.
Fix merged and will go out with next update.
|
2025-04-01T04:35:39.531601
| 2020-04-15T01:47:35
|
599961457
|
{
"authors": [
"Profpatsch",
"zeta-00"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11277",
"repo": "target/lorri",
"url": "https://github.com/target/lorri/issues/377"
}
|
gharchive/issue
|
getting lorri to work with ghcide in emacs
Describe the bug
hello there, i set up ghcide inside of a shell.nix, and use it with lorri in emacs. anyways, ghcide works fine, the only issue that i'm having is:
ghcide is not recognizing the haskell pkgs/modules that i have installed. for example, when i try to import a module, i get a ghcide error saying that the module is not recognized. when i go into a regular nix-shell, nix builds the hoogle/haddock database, so maybe this issue has to do with lorri?
here's the 2 shell.nix files that i'm trying to use:
https://dpaste.org/2ARz
https://dpaste.org/uM64
To Reproduce
Steps to reproduce the behavior:
...
...
...
Expected behavior
Metadata
$ lorri info
<please paste output here>
$ uname -a
<please paste output here>
Additional context
Same as on the other issue:
Could you please fill out the issue template and use an inline code-block for your shell.nix? The dpaste link is already deleted.
ok, i found a workaround to get this shell.nix working. after finally fixing the nix errors, when i run this in a nix-shell, it automatically opens up emacs, and when i cd into my project that has been initialized by lorri, ghcide is able to recognize the haskell pkgs/modules that i have installed. go ahead and close this issue, pasted below is the shell.nix file:
let
pkgs = import <nixpkgs> {};
in
with pkgs;
mkShell
{
buildInputs = with pkgs;
[
hello
figlet
(haskell.packages.ghc865.ghcWithHoogle (hpkgs: with hpkgs;
[
control
text
yesod
brick
]))
# ghcide-nix installation:
(import (builtins.fetchTarball "https://github.com/cachix/ghcide-nix/tarball/master") {}).ghcide-ghc865
]; # end of buildInputs
shellHook =
''
export PS1='\n\[\033[1;32m\][\[\e]0;nix-shell: \W\a\]nix-shell:/\W]\$ \[\033[0m\]'
export EDITOR="$(emacs)"
export HIE_HOOGLE_DATABASE="$(cat $(which hoogle) | sed -n -e 's|.*--database \\(.*\\.hoo\\).*|\\1|p')";
export NIX_GHC="$(which ghc)";
export NIX_GHCPKG="$(which ghc-pkg)";
export NIX_GHC_DOCDIR="$NIX_GHC/../../share/doc/ghc/html";
export NIX_GHC_LIBDIR="$(ghc --print-libdir)";
'';
} # end of mkShell
@Profpatsch ^
@Profpatsch , there's probably a better approach to doing this, but i'm glad it's at least working
|
2025-04-01T04:35:39.537715
| 2019-09-13T20:42:06
|
493508789
|
{
"authors": [
"rmapap",
"tylerwmarrs"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11278",
"repo": "target/matrixprofile-ts",
"url": "https://github.com/target/matrixprofile-ts/issues/80"
}
|
gharchive/issue
|
Return actual distances from MASS instead of squared distances
In mass() and massStomp() in utils.py, the quotient in the calculation of the squared distance can go slightly above 1, leading to a negative difference (see line 177 and line 200).
Is there any objection to just wrapping the calculation of the squared distance in np.clip(., 0.0, None) and then taking the square root of that?
This issue is addressed in distanceProfile.py by allowing complex values: line 66, line 118, line 126. scrimp.py seems to implement its own version of MASS, and takes the absolute value before taking the square root (see #63): line 71, line 162, line 206, line 257.
SCRIMP++ aside, it seems like it might be cleaner to just clip negative values to 0 directly in mass() and massStomp().
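For illustration, a minimal sketch of the clipping approach (the helper name is made up; dist_sq stands for the squared distance profile computed inside mass()):
import numpy as np
def finalize_distance_profile(dist_sq):
    # Clip tiny negative values (e.g. -1e-12 from floating-point error) to zero
    # so the square root stays real, then return the actual distances.
    return np.sqrt(np.clip(dist_sq, 0.0, None))
np.sqrt(np.abs(dist_sq)) would behave essentially the same for near-zero negatives, which is the absolute-value approach mentioned for SCRIMP++.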
@rmapap In regards to SCRIMP++ using absolute value - the Matlab implementation (from the UCR group) also takes this approach in resolving the issue you mention. Please refer to the code on their SCRIMP++ resource page: https://sites.google.com/site/scrimpplusplus/
@tylerwmarrs Thanks for the quick reply. I think that when this issue occurs, the values are essentially zero (e.g., -1e-12), so whether you set the value to 0 or make it positive probably doesn't really matter (except that np.abs() is probably faster). My main concern was just with having to wrap every call to MASS in np.sqrt()
|
2025-04-01T04:35:39.587462
| 2021-11-14T00:04:18
|
1052806290
|
{
"authors": [
"codecov-commenter",
"fty4"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11279",
"repo": "taskmedia/nuCal",
"url": "https://github.com/taskmedia/nuCal/pull/2"
}
|
gharchive/pull-request
|
add basic gesamtspielplan endpoint
adding endpoint to receive gesamtspielplan JSON object
Codecov Report
Merging #2 (6a604b2) into main (9e49999) will decrease coverage by 8.33%.
The diff coverage is 88.23%.
@@ Coverage Diff @@
## main #2 +/- ##
===========================================
- Coverage 100.00% 91.66% -8.34%
===========================================
Files 1 2 +1
Lines 7 24 +17
===========================================
+ Hits 7 22 +15
- Misses 0 2 +2
Impacted Files | Coverage Δ
pkg/http/rest/gesamtspielplan.go | 87.50% <87.50%> (ø)
pkg/http/rest/rest.go | 100.00% <100.00%> (ø)
Continue to review full report at Codecov.
Legend - Click here to learn more
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 9e49999...6a604b2. Read the comment docs.
|
2025-04-01T04:35:39.605401
| 2023-07-17T14:33:40
|
1807925917
|
{
"authors": [
"ManifoldFR",
"coveralls",
"stephane-caron"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11280",
"repo": "tasts-robots/upkie",
"url": "https://github.com/tasts-robots/upkie/pull/86"
}
|
gharchive/pull-request
|
Fix angular velocity in UpkieWheelsEnv
The angular velocity is properly documented but misleadingly in the IMU frame, while the pitch is that of the base frame.
This PR updates the observation vector so that the angular velocity is in the base frame.
@ManifoldFR FYI. I will get back to you once the environment has been battle-tested.
Pull Request Test Coverage Report for Build<PHONE_NUMBER>
0 of 0 changed or added relevant lines in 0 files are covered.
No unchanged relevant lines lost coverage.
Overall coverage remained the same at 54.206%
Totals
Change from base Build<PHONE_NUMBER>:
0.0%
Covered Lines:
116
Relevant Lines:
214
💛 - Coveralls
The angular velocity is properly documented but misleadingly in the IMU frame, while the pitch is that of the base frame.
This PR updates the observation vector so that the angular velocity is in the base frame.
@ManifoldFR FYI. I will get back to you once the environment has been battle-tested.
Ah, you think this is why the feedback MPC had some drift?
Ah, you think this is why the feedback MPC had some drift?
Having the angular velocity around the wrong axis wouldn't help for sure. It would probably drive cost tuning towards weights that discard velocities altogether → not the best performance.
To be followed up in https://github.com/tasts-robots/upkie/pull/76: I've converted it to a draft, will convert it back once it is ready.
|
2025-04-01T04:35:39.658761
| 2022-12-29T12:42:05
|
1513699706
|
{
"authors": [
"MoAlyousef",
"amrbashir"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11282",
"repo": "tauri-apps/tauri-mobile",
"url": "https://github.com/tauri-apps/tauri-mobile/issues/71"
}
|
gharchive/issue
|
cargo mobile init on windows passes UNC path to git -C
Hello
Running cargo mobile init passes a canonicalized UNC path on windows which is rejected by Git.
With mingw bash: (a similar error occurs when using cmd.exe and powershell).
$ cargo mobile init
Project name (wrp): hello
Stylized name (Hello): Hello
Domain (example.com): neurosrg.com
Detected template packs:
[0] bevy
[1] bevy-demo
[2] wgpu
[3] winit
[4] wry
Enter an index for a template pack above.
Template pack (0): 4
Generating base project...
fatal: cannot change to '\\?\D:\dev\tutorial3\wrp': No such file or directory
error: Failed to initialize git
Command "git -C \\\\?\\D:\\dev\\tutorial3\\wrp init" didn't complete successfully, exiting with
code 128.
This is most likely caused by this issue:
https://github.com/rust-lang/rust/issues/42869
I think just removing the call to canonicalize here should fix the issue on windows:
https://github.com/tauri-apps/tauri-mobile/blob/0b10eec445391f459af694ae6646f9dda6c60fc1/src/config/mod.rs#L132
can't reproduce with bash or powershell
Hmm maybe a toolchain issue on my side. I built cargo mobile using Rust 1.65 with stable-x86_64-pc-windows-gnu. I'll try using the msvc toolchain and report back.
Rebuilding tauri-mobile with the msvc toolchain seems to resolve the issue!
|
2025-04-01T04:35:39.662379
| 2023-08-13T19:43:35
|
1848715981
|
{
"authors": [
"ImUrX"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11283",
"repo": "tauri-apps/tauri",
"url": "https://github.com/tauri-apps/tauri/issues/7599"
}
|
gharchive/issue
|
[bug] IPC perms for plugins not given to window created from rust
Describe the bug
If you create the main window from rust and try using plugins on it, it won't have IPC permissions to do so. This only happens on Windows though; on Linux this just works.
I bypassed this by just giving IPC perms to the main window manually
"dangerousRemoteDomainIpcAccess": [
{
"domain": "tauri.localhost",
"windows": ["main"],
"plugins": ["dialog", "fs", "os", "shell", "window"]
}
]
Reproduction
Please remind me in a week if I haven't done it yet, I'm making an issue just to not forget this exists :sob:
Expected behavior
No response
Platform and versions
[✘] Environment
- OS: NixOS 23.11.0 X64
✔ webkit2gtk-4.1: 2.40.5
✘ rsvg2: not installed
Visit https://tauri.app/v1/guides/getting-started/prerequisites to learn more about tauri prerequisites
✔ rustc: 1.71.1 (eb26296b5 2023-08-03)
✔ Cargo: 1.71.1 (7f1d04c00 2023-07-29)
✔ rustup: 1.26.0 (1980-01-01)
✔ Rust toolchain: 1.71.1-x86_64-unknown-linux-gnu (overridden by '/home/uri/proyects/SlimeVR-Server/rust-toolchain.toml')
- node: 18.17.1
- npm: 9.6.7
[-] Packages
- tauri [RUST]: 2.0.0-alpha.10
- tauri-build [RUST]: 2.0.0-alpha.6
- wry [RUST]: 0.28.3
- tao [RUST]: 0.19.1
- @tauri-apps/api [NPM]: 2.0.0-alpha.5
- @tauri-apps/cli [NPM]: 2.0.0-alpha.10
[-] App
- build-type: bundle
- CSP: unset
- distDir: ../dist
- devPath: http://localhost:5173/
- framework: React
- bundler: Rollup
Stack trace
index-5d75d3d1.js:89 Scope not defined for window `local` and URL `https://tauri.localhost/onboarding/body-proportions/choose`. See https://tauri.app/v1/api/config/#securityconfig.dangerousremotedomainipcaccess and https://docs.rs/tauri/1/tauri/scope/struct.IpcScope.html#method.configure_remote_access
Wi @ index-5d75d3d1.js:89
(anonymous) @ index-5d75d3d1.js:3254
Promise.catch (async)
(anonymous) @ index-5d75d3d1.js:3254
D @ index-5d75d3d1.js:3230
(anonymous) @ index-5d75d3d1.js:3230
_ @ index-5d75d3d1.js:3230
choose:1
Uncaught (in promise) Scope not defined for window `local` and URL `https://tauri.localhost/onboarding/body-proportions/choose`. See https://tauri.app/v1/api/config/#securityconfig.dangerousremotedomainipcaccess and https://docs.rs/tauri/1/tauri/scope/struct.IpcScope.html#method.configure_remote_access
Additional context
No response
yeah, it was fixed in alpha.11
|
2025-04-01T04:35:39.668183
| 2020-01-05T21:35:32
|
545470064
|
{
"authors": [
"tensor-programming"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11284",
"repo": "tauri-apps/tauri",
"url": "https://github.com/tauri-apps/tauri/pull/281"
}
|
gharchive/pull-request
|
feat(core) add testing and refactor
What kind of change does this PR introduce? (check at least one)
[ ] Bugfix
[x] Feature
[ ] New Binding Issue #___
[ ] Code style update
[x] Refactor
[ ] Build-related changes
[ ] Other, please describe:
Does this PR introduce a breaking change? (check one)
[ ] Yes. Issue #___
[ ] No
The PR fulfills these requirements:
[x] It's submitted to the dev branch and not the master branch
[ ] When resolving a specific issue, it's referenced in the PR's title (e.g. fix: #xxx[,#xxx], where "xxx" is the issue number)
If adding a new feature, the PR's description includes:
[x] A convincing reason for adding this feature (to avoid wasting your time, it's best to open a suggestion issue first and wait for approval before working on it)
Other information:
More refactoring and test coverage. Refactoring in service of making testing easier and making rust more idiomatic.
Not finished but open for review.
Going to merge and then Ill do another pass to build out more tests.
|
2025-04-01T04:35:39.675931
| 2020-02-09T22:45:11
|
562252604
|
{
"authors": [
"jbolda",
"nothingismagick",
"tytrdev"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11285",
"repo": "tauri-apps/tauri",
"url": "https://github.com/tauri-apps/tauri/pull/409"
}
|
gharchive/pull-request
|
Migrating examples to the example repo
I'm opening this so others can monitor my changes to make sure I'm going about this correctly. Still a WIP.
What kind of change does this PR introduce? (check at least one)
[ ] Bugfix
[X] Feature
[ ] New Binding Issue #___
[ ] Code style update
[X] Refactor
[X] Build-related changes
[ ] Other, please describe:
Does this PR introduce a breaking change? (check one)
[ ] Yes. Issue #___
[ ] No
The PR fulfills these requirements:
[ ] It's submitted to the dev branch and not the master branch
[ ] When resolving a specific issue, it's referenced in the PR's title (e.g. fix: #xxx[,#xxx], where "xxx" is the issue number)
If adding a new feature, the PR's description includes:
[ ] A convincing reason for adding this feature (to avoid wasting your time, it's best to open a suggestion issue first and wait for approval before working on it)
Other information:
It seems that import may not have worked. It looks like each example has the full Tauri repo in it.
https://github.com/tauri-apps/examples/tree/feature/import-examples/examples/react/create-react-app/examples/react/next.js
@nothingismagick we should be able to emulate a very similar folder structure by checking out both repos using the new functionality: https://github.com/actions/checkout#checkout-multiple-repos-nested
We will want to get the example repo squared away first before this gets merged in. @tytrdev it may make sense to update the actions in this PR as well.
@jbolda - can this be merged now?
I think we should probably update the actions as part of this PR first.
I think the github actions are nearly working, but it needs this to merge first: https://github.com/tauri-apps/examples/pull/7
|
2025-04-01T04:35:39.688684
| 2022-06-16T23:04:48
|
1274187104
|
{
"authors": [
"AmionSky",
"FabianLars",
"McZazz",
"Specy",
"amrbashir",
"rgwood",
"wusyong"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11286",
"repo": "tauri-apps/wry",
"url": "https://github.com/tauri-apps/wry/issues/616"
}
|
gharchive/issue
|
Webview frozen until mouse moves on Windows
Describe the bug
On Windows (WebView2), after clicks sometimes nothing happens until the mouse moves or until ~2 seconds pass.
The bug was discussed before at https://github.com/tauri-apps/tauri/issues/3691
Steps To Reproduce
cargo run --example hello_world
Open and close the hamburger menu by clicking on it and don't move the mouse
Sometimes it won't open instantly but a second later
Expected behavior
Menu opens/closes instantly always
Platform and Versions (please complete the following information):
OS: Windows 11 Pro 22000.739
Rustc: 1.61.0 (fe5b13d68 2022-05-18)
Would you assign yourself to resolve this bug?
[ ] Yes
[x] No
Additional context
Last working commit: https://github.com/tauri-apps/wry/commit/3a6eefae66c41da0f09293911675cb3c094e58b5
First commit with the bug: https://github.com/tauri-apps/wry/commit/219d20ce66a6bdf6c3e1af6156c9f2a74f2eed29
Since that commit is a merge and the commits inside of it are tricky to build I'm not certain of this but the possible culprit:
https://github.com/tauri-apps/wry/pull/414 (https://github.com/tauri-apps/wry/pull/477/commits/e056fb2a15e29de1b8ed85a548cfeb1f85031357)
From my testing the webview2-rs example does not have this issue
I found that it won't happen if the control flow is poll, and Device Motion Events seem to be sent like crazy.
Might be worth it to replace tao with winit and test it.
Okay, I got some new findings. I tried more approaches like creating a minimal webview, using winit instead, and toggling several options. They all failed. Here is the branch I tested: https://github.com/tauri-apps/wry/tree/wv2-frozon-where-es-mah-suit
What's interesting is that if I don't call any Windows API in thread_event_target_callback, the freeze is gone. So you either turn to ControlFlow::Poll, or remove the event target callback in tao's implementation.
I'll try to see if I can recreate it in webview2-rs.
Is this the ControlFlow::Poll in "event_loop.rs" (found in the .cargo\registry\src\githubsomenumber\tao-0.11.2\src\platform_impl\windows folder)? I poked around with it for a bit, but am unable to even hack this file to bits such that it causes a crash, or gives any response showing that my changes are propagating to a build.
Is there some step prior to or in addition to doing "cargo tauri build" that I should do to make sure whatever I change in the dependency source code reflects in a build?
I have literally all summer to figure it out :D
@McZazz Thanks for willing to help!
I have a fix in https://github.com/tauri-apps/tao/pull/427. But I'm still testing if this will bring any subtle change.
We have lighter PR in https://github.com/tauri-apps/tao/pull/465
But it probably need some more tests.
@AmionSky Did you try with DeviceEventFilter::Always?
@wusyong I have not. Sorry. With the filter set it fixes this
i'm also having the same issue
I've been experiencing this in Tauri for a while. It's a little difficult to understand how wry and the fix in tao relate to Tauri; am I correct in understanding that the fix can't be used in Tauri yet?
@rgwood In theory you can tell Cargo to use only wry from git, but iirc wry and tauri changed a bit too much compared to 1.0.5 so you probably need to use tauri from git instead, see https://tauri.app/v1/guides/faq#how-can-i-use-unpublished-tauri-changes
this now has landed in tauri, do you still face the same issue?
Looks like it's fixed. No longer have the issue in wry or tauri.
|
2025-04-01T04:35:39.692971
| 2021-09-07T08:51:20
|
989748463
|
{
"authors": [
"cronokirby"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11287",
"repo": "taurusgroup/multi-party-sig",
"url": "https://github.com/taurusgroup/multi-party-sig/issues/66"
}
|
gharchive/issue
|
Implement a more configurable logging system
Right now we end up having prints inside of our handler, but we should really configure things to accept some kind of logger interface instead.
We weren't actually logging aside from a few debug points, so removing the logs is simpler.
|
2025-04-01T04:35:39.696952
| 2015-04-12T18:48:34
|
67943334
|
{
"authors": [
"nhouse",
"oberstet"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11288",
"repo": "tavendo/AutobahnJS",
"url": "https://github.com/tavendo/AutobahnJS/issues/136"
}
|
gharchive/issue
|
"The connection to ws://.. was interrupted while the page was loading" error in FireFox
I'm using the latest versions of AutobahnJS and Thruway.
In FireFox I'm getting the following error in the console on page load:
The connection to ws://dev.pricewombat.com:9090/ was interrupted while the page was loading.
I found these topics on the discussion:
http://stackoverflow.com/questions/4812686/closing-websocket-correctly-html5-javascript
http://stackoverflow.com/questions/14140414/websocket-interrupted-while-page-is-loading-on-firefox-for-socket-io
Based on the answers there, I tried adding this inside of connection.onopen():
$(window).on('beforeunload', function(){
connection.close();
});
I also tried the non-jQuery version:
window.onbeforeunload = function() {
connection.close();
};
But the error is still there.
Related FireFox bug reports:
https://bugzilla.mozilla.org/show_bug.cgi?id=712329
https://bugzilla.mozilla.org/show_bug.cgi?id=765738
Is there any way to eliminate this error?
can't reproduce (using Crossbar.io)
|
2025-04-01T04:35:39.720316
| 2023-08-22T10:50:54
|
1861201517
|
{
"authors": [
"romain-cambonie"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11289",
"repo": "taxi-gestion/client",
"url": "https://github.com/taxi-gestion/client/pull/57"
}
|
gharchive/pull-request
|
feat/planning-actions
Please review before merging.
:tada: This PR is included in version 1.24.2 :tada:
The release is available on GitHub release
Your semantic-release bot :package::rocket:
|
2025-04-01T04:35:39.723202
| 2019-12-12T04:23:59
|
536749139
|
{
"authors": [
"taylorlu",
"tranquangchung"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11290",
"repo": "taylorlu/Speaker-Diarization",
"url": "https://github.com/taylorlu/Speaker-Diarization/issues/25"
}
|
gharchive/issue
|
How to caculate DER, EER ?
Hello!
Thank you @taylorlu for all your work here.
I am researching on speaker diarization. I have done all tutorial from you, but i don't know how to evaluation EER, DER.
Can you support me to create ground trust and code to evaluation ?
Thank you
You mean Diarization Error Rate(DER) and Equal Error Rate(EER)?
You can refer to SimpleDER and VGG-Speaker-Recognition for more details.
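For example, a minimal sketch of computing DER with the SimpleDER package (assuming its DER(ref, hyp) interface; the segments below are made-up numbers):
import simpleder
# Each segment is (speaker_label, start_seconds, end_seconds).
reference = [("A", 0.0, 1.0), ("B", 1.0, 1.5), ("A", 1.6, 2.1)]
hypothesis = [("A", 0.0, 0.8), ("B", 0.8, 1.4), ("A", 1.5, 2.1)]
error = simpleder.DER(reference, hypothesis)
print("DER: {:.3f}".format(error))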
thank you, the 2 projects you referred to are very useful for me
|
2025-04-01T04:35:39.778255
| 2012-01-10T22:56:12
|
2798132
|
{
"authors": [
"misfo",
"tbranyen"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11291",
"repo": "tbranyen/backbone-boilerplate",
"url": "https://github.com/tbranyen/backbone-boilerplate/issues/23"
}
|
gharchive/issue
|
Comment clarification
evt.preventDefault() does not prevent event bubbling. return false or evt.stopPropagation() is needed to stop bubbling.
Thanks for the project!
Good catch, thank you.
|
2025-04-01T04:35:39.824912
| 2023-08-07T22:19:35
|
1840282092
|
{
"authors": [
"devsdocs",
"levlam"
],
"license": "BSL-1.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11292",
"repo": "tdlib/telegram-bot-api",
"url": "https://github.com/tdlib/telegram-bot-api/issues/451"
}
|
gharchive/issue
|
Issue with hosting multiple local bot
Hi, I want to run multiple local bots on the same local bot api server, on port 8081, but whenever one bot is started, the other previously started bot gets terminated. Why? Do any additional args need to be passed when starting the bot api server?
my command
./telegram-bot-api --api-id=123 --api-hash="123" --local
What do you mean by "the other previously started bot gets terminated"? The Bot API server has no way to "terminate" the bot.
The other local bot that already started gets terminated. I think this has something to do with the localhost port maybe, any idea?
Bots use the server on the given port, they don't interfere with the server in any way. What do you mean by "the other local bot", and who terminates it?
|
2025-04-01T04:35:39.850133
| 2023-10-15T02:51:47
|
1943665446
|
{
"authors": [
"AbidMHussain",
"joeytroy"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11293",
"repo": "team-inbox/inbox-reborn",
"url": "https://github.com/team-inbox/inbox-reborn/issues/77"
}
|
gharchive/issue
|
Auto Collapse the Header
Allow collapsing of the header and left panels, similar to the Simple Gmail Screen (https://chrome.google.com/webstore/detail/simple-gmail-screen/aoadinglmfmhaegojfdbeoljlnjabmkd?hl=en) - right now, I use this in conjunction with Reborn to achieve the auto-hiding of the top search field, which takes up valuable space.
Since there is another plugin to support this and no one has looked to add this feature I will close this issue.
|
2025-04-01T04:35:39.854233
| 2023-08-07T18:06:49
|
1839975594
|
{
"authors": [
"team-moeller"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11294",
"repo": "team-moeller/better-access-open-ai",
"url": "https://github.com/team-moeller/better-access-open-ai/issues/10"
}
|
gharchive/issue
|
Add function to calculate api endpoint
Add a function to calculate the api endpoint.
The endpoint depends on the model used
Source: https://platform.openai.com/docs/guides/gpt
Newer models (2023–) | gpt-4, gpt-3.5-turbo
https://api.openai.com/v1/chat/completions
Legacy models (2020–2022) | text-davinci-003, text-davinci-002, davinci, curie, babbage, ada
https://api.openai.com/v1/completions
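A minimal sketch of such a function, shown in Python for illustration rather than the add-in's own VBA; the prefix check is an assumption based on the mapping above:
def get_api_endpoint(model: str) -> str:
    # Newer chat models use the chat completions endpoint.
    if model.startswith(("gpt-4", "gpt-3.5-turbo")):
        return "https://api.openai.com/v1/chat/completions"
    # Legacy completion models (text-davinci-003, davinci, curie, babbage, ada, ...).
    return "https://api.openai.com/v1/completions"
assert get_api_endpoint("gpt-4") == "https://api.openai.com/v1/chat/completions"
assert get_api_endpoint("text-davinci-003") == "https://api.openai.com/v1/completions"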
Released with version 0.92.04
|
2025-04-01T04:35:39.886337
| 2023-05-10T10:58:44
|
1703664105
|
{
"authors": [
"dagdome"
],
"license": "BSD-3-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11295",
"repo": "teamdigitale/padigitale2026.gov.it-site",
"url": "https://github.com/teamdigitale/padigitale2026.gov.it-site/pull/645"
}
|
gharchive/pull-request
|
Update faq.yml
Description
As already mentioned in #ISSUE_NUMBER, this PR tackles:
...
...
...
In particular, the ...
Checklist
[ ] I followed the indications in the CONTRIBUTING
[ ] The documentation related to the proposed change has been updated accordingly (also comments in code).
[ ] Have you written new tests for your core changes, as applicable?
[ ] Have you successfully ran tests with your changes locally?
[ ] Ready for review! :rocket:
Fixes
Fixes #
@bfabio<EMAIL_ADDRESS>please
|
2025-04-01T04:35:39.917358
| 2023-08-22T01:53:22
|
1860475353
|
{
"authors": [
"abitrolly",
"jhheider",
"mxcl"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11296",
"repo": "teaxyz/cli",
"url": "https://github.com/teaxyz/cli/issues/692"
}
|
gharchive/issue
|
pkg init fails
README.md says this should work.
$ git clone https://github.com/teaxyz/pantry
$ cd pantry
$ pkg init
But it doesn't.
$ pkg init
error: pantry not found: /workspace/gitlab-development-kit/projects
$ DEBUG=1 tea -E
{
tea: "/home/gitpod/.tea/tea.xyz/v0.39.6/bin/tea",
deno: "1.36.0",
v8: "<IP_ADDRESS>",
typescript: "5.1.6"
}
/workspace/gitlab-development-kit/pantry/tea.yaml
/workspace/gitlab-development-kit/.ruby-version
/workspace/gitlab-development-kit/package.json
/workspace/gitlab-development-kit/Gemfile
resolving package graph
error: {}
I can try removing files, but it is still unclear why the error happens, and if there is a right way :tm: to fix it.
The right fix is that we shouldn't be erroring if we hit an unparsable file, but it seems like we are. Most likely Gemfile, since it's sequential.
Moved pantry checkout to the root.
$ DEBUG=1 tea -E
{
tea: "/home/gitpod/.tea/tea.xyz/v0.39.6/bin/tea",
deno: "1.36.0",
v8: "<IP_ADDRESS>",
typescript: "5.1.6"
}
/workspace/pantry/tea.yaml
resolving package graph
error: {}
The error still happens.
Stranger thing 1: pantry error reports sibling directory, which makes no sense.
$ tea pkg init
error: pantry not found: /workspace/gitlab-development-kit/projects
$ pwd
/workspace/pantry
Stranger thing 2: with DEBUG=1 the error is silenced.
$ DEBUG=1 tea pkg init
{
tea: "/home/gitpod/.tea/tea.xyz/v0.39.6/bin/tea",
deno: "1.36.0",
v8: "<IP_ADDRESS>",
typescript: "5.1.6"
}
error: {}
Well, that is peculiar. Have to see if there's more debugging feasible.
More debug output is definitely needed. After restarting the workspace, installing tea again and reentering bash I've got another error.
$ tea pkg init
error: TEA_PANTRY_PATH is not set
$ pwd
/workspace/pantry
TEA_PANTRY_PATH is set by the developer environment of pantry. If it's failing to be set, you can just set it yourself: TEA_PANTRY_PATH=$PWD
The issues detailed in this thread are all fixed by 1.0.0-alpha.1 which I hope to release next week.
Won’t happen with v1 due to developer environments being explicit, so we won’t keep going down a tree unless you explicitly added it. Will close at v1 release.
Like: hopefully won't happen. We don't fully understand the bug here, but I suspect it was due to environments persisting when they shouldn't.
closing, reopen if it still exists in v1
|
2025-04-01T04:35:39.920064
| 2022-06-10T16:38:12
|
1267778523
|
{
"authors": [
"Sunywdev",
"kensoh",
"liunaqq"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11297",
"repo": "tebelorg/RPA-Python",
"url": "https://github.com/tebelorg/RPA-Python/issues/391"
}
|
gharchive/issue
|
I want to paste the copied content into the input box - make browser in focus
here is my code
import rpa as r
r.init(chrome_browser=True, visual_automation=True)
r.url('https://www.baidu.com')
sf = 'zzzz'
r.clipboard(sf)
r.click('//*[@id="kw"]')
r.keyboard('[ctrl]v')
I referenced the content here: https://github.com/tebelorg/RPA-Python/issues/73
Hi @Sunywdev, try doing a r.click('image.png') before you do the r.keyboard() step. image.png can be the Chrome browser icon or some background on the webpage. The most likely reason is the web browser is not in the foreground, so [ctrl]v ends up in some other application that is in the foreground at the time r.keyboard() is executed. Doing a click visually ensures that the Chrome browser is in the foreground and in focus to paste data into it.
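For example, the original script could be adjusted like this (browser.png is a hypothetical snapshot image of the Chrome icon or some webpage background):
import rpa as r
r.init(chrome_browser=True, visual_automation=True)
r.url('https://www.baidu.com')
r.clipboard('zzzz')
# click a visual anchor first so the browser is in the foreground
r.click('browser.png')
r.click('//*[@id="kw"]')
r.keyboard('[ctrl]v')
r.close()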
I cannot use r.click('XX.png') or r.click(XX, XX), I am bad at this. Do you have any other solutions?
Hi @liunaqq , can you share more on what error message do you get and the exact details of your code? Using above need visual automation.
|
2025-04-01T04:35:39.922760
| 2019-06-25T10:58:07
|
460354752
|
{
"authors": [
"kensoh"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11298",
"repo": "tebelorg/TagUI-Python",
"url": "https://github.com/tebelorg/TagUI-Python/issues/16"
}
|
gharchive/issue
|
Cleaner debug log tagui_python.log when single quote ' is used [done]
Raising an issue to make a commit on above improvement
Upstream TagUI project has a limitation in live mode. It was a tradeoff to enable dynamic variables working for selectors and parameters in live mode. As a result, TagUI for Python sends the following string to TagUI when a single quote ' is used as parameter (non-identifier parameter).
'+"\'"+'
With the issue https://github.com/kelaberetiv/TagUI/issues/465 raised by user, an improvement is made upstream, using a solution for a similar problem while working on this personal side project TagUI for Python. Thus a commit can now be made here that replaces single quote ' for non-identifier parameter with
\'
This may seem like a small improvement, but it helps clarify in debug log tagui_python.log when t.debug(True) is set. Otherwise, whenever there is a ' it would result in some roundabout escape sequence above due to a limitation in upstream live mode. This upcoming commit fixes that.
Deployed as v1.5. To use, pip install tagui --upgrade
|
2025-04-01T04:35:39.948758
| 2020-09-08T19:18:34
|
696115845
|
{
"authors": [
"Smolations"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11299",
"repo": "techfort/LokiJS",
"url": "https://github.com/techfort/LokiJS/issues/862"
}
|
gharchive/issue
|
Record protos
As I was combing examples to figure out how to do stuff, I saw that the loki-continuum.js example includes db options which set an object prototype wrapper (e.g. Fund and dbOptions.funds = { proto: Fund }). However, there is no documentation on how this is to work, exactly. I assumed that all db objects would be instantiated versions, but maybe it only works when adding records? I was just trying to fetch the objects in a collection using this method but none of the new fields i added (not included in original data) were present on the returned objects, leading me to believe that find queries don't automatically instantiate the proto wrapper. Is this the case @techfort ?
Leave this issue alone, stale bot, you necessary but annoying lil thing. Let's keep this going to see how long @techfort intends to ignore it. 😏
You will never win, @stalebot. You cannot shake my resolve.
|
2025-04-01T04:35:40.051604
| 2022-03-18T13:46:06
|
1173610648
|
{
"authors": [
"BeeGrech",
"pfdaly"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11301",
"repo": "teemtee/tmt",
"url": "https://github.com/teemtee/tmt/issues/1106"
}
|
gharchive/issue
|
tmt import of metadata to support dependencies - rework repoRequires
Evaluating the fix for #1002 #1043
Test dependencies from the metadata file are not translated to the resulting main.fmf.
As discussed here, repoRequires for restraint is more along the lines of additional scripts required for the test. I think it's more like an "include this directory or script for the sync"?
tmt test import --restraint
(metadata)
[General]
name=stress/stress-ng
owner=Jeff Bastian<EMAIL_ADDRESS>
description=Run stress-ng tests
license=GPLv2
confidential=no
destructive=no
[restraint]
entry_point=bash ./runtest.sh
dependencies=wget;git;time;patch;bzip2;autoconf;glib2-devel;make;gettext;automake;gcc;libtool;bison;flex;libcap-devel;zlib-devel;beakerlib
softDependencies=dmidecode;rpmdevtools;libaio-devel;libattr-devel;libbsd-devel;libgcrypt-devel;libsctp-devel;keyutils-libs;beakerlib-redhat
repoRequires=cki_lib
Actual
(main.fmf)
summary: Run stress-ng tests
description: |
Run stress-ng tests
git://kernel.ubuntu.com/cking/stress-ng.git
Note: If using classes, the timeout is per stressor, not the
whole class. Run 'stress-ng --class interrupt?' to see all the
stressors in the interrupt class. Multiply the number of
stressors by the timeout to get the expected run time.
This task uses a list of stressors (see the *.stressors files)
with a 5 second timeout (by default) for each. There are 184
stressors, so 184 * 5 = expected runtime of 920 seconds.
TASK PARAMETERS
---------------
GIT_URL = URL to stress-ng git repo
default: git://kernel.ubuntu.com/cking/stress-ng.git
GIT_BRANCH = version of stress-ng to check out
default: see runtest.sh
contact: Jeff Bastian<EMAIL_ADDRESS>
test: ./runtest.sh
framework: beakerlib
require:
- cki_lib
recommend:
- dmidecode
- rpmdevtools
- libaio-devel
- libattr-devel
- libbsd-devel
- libgcrypt-devel
- libsctp-devel
- keyutils-libs
- beakerlib-redhat
extra-summary: stress/stress-ng
extra-task: stress/stress-ng
Expected
(main.fmf)
summary: Run stress-ng tests
description: |
Run stress-ng tests
git://kernel.ubuntu.com/cking/stress-ng.git
Note: If using classes, the timeout is per stressor, not the
whole class. Run 'stress-ng --class interrupt?' to see all the
stressors in the interrupt class. Multiply the number of
stressors by the timeout to get the expected run time.
This task uses a list of stressors (see the *.stressors files)
with a 5 second timeout (by default) for each. There are 184
stressors, so 184 * 5 = expected runtime of 920 seconds.
TASK PARAMETERS
---------------
GIT_URL = URL to stress-ng git repo
default: git://kernel.ubuntu.com/cking/stress-ng.git
GIT_BRANCH = version of stress-ng to check out
default: see runtest.sh
contact: Jeff Bastian<EMAIL_ADDRESS>
test: ./runtest.sh
framework: beakerlib
require:
- wget
- git
- time
- patch
- bzip2
- autoconf
- glib2-devel
- make
- gettext
- automake
- gcc
- libtool
- bison
- flex
- libcap-devel
- zlib-devel
- beakerlib
recommend:
- dmidecode
- rpmdevtools
- libaio-devel
- libattr-devel
- libbsd-devel
- libgcrypt-devel
- libsctp-devel
- keyutils-libs
- beakerlib-redhat
extra-summary: stress/stress-ng
extra-task: stress/stress-ng
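For illustration, the translation being asked for boils down to splitting the semicolon-separated dependencies value into a require list, roughly like this (a sketch, not tmt's actual implementation):
line = "dependencies=wget;git;time;patch;bzip2;autoconf;glib2-devel;make;gettext;automake;gcc;libtool;bison;flex;libcap-devel;zlib-devel;beakerlib"
key, _, value = line.partition("=")
require = [pkg.strip() for pkg in value.split(";") if pkg.strip()]
# require now holds ['wget', 'git', ..., 'beakerlib'] and would be emitted under 'require:' in main.fmf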
This looks to be a minor issue which arose due to my unfamiliarity with Restraint.
I've pushed a Pull Request for a fix for this issue here: #1110
|
2025-04-01T04:35:40.060225
| 2023-04-13T14:32:28
|
1666545749
|
{
"authors": [
"dav-pascual",
"happz",
"psss",
"sbertramrh",
"thrix"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11302",
"repo": "teemtee/tmt",
"url": "https://github.com/teemtee/tmt/issues/1989"
}
|
gharchive/issue
|
Unable to clean up guest provisioned by beaker plugin
Using separate steps to provision and finish a plan with a guest provisioned using the beaker plugin results in a traceback.
tmt run provision -h beaker
tmt run --last login
tmt run --last finish
The following traceback is generated:
Traceback (most recent call last):
File "/usr/bin/tmt", line 62, in <module>
tmt.cli.main()
File "/usr/lib/python3.11/site-packages/click/core.py", line 1130, in __call__
return self.main(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/site-packages/click/core.py", line 1055, in main
rv = self.invoke(ctx)
^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/site-packages/click/core.py", line 1657, in invoke
return _process_result(sub_ctx.command.invoke(sub_ctx))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/site-packages/click/core.py", line 1689, in invoke
return _process_result(rv)
^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/site-packages/click/core.py", line 1626, in _process_result
value = ctx.invoke(self._result_callback, value, **ctx.params)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/site-packages/click/core.py", line 760, in invoke
return __callback(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/site-packages/click/decorators.py", line 26, in new_func
return f(get_current_context(), *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/site-packages/tmt/cli.py", line 357, in finito
click_context.obj.run.go()
File "/usr/lib/python3.11/site-packages/tmt/base.py", line 2707, in go
plan.go()
File "/usr/lib/python3.11/site-packages/tmt/base.py", line 1675, in go
self.finish.go()
File "/usr/lib/python3.11/site-packages/tmt/steps/finish/__init__.py", line 136, in go
guest.remove()
File "/usr/lib/python3.11/site-packages/tmt/steps/provision/mrack.py", line 504, in remove
self.api.delete()
^^^^^^^^
File "/usr/lib/python3.11/site-packages/tmt/steps/provision/mrack.py", line 373, in api
self._api = BeakerAPI(self)
^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/site-packages/tmt/steps/provision/mrack.py", line 219, in update_wrapper
return asyncio.run(func(*args, **kwargs))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib64/python3.11/asyncio/runners.py", line 190, in run
return runner.run(main)
^^^^^^^^^^^^^^^^
File "/usr/lib64/python3.11/asyncio/runners.py", line 118, in run
return self._loop.run_until_complete(task)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib64/python3.11/asyncio/base_events.py", line 653, in run_until_complete
return future.result()
^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/site-packages/tmt/steps/provision/mrack.py", line 277, in __init__
global_context = mrack.context.global_context
^^^^^^^^^^^^^
AttributeError: type object 'Any' has no attribute 'context'
It seems that after wake() the context is not well prepared. @Tiboris, could you please have a look?
I think this might be similar to the problem I ran into. I ran a test to end before finish and then trying tmt clean -v resulted in this error:
[sbertram@sbertram beaker]$ tmt clean -v
clean
guests
Stopping guests in run '/var/tmp/tmt/run-033' plan '/qcom-builder/gen3/plan'.
finish
Traceback (most recent call last):
File "/home/sbertram/.local/bin/tmt", line 14, in <module>
tmt.cli.main()
File "/home/sbertram/.local/lib/python3.8/site-packages/click/core.py", line 1130, in __call__
return self.main(*args, **kwargs)
File "/home/sbertram/.local/lib/python3.8/site-packages/click/core.py", line 1055, in main
rv = self.invoke(ctx)
File "/home/sbertram/.local/lib/python3.8/site-packages/click/core.py", line 1657, in invoke
return _process_result(sub_ctx.command.invoke(sub_ctx))
File "/home/sbertram/.local/lib/python3.8/site-packages/click/core.py", line 1635, in invoke
rv = super().invoke(ctx)
File "/home/sbertram/.local/lib/python3.8/site-packages/click/core.py", line 1404, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "/home/sbertram/.local/lib/python3.8/site-packages/click/core.py", line 760, in invoke
return __callback(*args, **kwargs)
File "/home/sbertram/.local/lib/python3.8/site-packages/click/decorators.py", line 26, in new_func
return f(get_current_context(), *args, **kwargs)
File "/home/sbertram/.local/lib/python3.8/site-packages/tmt/cli.py", line 1513, in clean
if not clean_obj.guests():
File "/home/sbertram/.local/lib/python3.8/site-packages/tmt/base.py", line 3450, in guests
if not self._stop_running_guests(run):
File "/home/sbertram/.local/lib/python3.8/site-packages/tmt/base.py", line 3422, in _stop_running_guests
plan.finish.go()
File "/home/sbertram/.local/lib/python3.8/site-packages/tmt/steps/finish/__init__.py", line 178, in go
guest.remove()
File "/home/sbertram/.local/lib/python3.8/site-packages/tmt/steps/provision/mrack.py", line 531, in remove
self.api.delete()
File "/home/sbertram/.local/lib/python3.8/site-packages/tmt/steps/provision/mrack.py", line 401, in api
self._api = BeakerAPI(self)
File "/home/sbertram/.local/lib/python3.8/site-packages/tmt/steps/provision/mrack.py", line 221, in update_wrapper
return asyncio.run(func(*args, **kwargs))
File "/usr/lib64/python3.8/asyncio/runners.py", line 44, in run
return loop.run_until_complete(main)
File "/usr/lib64/python3.8/asyncio/base_events.py", line 616, in run_until_complete
return future.result()
File "/home/sbertram/.local/lib/python3.8/site-packages/tmt/steps/provision/mrack.py", line 305, in __init__
global_context = mrack.context.global_context
AttributeError: '_SpecialForm' object has no attribute 'context'
This is with tmt 1.26 and mrack 1.15.1.
The workaround was to do each separately and then the problems went away.
[sbertram@sbertram beaker]$ tmt clean runs -v
clean
runs
Removing workdir '/var/tmp/tmt/run-001'.
Removing workdir '/var/tmp/tmt/run-002'.
Removing workdir '/var/tmp/tmt/run-003'.
Removing workdir '/var/tmp/tmt/run-004'.
Removing workdir '/var/tmp/tmt/run-005'.
Removing workdir '/var/tmp/tmt/run-006'.
Removing workdir '/var/tmp/tmt/run-007'.
Removing workdir '/var/tmp/tmt/run-008'.
Removing workdir '/var/tmp/tmt/run-009'.
Removing workdir '/var/tmp/tmt/run-010'.
Removing workdir '/var/tmp/tmt/run-011'.
Removing workdir '/var/tmp/tmt/run-012'.
Removing workdir '/var/tmp/tmt/run-013'.
Removing workdir '/var/tmp/tmt/run-014'.
Removing workdir '/var/tmp/tmt/run-015'.
Removing workdir '/var/tmp/tmt/run-016'.
Removing workdir '/var/tmp/tmt/run-017'.
Removing workdir '/var/tmp/tmt/run-018'.
Removing workdir '/var/tmp/tmt/run-019'.
Removing workdir '/var/tmp/tmt/run-020'.
Removing workdir '/var/tmp/tmt/run-021'.
Removing workdir '/var/tmp/tmt/run-022'.
Removing workdir '/var/tmp/tmt/run-023'.
Removing workdir '/var/tmp/tmt/run-024'.
Removing workdir '/var/tmp/tmt/run-026'.
Removing workdir '/var/tmp/tmt/run-027'.
Removing workdir '/var/tmp/tmt/run-028'.
Removing workdir '/var/tmp/tmt/run-029'.
Removing workdir '/var/tmp/tmt/run-030'.
Removing workdir '/var/tmp/tmt/run-031'.
Removing workdir '/var/tmp/tmt/run-032'.
Removing workdir '/var/tmp/tmt/run-033'.
Removing workdir '/var/tmp/tmt/run-034'.
Removing workdir '/var/tmp/tmt/run-037'.
Removing workdir '/var/tmp/tmt/run-038'.
Removing workdir '/var/tmp/tmt/run-039'.
Removing workdir '/var/tmp/tmt/run-040'.
Removing workdir '/var/tmp/tmt/run-041'.
Removing workdir '/var/tmp/tmt/run-042'.
[sbertram@sbertram beaker]$ tmt clean images -v
clean
images
testcloud
warn: Directory '/var/tmp/tmt/testcloud/images' does not exist.
[sbertram@sbertram beaker]$ tmt clean image -v
clean
images
testcloud
warn: Directory '/var/tmp/tmt/testcloud/images' does not exist.
[sbertram@sbertram beaker]$ tmt clean guest -v
clean
guests
[sbertram@sbertram beaker]$ tmt clean -v
clean
guests
runs
images
testcloud
warn: Directory '/var/tmp/tmt/testcloud/images' does not exist.
@dav-pascual, could you please have a look?
@dav-pascual, could you please have a look?
@psss Sure! I will take a look after I am back from vacation (starting tmr, until the end of august)
@dav-pascual hello, do you still plan to dedicate some of your time to work on this issue?
@dav-pascual, any update on this one?
I'll investigate this issue shortly! (beggining of oct), sorry for the delay :)
@dav-pascual, thanks! Assigning to you then. Will you be able to finish this by the end of October to make it into 1.38?
@dav-pascual howdy, any luck?
@dav-pascual, hi! Do you still plan to work on this?
|
2025-04-01T04:35:40.069269
| 2024-10-09T16:05:25
|
2576359322
|
{
"authors": [
"happz",
"psss",
"skycastlelily"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11303",
"repo": "teemtee/tmt",
"url": "https://github.com/teemtee/tmt/pull/3271"
}
|
gharchive/pull-request
|
Translate beaker.pool hardware requirement properly for mrack
Related to https://github.com/teemtee/tmt/issues/2346
Pull Request Checklist
[x] implement the feature
[ ] write the documentation
[ ] extend the test coverage
[ ] update the specification
[ ] adjust plugin docstring
[ ] modify the json schema
[ ] mention the version
[ ] include a release note
@skycastlelily I think that with the last push, you overwrote the original changes adding the implementation to tmt/steps/provision/mrack.py. All that's left now is the operator update (pre-commit & mypy complain about the list vs tuple, BTW).
It's really better to avoid force pushes; one can easily lose changes. We recommend adding new changes to PRs with new commits, e.g. named "squash: ...", eventually they would be squashed before merging, when all is done.
I think that with the last push, you overwrote the original changes
adding the implementation
That's on purpose, I already implemented beaker.pool in the merged #3074. After thinking twice, I prefer to keep the implementation as it is.
I see, makes sense.
BTW pre-commit is still failing, that's not related to this discussion, and should be fixed.
Updated^^
All failures are irrelevant, merging.
|
2025-04-01T04:35:40.076371
| 2016-12-06T06:45:23
|
193695853
|
{
"authors": [
"myw8",
"tegg89"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11304",
"repo": "tegg89/SRCNN-Tensorflow",
"url": "https://github.com/tegg89/SRCNN-Tensorflow/issues/1"
}
|
gharchive/issue
|
Do you have to use DCGAN with super-resolution to do something
Hi, do you have to use DCGAN with super-resolution to do something?
@myw8 This is not related to DCGAN. The reason that I included DCGAN in my references is to figure out how to design the network as a class file and follow the training steps.
|
2025-04-01T04:35:40.081965
| 2021-01-26T18:03:44
|
794445456
|
{
"authors": [
"jcardama",
"tehras",
"yashovardhan99"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11305",
"repo": "tehras/charts",
"url": "https://github.com/tehras/charts/issues/9"
}
|
gharchive/issue
|
Bar chart animation starts from 0 when bar.value is updated
I have a bar chart with a bar set with bar.value = 100. Then when I update this value to 150 the following happens:
From 100 it jumps to 0 and then animates back to 150.
I'm looking for a way so that instead of jumping to 0 and then animating to 150 it animates from 100 to the next new value which in this case is 150.
Is there a way to do this?
Not with the way I did it here :(.
You can probably save the previous BarChartData and pass that into BarChartData.forEachWithArea and then do the calculation based off of that.
It's a cool idea though, animating chart diffs, I didn't really think of those use cases :(
Not with the way I did it here :(.
You can probably save the previous BarChartData and pass that into BarChartData.forEachWithArea and then do the calculation based off of that.
It's a cool idea though, animating chart diffs, I didn't really think of those use cases :(
@tehras I tried a similar animation with a custom line chart library I was making.
I can try implementing something here.
Will using the barChartData.maxYValue be fine for that?
|
2025-04-01T04:35:40.082832
| 2022-04-01T21:03:07
|
1190268870
|
{
"authors": [
"SlinkousArt",
"melMass"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11306",
"repo": "teia-community/teia-ui",
"url": "https://github.com/teia-community/teia-ui/pull/53"
}
|
gharchive/pull-request
|
CSS stuff
Changed the labels for TEIA, H=N, OBJKT, edited listing layout a bit.
@SlinkousArt I rebased it to fix the build error
|
2025-04-01T04:35:40.086987
| 2023-10-26T15:47:30
|
1963861627
|
{
"authors": [
"luhanbing",
"tejado"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11307",
"repo": "tejado/obsidian-gpgCrypt",
"url": "https://github.com/tejado/obsidian-gpgCrypt/issues/2"
}
|
gharchive/issue
|
Show padlock icon next to filename in left sidebar for overview of which files are encrypted
For a much better user experience, it would be helpful to show a padlock icon in the file browser next to encrypted files to quickly see an overview of which files in the vault are encrypted.
Otherwise, you have to manually click on every single file in your vault and look for a padlock in the bottom right corner each time to learn which files are encrypted?
(If it's not possible for an Obsidian plugin to add an icon in the file browser in the left sidebar, then creating a separate left sidebar panel listing all encrypted files could be an alternative solution.)
Hi @luhanbing
Thanks for your comment.
Currently Obsidian doesn't know if a file is encrypted until the first read of the file, which happens when the note is opened.
To make it more clear and still efficient (so not reading hundreds of files at Obsidian start), I could implement an optional mode where notes will be renamed to a .gpg extension. Then the file type is clearly visible and can be shown in the file browser.
What do you think about this solution?
@tejado thanks for the amazing plugin!
Yes, I think the best options are either:
to create a new file in the vault for the plugin to store an index of all the encrypted files (add filename to the index when it is encrypted) and then use the index to display padlock in file browser
or, as you suggested, to rename the files with .gpg extension, which might also help prevent Obsidian indexing the encrypted file contents and trying to search encrypted files?
Implemented in release https://github.com/tejado/obsidian-gpgCrypt/releases/tag/0.2.0
I would be delighted if you could give me some feedback on this.
Unfortunately, notes with gpg file extensions have some limitations, e.g. embedding a note with gpg file extension doesn't work.
|
2025-04-01T04:35:40.088857
| 2017-05-11T19:56:24
|
228103190
|
{
"authors": [
"dunkarooftop",
"geddski",
"thorro"
],
"license": "unlicense",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11308",
"repo": "tekezo/Karabiner",
"url": "https://github.com/tekezo/Karabiner/issues/835"
}
|
gharchive/issue
|
is this project dead?
I get the whole karabiner-elements thing, but I'm wondering if this project is ever going to work on macOS? Feels abandoned.
No, Karabiner-Elements is a replacement for Karabiner; one day Karabiner-Elements will be just like the old Karabiner, but no one knows how long it will take :(
In the meantime, I use https://ei-kana.appspot.com/ as it offers more functionality than Elements. I only miss per-app settings.
|
2025-04-01T04:35:40.090950
| 2023-11-11T08:41:43
|
1988841664
|
{
"authors": [
"Tudzer"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11309",
"repo": "tekgator/GameLib.NET",
"url": "https://github.com/tekgator/GameLib.NET/pull/23"
}
|
gharchive/pull-request
|
Fix ArgumentException when GOG game has no EXE path specified
I've encountered a case where a GOG game registry entry (Cyberpunk 2077 - EP1) has no executable path specified, as it's not a base game. The plugin uses string.Empty as the default value when a registry value is not specified, but string.Empty is not a valid argument for Path.GetDirectoryName and will lead to an unhandled ArgumentException being thrown by the NormalizePath method.
Here is a full dump of that registry entry in case you are interested:
[HKEY_LOCAL_MACHINE\SOFTWARE\WOW6432Node\GOG.com\Games\1256837418]
"gameName"="Cyberpunk 2077 - EP1"
"gameID"="1256837418"
"productID"="1256837418"
"language"="english"
"lang_code"="en-US"
"path"="C:\\Games\\Cyberpunk 2077"
"startMenu"="Cyberpunk 2077 - EP1"
"ver"="2.0_PhL"
"uninstallCommand"="C:\\Games\\Cyberpunk 2077\\unins001.exe"
"dependsOn"="1423049311"
"supportLink"=""
"BUILDID"="56896837368890911"
"INSTALLDATE"="2023-09-27 15:18:00"
|
2025-04-01T04:35:40.096670
| 2022-10-17T17:50:40
|
1411976830
|
{
"authors": [
"johnbent"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11310",
"repo": "tekinged/missing",
"url": "https://github.com/tekinged/missing/issues/217"
}
|
gharchive/issue
|
MESEKEDUNG
MESEKEDUNG created by<EMAIL_ADDRESS>on 2017-03-04 18:49:02
<EMAIL_ADDRESS>replied, Anyone please confirm that MESEKEDUNG means 'about to get crowded'?
<EMAIL_ADDRESS>replied, On Sat, Mar 4, 2017 at 6:49 PM John Bent (Debugle) <<EMAIL_ADDRESS>wrote:
--- Write ABOVE THIS LINE to reply ---
<EMAIL_ADDRESS>replied, Ulang's reply somehow didn't come through. Sorry, sometimes debugle is weird. But she IM'd me and confirmed yes. Also, she says that it can be used to indicate pants becoming too tight. :)
|
2025-04-01T04:35:40.098061
| 2023-05-06T13:14:59
|
1698627883
|
{
"authors": [
"d3287t328",
"teknium1"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11311",
"repo": "teknium1/GPTeacher",
"url": "https://github.com/teknium1/GPTeacher/pull/7"
}
|
gharchive/pull-request
|
Create json2markdown.py
Proposed helper script to convert all the JSON files in the repo into Markdown in one shot.
Would it be a good idea to also create a prebuilt Markdown version of all the datasets, in addition to this script?
The only impact is a significantly lower number of tokens for each run, which means faster startup time on each run and cost savings for anyone using the API.
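For reference, a minimal sketch of what such a converter could look like (the instruction/response key names and the CLI handling are assumptions, not the repo's actual schema):
import json
import sys
from pathlib import Path

def json_to_markdown(path: Path) -> str:
    # Assumes each file holds a list of objects with "instruction" and "response" keys;
    # adjust the key names to the actual dataset schema.
    entries = json.loads(path.read_text(encoding="utf-8"))
    lines = []
    for entry in entries:
        lines.append("## " + entry.get("instruction", "").strip())
        lines.append("")
        lines.append(entry.get("response", "").strip())
        lines.append("")
    return "\n".join(lines)

if __name__ == "__main__":
    root = Path(sys.argv[1]) if len(sys.argv) > 1 else Path(".")
    for json_file in root.rglob("*.json"):
        json_file.with_suffix(".md").write_text(json_to_markdown(json_file), encoding="utf-8")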
|
2025-04-01T04:35:40.128847
| 2021-08-24T10:17:44
|
977940490
|
{
"authors": [
"imjasonh",
"tlawrie"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11312",
"repo": "tektoncd/pipeline",
"url": "https://github.com/tektoncd/pipeline/issues/4187"
}
|
gharchive/issue
|
ResolveEntrypoints looks up manifest breaking Dockhub Registry Limit
Expected Behavior
The (global) cache would be used for the image command across all TaskRuns.
Actual Behavior
Every new TaskRun generated (with no command set) seems to make its own call out to Dockerhub to retrieve the manifest, causing a count against the Dockerhub limits.
Steps to Reproduce the Problem
Create TaskRuns directly (not using a PipelineRun)
Don't specify a command
Run enough times to break the Dockerhub limit
Version Info
Kubernetes version:
Output of kubectl version:
Server Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.0+bafe72f", GitCommit:"bafe72fb05eddc8246040b9945ec242b9f805935", GitTreeState:"clean", BuildDate:"2021-03-14T16:01:39Z", GoVersion:"go1.15.5", Compiler:"gc", Platform:"linux/amd64"}
Tekton Pipeline version: v0.24.1
Additional Information
Pod Build resolveEntrypoints
Entrypoint Lookup
Looking at `img, err = cache.Get(ctx, origRef, namespace, serviceAccountName)`, I am unsure where the function it's calling is defined. I see that it comes from the builder.
Does this mean that it only applies to each instance of a TaskRun or PipelineRun, and hence across TaskRuns it's always going to go out to DockerHub?
The call that hits the remote registry is here: https://github.com/tektoncd/pipeline/blob/1f5980f8c8a05b106687cfa3e5b3193c213cb66e/pkg/pod/entrypoint_lookup_impl.go#L80
We could potentially save some rate-limited requests by calling remote.Head first, which resolves a tag to a digest without incurring a rate-limited request, and if that digest is in the cache, we can get its entrypoint. If not, we still need to call remote.Image to get that entrypoint value.
There's already a global LRU cache for digest->entrypoint, we just need to use it a bit more intelligently.
https://github.com/tektoncd/pipeline/pull/4188 is an attempt to address this.
Amazing. Thanks, @imjasonh. I've learned a little bit more about the code structure yet again today.
Clarification question: is it storing in the cache with the digest as the key or the image name as the key?
The use case I am thinking of is different TaskRuns (with different parameters) that reference the same image. In this case, will the image be in the cache?
The LRU cache stores digest->entrypoint. If the image isn't specified by digest, we have to resolve the name->digest, which we used to always do with an image pull, and with #4188 we'll try with a HEAD request first.
|
2025-04-01T04:35:40.135112
| 2023-05-17T20:47:00
|
1714633141
|
{
"authors": [
"Yongxuanzhang",
"lbernick"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11313",
"repo": "tektoncd/pipeline",
"url": "https://github.com/tektoncd/pipeline/issues/6680"
}
|
gharchive/issue
|
git-clone related tests are failing
Expected Behavior
The ci should pass.
Actual Behavior
We're having failing pull-tekton-pipeline-alpha-integration-tests, pull-tekton-pipeline-beta-integration-tests and pull-tekton-pipeline-integration-tests. This is blocking all of our PRs starting from today.
err logs can be viewed here:
https://prow.tekton.dev/view/gs/tekton-prow/pr-logs/pull/tektoncd_pipeline/6673/pull-tekton-pipeline-alpha-integration-tests/1658899667351506944
+ /ko-app/git-init '-url=https://github.com/tektoncd/pipeline' '-revision=' '-refspec=' '-path=/workspace/output/' '-sslVerify=true' '-submodules=true' '-depth=1' '-sparseCheckoutDirectories='
{"level":"error","ts":1684348699.0978005,"caller":"git/git.go:55","msg":"Error running git [fetch --recurse-submodules=yes --depth=1 origin --update-head-ok --force HEAD]: exit status 128\nfatal: unable to access 'https://github.com/tektoncd/pipeline/': Could not resolve host: github.com\n","stacktrace":"github.com/tektoncd/pipeline/pkg/git.run\n\tgithub.com/tektoncd/pipeline/pkg/git/git.go:55\ngithub.com/tektoncd/pipeline/pkg/git.Fetch\n\tgithub.com/tektoncd/pipeline/pkg/git/git.go:150\nmain.main\n\tgithub.com/tektoncd/pipeline/cmd/git-init/main.go:53\nruntime.main\n\truntime/proc.go:225"}
{"level":"fatal","ts":1684348699.097931,"caller":"git-init/main.go:54","msg":"Error fetching git repository: failed to fetch [HEAD]: exit status 128","stacktrace":"main.main\n\tgithub.com/tektoncd/pipeline/cmd/git-init/main.go:54\nruntime.main\n\truntime/proc.go:225"}
The error is from the git-clone task:
https://github.com/tektoncd/catalog/blob/835896be3d306356f896c51d1b53fd1130ecfa6a/task/git-clone/0.9/git-clone.yaml#L224-L232
Steps to Reproduce the Problem
Open a PR and see the ci.
Additional Info
I tried to replace the git-clone task with write-file task and those changed tests all passed. https://github.com/tektoncd/pipeline/pull/6679
Maybe one way is to replace them and move the git-clone example to no-ci
/priority critical-urgent
It seems the CI has passed since https://prow.tekton.dev/view/gs/tekton-prow/pr-logs/pull/tektoncd_pipeline/6596/pull-tekton-pipeline-beta-integration-tests/1658998986159165440
It's probably because @lbernick's fix https://github.com/tektoncd/plumbing/pull/1404 works; judging by the prow job yaml, after the runner image was updated to not use the latest tag, the CI passed.
I think it's still worth figuring out the root cause here and potentially rewriting some of our clone tests that don't need to actually clone anything 🤔
/priority critical-urgent cancel
|
2025-04-01T04:35:40.140387
| 2023-10-25T17:38:52
|
1961915979
|
{
"authors": [
"QuanZhang-William",
"Yongxuanzhang",
"chitrangpatel",
"pritidesai"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11314",
"repo": "tektoncd/pipeline",
"url": "https://github.com/tektoncd/pipeline/issues/7278"
}
|
gharchive/issue
|
unknown field EnableCELInWhenExpression
I am not sure why this is only happening in my local setup, or why the CI is not able to catch such errors:
After installing the latest using ko apply -f config/ and running a sample pipelineRun, I am experiencing a prolonged execution with this log message:
kubectl apply -f examples/v1/pipelineruns/pipeline-with-displayname.yaml
Tekton Controller logs:
{"severity":"info","timestamp":"2023-10-25T17:18:04.601Z","logger":"tekton-pipelines-controller.event-broadcaster","caller":"record/event.go:285","message":"Event(v1.ObjectReference{Kind:\"TaskRun\", Namespace:\"default\", Name:\"sum-wct7l-sum-two-numbers\", UID:\"ae4f1f5d-3008-4414-8096-e2fff3a5d932\", APIVersion:\"tekton.dev/v1\", ResourceVersion:\"14004\", FieldPath:\"\"}): type: 'Warning' reason: 'UpdateFailed' Failed to update status for \"sum-wct7l-sum-two-numbers\": admission webhook \"webhook.pipeline.tekton.dev\" denied the request: mutation failed: cannot decode incoming new object: json: unknown field \"EnableCELInWhenExpression\"","commit":"44d77de-dirty"}
Is someone available to verify this? Appreciate if anyone can reproduce this 🙏
/cc @Yongxuanzhang please help look into this!
/assign
I was able to run it on my cluster locally after checking out main and then applying ko apply -f config/:
chitrang@chitrang-macbookpro ~/go/src/github.com/tektoncd/pipeline ⇅ main kubectl apply -f examples/v1/pipelineruns/pipeline-with-displayname.yaml
task.tekton.dev/sum created
pipeline.tekton.dev/sum-pipeline created
pipelinerun.tekton.dev/sum created
chitrang@chitrang-macbookpro ~/go/src/github.com/tektoncd/pipeline ⇅ main kubectl get pr sum
NAME SUCCEEDED REASON STARTTIME COMPLETIONTIME
sum True Succeeded 56s 32s
I was not able to reproduce the issue:
kubectl apply -f examples/v1/pipelineruns/pipeline-with-displayname.yaml
k get pr
NAME SUCCEEDED REASON STARTTIME COMPLETIONTIME
sum True Succeeded 4m1s 3m27s
And not able to find the related log:
k logs tekton-pipelines-controller-b8d94569f-tfm4g -n tekton-pipelines | grep 'EnableCELInWhenExpression'
I guess maybe there are some older service are not deleted properly. Maybe try ko delete -R -f config first before installing?
Yup, looks like it, thank you @Yongxuanzhang, @chitrangpatel, and @QuanZhang-William for helping troubleshoot this. I was running the webhook from the latest release with the controller from main. Running a clean delete and reapplying from main.
I was able to resolve this by deleting and recreating using ko. Thanks!
/close
|
2025-04-01T04:35:40.144084
| 2020-06-08T09:10:11
|
634413070
|
{
"authors": [
"vdemeester",
"withlin"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11315",
"repo": "tektoncd/pipeline",
"url": "https://github.com/tektoncd/pipeline/pull/2775"
}
|
gharchive/pull-request
|
Fix missing apostrophe in tasks.md
fix missing apostrophe in tasks.md
Changes
Submitter Checklist
These are the criteria that every PR should meet, please check them off as you
review them:
[ ] Includes tests (if functionality changed/added)
[x] Includes docs (if user facing)
[x] Commit messages follow commit message best practices
/kind misc
|
2025-04-01T04:35:40.158697
| 2023-06-26T10:19:17
|
1774432944
|
{
"authors": [
"iainsproat",
"xinnjie"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11316",
"repo": "tektoncd/results",
"url": "https://github.com/tektoncd/results/issues/512"
}
|
gharchive/issue
|
API OpenAPI spec for Log data is of incorrect type
Expected Behavior
Tekton's OpenAPI specification for a Log specified that data contains a value of type string.
I would expect the OpenAPI specification to match the structure of the data returned by the API.
Actual Behavior
Tekton Result API returns data not as a string, but instead as an object with properties type and value.
Steps to Reproduce the Problem
Return logs, e.g. curl -k -v https://<IP_ADDRESS>/apis/results.tekton.dev/v1alpha2/parents/default/results/5ba13b3a-6470-45b2-9f8f-c9c3c6eeeadb/logs
Observe that the OpenAPI specification does not match the returned structure of the data
Additional Info
Kubernetes version:
Output of kubectl version:
(paste your output here)
Tekton Pipeline version:
Client version: 0.31.1
Pipeline version: v0.48.0
Hi @iainsproat, sorry for the late response.
/v1alpha2/parents/{parent}/results/{result_uid}/logs returns a list of Records with a property named data
of type Any. Any has properties type and value.
So this is as expected.
/close
|
2025-04-01T04:35:40.270170
| 2024-08-14T20:08:54
|
2466740945
|
{
"authors": [
"Viterbo"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11318",
"repo": "telosnetwork/teloscan",
"url": "https://github.com/telosnetwork/teloscan/pull/826"
}
|
gharchive/pull-request
|
#817 | Broken token cards were restyled.
Fixes #817
Description
Broken token cards were restyled. Now they look better and use the whole space available.
Test scenarios
Activate the grid mode on the Tokens tab
https://deploy-preview-826--dev-mainnet-teloscan.netlify.app/address/0xa30b5e3c8Fee56C135Aecb733cd708cC31A5657a?tab=tokens
Screenshots
On samsung galaxy 24
Fixed low resolutions:
|
2025-04-01T04:35:40.343826
| 2019-04-30T11:27:52
|
438729792
|
{
"authors": [
"ArlonAntonius",
"Syourt",
"luceos"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11321",
"repo": "tenancy/multi-tenant",
"url": "https://github.com/tenancy/multi-tenant/issues/793"
}
|
gharchive/issue
|
PHP artisan tenancy:recreate not working
Description
I'm trying to recreate my tenants using the artisan command php artisan tenancy:recreate. Something is going wrong when executing this command, giving me the following Exception:
Argument 2 passed to Hyn\Tenancy\Events\Database\ConfigurationLoading::__construct() must be of the type array, null given, called in /home/vagrant/code/vendor/hyn/multi-tenant/src/Database/Connection.php on line 315
I have inspected the code. It seems that the database connection can't be found inside my database.php config file when issuing the second tenant. This is especially strange because it's the exact same DB connection as the first tenant (tenant.amsterdam).
On line 303 inside the Connection.php file the failing config is loaded. I have var_dumped the result of $website->managed_by_database_connection. This gives me the correct DB connection twice. Only the second time is the $clone variable NULL.
Information
hyn/multi-tenant version: 5.4
laravel version: 5.8
database driver and version: MariaDB (current homestead version 8.1)
webserver software and version: Vagrant homestead
php version: 7.3.1
tenancy.php config
<?php
/*
* This file is part of the hyn/multi-tenants package.
*
* (c) Daniël Klabbers<EMAIL_ADDRESS> *
* For the full copyright and license information, please view the LICENSE
* file that was distributed with this source code.
*
* @see https://laravel-tenancy.com
* @see https://github.com/hyn/multi-tenant
*/
use Hyn\Tenancy\Database\Connection;
return [
/**
* Random key used for tenants database user password
*/
'key' => env('TENANCY_KEY', env('APP_KEY')),
'models' => [
/**
* Specify different models to be used for the global, system database
* connection. These are also used in their relationships. Models
* used have to implement their respective contracts and
* either extend the SystemModel or use the trait
* UsesSystemConnection.
*/
// Must implement \Hyn\Tenancy\Contracts\Hostname
'hostname' => \Hyn\Tenancy\Models\Hostname::class,
// Must implement \Hyn\Tenancy\Contracts\Website
'website' => \App\Models\System\Tenant::class
],
/**
* The package middleware. Removing a middleware here will disable it.
* You can of course extend/replace them or add your own.
*/
'middleware' => [
// The eager identification middleware.
\Hyn\Tenancy\Middleware\EagerIdentification::class,
// The hostname actions middleware (redirects, https, maintenance).
\Hyn\Tenancy\Middleware\HostnameActions::class,
],
'website' => [
/**
* Each website has a short random hash that identifies this entity
* to the application. By default this id is randomized and fully
* auto-generated. In case you want to force your own logic for
* when you need to have a better overview of the complete
* tenants folder structure, disable this and implement
* your own id generation logic.
*/
'disable-random-id' => false,
/**
* The random Id generator is responsible for creating the hash as mentioned
* above. You can override what generator to use by modifying this value
* in the configuration.
*
* @warn This won't work if disable-random-id is true.
*/
'random-id-generator' => Hyn\Tenancy\Generators\Uuid\ShaGenerator::class,
/**
* Enable this flag in case you're using a driver that does not support
* database username or database name with a length of more than 32 characters.
*
* This should be enabled for MySQL, but not for MariaDB and PostgreSQL.
*/
'uuid-limit-length-to-32' => env('LIMIT_UUID_LENGTH_32', false),
/**
* Specify the disk you configured in the filesystems.php file where to store
* the tenants specific files, including media, packages, routes and other
* files for this particular website.
*
* @info If not set, will revert to the default filesystem.
* @info If set to false will disable all tenants specific filesystem auto magic
* like the config, vendor overrides.
*/
'disk' => null,
/**
* Automatically generate a tenants directory based on the random id of the
* website. Uses the above disk to store files to override system-wide
* files.
*
* @info set to false to disable.
*/
'auto-create-tenants-directory' => true,
/**
* Automatically rename the tenants directory when the random id of the
* website changes. This should not be too common, but in case it happens
* we automatically want to move files accordingly.
*
* @info set to false to disable.
*/
'auto-rename-tenants-directory' => true,
/**
* Automatically deletes the tenants specific directory and all files
* contained within.
*
* @see
* @info set to true to enable.
*/
'auto-delete-tenants-directory' => false,
/**
* Time to cache websites in minutes. Set to false to disable.
*/
'cache' => 10,
],
'hostname' => [
/**
* If you want the multi tenants application to fall back to a default
* hostname/website in case the requested hostname was not found
* in the database, complete in detail the default hostname.
*
* @warn this must be a FQDN, these have no protocol or path!
*/
'default' => env('TENANCY_DEFAULT_HOSTNAME'),
/**
* The package is able to identify the requested hostname by itself,
* disable to get full control (and responsibility) over hostname
* identification. The hostname identification is needed to
* set a specific website as currently active.
*
* @see src/Jobs/HostnameIdentification.php
*/
'auto-identification' => env('TENANCY_AUTO_HOSTNAME_IDENTIFICATION', true),
/**
* In case you want to have the tenancy environment set up early,
* enable this flag. This will run the tenants identification
* inside a middleware. This will eager load tenancy.
*
* A good use case is when you have set "tenants" as the default
* database connection.
*/
'early-identification' => env('TENANCY_EARLY_IDENTIFICATION', true),
/**
* Abort application execution in case no hostname was identified. This will throw a
* 404 not found in case the tenants hostname was not resolved.
*/
'abort-without-identified-hostname' => env('TENANCY_ABORT_WITHOUT_HOSTNAME', false),
/**
* Time to cache hostnames in minutes. Set to false to disable.
*/
'cache' => 10,
/**
* Automatically update the app.url configured inside Laravel to match
* the tenants FQDN whenever a hostname/tenants was identified.
*
* This will resolve issues with password reset mails etc using the
* correct domain.
*/
'update-app-url' => false,
],
'db' => [
/**
* The default connection to use; this overrules the Laravel database.default
* configuration setting. In Laravel this is normally configured to 'mysql'.
* You can set a environment variable to override the default database
* connection to - for instance - the tenants connection 'tenants'.
*/
'default' => env('TENANCY_DEFAULT_CONNECTION'),
/**
* Used to give names to the system and tenants database connections. By
* default we configure 'system' and 'tenants'. The tenants connection
* is set up automatically by this package.
*
* @see src/Database/Connection.php
* @var system-connection-name The database connection name to use for the global/system database.
* @var tenants-connection-name The database connection name to use for the tenants database.
*/
'system-connection-name' => env('TENANCY_SYSTEM_CONNECTION_NAME', Connection::DEFAULT_SYSTEM_NAME),
'tenants-connection-name' => env('TENANCY_TENANT_CONNECTION_NAME', Connection::DEFAULT_TENANT_NAME),
/**
* The tenants division mode specifies to what database websites will be
* connecting. The default setup is to use a new database per tenants.
* If using PostgreSQL, a new schema per tenants in the same database can
* be setup, by optionally setting division mode to 'schema'.
* In case you prefer to use the same database with a table prefix,
* set the mode to 'prefix'.
* To implement a custom division mode, set this to 'bypass'.
*
* @see src/Database/Connection.php
*/
'tenant-division-mode' => env('TENANCY_DATABASE_DIVISION_MODE', 'database'),
/**
* The database password generator takes care of creating a valid hashed
* string used for tenants to connect to the specific database. Do
* note that this will only work in 'division modes' that set up
* a connection to a separate database.
*/
'password-generator' => Hyn\Tenancy\Generators\Database\DefaultPasswordGenerator::class,
/**
* The tenants migrations to be run during creation of a tenants. Specify a directory
* to run the migrations from. If specified these migrations will be executed
* whenever a new tenants is created.
*
* @info set to false to disable auto migrating.
*
* @warn this has to be an absolute path, feel free to use helper methods like
* base_path() or database_path() to set this up.
*/
'tenants-migrations-path' => database_path('migrations/tenants'),
/**
* The default Seeder class used on newly created databases and while
* running artisan commands that fire seeding.
*
* @info requires tenants-migrations-path in order to seed newly created websites.
* @info seeds stored in `database/seeds/tenants` need to be configured in your composer.json classmap.
*
* @warn specify a valid fully qualified class name.
*/
'tenants-seed-class' => false,
// eg an admin seeder under `app/Seeders/AdminSeeder.php`:
// 'tenants-seed-class' => App\Seeders\AdminSeeder::class,
/**
* Automatically generate a tenants database based on the random id of the
* website.
*
* @info set to false to disable.
*/
'auto-create-tenants-database' => true,
/**
* Automatically generate the user needed to access the database.
*
* @info Useful in case you use root or another predefined user to access the
* tenants database.
* @info Only creates in case tenants databases are set to be created.
*
* @info set to false to disable.
*/
'auto-create-tenants-database-user' => true,
/**
* Set of database privileges to give to the tenants database user.
*
* @info Useful in case your database restricts the privileges you
* can set (for example AWS RDS).
* @info These privileges are only used in case tenants database users
* are set to be created.
*
* @info null by default means "ALL PRIVILEGES". Override with a list
* of privileges as a string, e.g. 'SELECT, UPDATE'.
*/
'tenants-database-user-privileges' => null,
/**
* Automatically rename the tenants database when the random id of the
* website changes. This should not be too common, but in case it happens
* we automatically want to move databases accordingly.
*
* @info set to false to disable.
*/
'auto-rename-tenants-database' => true,
/**
* Automatically deletes the tenants specific database and all data
* contained within.
*
* @info set to true to enable.
*/
'auto-delete-tenants-database' => env('TENANCY_DATABASE_AUTO_DELETE', false),
/**
* Automatically delete the user needed to access the tenants database.
*
* @info Set to false to disable.
* @info Only deletes in case tenants database is set to be deleted.
*/
'auto-delete-tenants-database-user' => env('TENANCY_DATABASE_AUTO_DELETE_USER', false),
/**
* Define a list of classes that you wish to force onto the tenants or system connection.
* The connection will be forced when the Model has booted.
*
* @info Useful for overriding the connection of third party packages.
*/
'force-tenants-connection-of-models' => [
// App\User::class
],
'force-system-connection-of-models' => [
// App\User::class
],
],
/**
* Global tenants specific routes.
* Making it easier to distinguish between landing and tenants routing.
*
* @info only works with `tenancy.hostname.auto-identification` or identification happening
* before the application is booted (eg inside middleware or the register method of
* service providers).
*/
'routes' => [
/**
* Routes file to load whenever a tenants was identified.
*
* @info Set to false or null to disable.
*/
'path' => base_path('routes/tenants/tenants.php'),
/**
* Set to true to flush all global routes before setting the routes from the
* tenants.php routes file.
*/
'replace-global' => true,
],
/**
* Folders configuration specific per tenants.
* The following section relates to configuration to files inside the tenancy/<uuid>
* tenants directory.
*/
'folders' => [
'config' => [
/**
* Merge configuration files from the config directory
* inside the tenants directory with the global configuration files.
*/
'enabled' => true,
/**
* List of configuration files to ignore, preventing override of crucial
* application configurations.
*/
'blacklist' => ['database', 'tenancy', 'webserver'],
],
'routes' => [
/**
* Allows adding and overriding URL routes inside the tenants directory.
*/
'enabled' => true,
/**
* Prefix all tenants routes.
*/
'prefix' => null,
],
'trans' => [
/**
* Allows reading translation files from a trans directory inside
* the tenants directory.
*/
'enabled' => true,
/**
* Will override the global translations with the tenants translations.
* This is done by overriding the laravel default translator with the new path.
*/
'override-global' => true,
/**
* In case you disabled global override, specify a namespace here to load the
* tenants translation files with.
*/
'namespace' => 'tenants',
],
'vendor' => [
/**
* Allows using a custom vendor (composer driven) folder inside
* the tenants directory.
*/
'enabled' => true,
],
'media' => [
/**
* Mounts the assets directory with (static) files for public use.
*/
'enabled' => true,
],
'views' => [
/**
* Enables reading views from tenants directories.
*/
'enabled' => true,
/**
* Specify a namespace to use with which to load the views.
*
* @eg setting `tenants` will allow you to use `tenants::some.blade.php`
* @info set to null to add to the global namespace.
*/
'namespace' => null,
/**
* If `namespace` is set to null (thus using the global namespace)
* make it override the global views. Disable by setting to false.
*/
'override-global' => true,
]
]
];
webserver.php config
<?php
/*
* This file is part of the hyn/multi-tenants package.
*
* (c) Daniël Klabbers<EMAIL_ADDRESS> *
* For the full copyright and license information, please view the LICENSE
* file that was distributed with this source code.
*
* @see https://laravel-tenancy.com
* @see https://github.com/hyn/multi-tenant
*/
return [
/**
* Apache2 is one of the most widely adopted webserver packages available.
*
* @see http://httpd.apache.org/docs/
* @see https://www.digitalocean.com/community/tutorials/how-to-install-linux-apache-mysql-php-lamp-stack-on-ubuntu
*/
'apache2' => [
/**
* Whether the integration with Apache2 is currently active.
*/
'enabled' => false,
/**
* Define the ports of your Apache service.
*/
'ports' => [
/**
* HTTP, non-SSL port.
*
* @default 80
*/
'http' => 80,
/**
* HTTPS, SSL port.
*
* @default 443
*/
'https' => 443
],
/**
* The generator taking care of hooking into the Apache services and files.
*/
'generator' => \Hyn\Tenancy\Generators\Webserver\Vhost\ApacheGenerator::class,
/**
* The view that holds the vhost configuration template.
*/
'view' => 'tenancy.generators::webserver.apache.vhost',
/**
* Specify the disk you configured in the filesystems.php file where to store
* the tenants vhost configuration files.
*
* @info If not set, will revert to the default filesystem.
*/
'disk' => null,
'paths' => [
/**
* Location where vhost configuration files can be found.
*/
'vhost-files' => [
'/etc/apache2/sites-enabled/'
],
/**
* Actions to run to work with the Apache2 service.
*/
'actions' => [
/**
* Action that asserts Apache2 is installed.
*/
'exists' => '/etc/init.d/apache2',
/**
* Action to run to test the apache configuration.
*
* @set to a boolean to force the response of the test command.
* @info true succeeds, false fails
*/
'test-config' => 'apache2ctl -t',
/**
* Action to run to reload the apache service.
*
* @info set to null to disable reloading.
*/
'reload' => 'apache2ctl graceful'
]
]
],
/**
* Nginx webserver support.
*
* @see http://nginx.org
*/
'nginx' => [
/**
* Whether the integration with nginx is currently active.
*/
'enabled' => false,
/**
* The php sock to be used.
*/
'php-sock' => 'unix:/var/run/php/php7.3-fpm.sock',
/**
* Define the ports of your nginx service.
*/
'ports' => [
/**
* HTTP, non-SSL port.
*
* @default 80
*/
'http' => 80,
/**
* HTTPS, SSL port.
*
* @default 443
*/
'https' => 443
],
/**
* The generator taking care of hooking into the nginx services and files.
*/
'generator' => \Hyn\Tenancy\Generators\Webserver\Vhost\NginxGenerator::class,
/**
* The view that holds the vhost configuration template.
*/
'view' => 'tenancy.generators::webserver.nginx.vhost',
/**
* Specify the disk you configured in the filesystems.php file where to store
* the tenants vhost configuration files.
*
* @info If not set, will revert to the default filesystem.
*/
'disk' => null,
'paths' => [
/**
* Location where vhost configuration files can be found.
*/
'vhost-files' => [
'/etc/nginx/sites-enabled/'
],
/**
* Actions to run to work with the Nginx service.
*/
'actions' => [
/**
* Action that asserts nginx is installed.
*/
'exists' => '/etc/init.d/nginx',
/**
* Action to run to test the nginx configuration.
*
* @info set to a boolean to force the response of the test command.
* true succeeds, false fails
*/
'test-config' => '/etc/init.d/nginx configtest',
/**
* Action to run to reload the nginx service.
*
* @info set to null to disable reloading.
*/
'reload' => '/etc/init.d/nginx reload'
]
]
]
];
Error log
[2019-04-30 12:41:18] local.ERROR: Argument 2 passed to Hyn\Tenancy\Events\Database\ConfigurationLoading::__construct() must be of the type array, null given, called in /home/vagrant/code/vendor/hyn/multi-tenant/src/Database/Connection.php on line 315 {"exception":"[object] (Symfony\\Component\\Debug\\Exception\\FatalThrowableError(code: 0): Argument 2 passed to Hyn\\Tenancy\\Events\\Database\\ConfigurationLoading::__construct() must be of the type array, null given, called in /home/vagrant/code/vendor/hyn/multi-tenant/src/Database/Connection.php on line 315 at /home/vagrant/code/vendor/hyn/multi-tenant/src/Events/Database/ConfigurationLoading.php:49)
[stacktrace]
#0 /home/vagrant/code/vendor/hyn/multi-tenant/src/Database/Connection.php(315): Hyn\\Tenancy\\Events\\Database\\ConfigurationLoading->__construct('database', NULL, Object(Hyn\\Tenancy\\Database\\Connection), Object(App\\Models\\System\\Tenant))
#1 /home/vagrant/code/vendor/hyn/multi-tenant/src/Generators/Webserver/Database/DatabaseGenerator.php(83): Hyn\\Tenancy\\Database\\Connection->generateConfigurationArray(Object(App\\Models\\System\\Tenant))
#2 /home/vagrant/code/vendor/laravel/framework/src/Illuminate/Events/Dispatcher.php(347): Hyn\\Tenancy\\Generators\\Webserver\\Database\\DatabaseGenerator->created(Object(Hyn\\Tenancy\\Events\\Websites\\Created))
#3 /home/vagrant/code/vendor/laravel/framework/src/Illuminate/Events/Dispatcher.php(196): Illuminate\\Events\\Dispatcher->Illuminate\\Events\\{closure}('Hyn\\\\Tenancy\\\\Eve...', Array)
#4 /home/vagrant/code/vendor/hyn/multi-tenant/src/Traits/DispatchesEvents.php(29): Illuminate\\Events\\Dispatcher->dispatch('Hyn\\\\Tenancy\\\\Eve...', Array)
#5 /home/vagrant/code/vendor/hyn/multi-tenant/src/Commands/RecreateCommand.php(70): Hyn\\Tenancy\\Commands\\RecreateCommand->emitEvent(Object(Hyn\\Tenancy\\Events\\Websites\\Created))
#6 /home/vagrant/code/vendor/laravel/framework/src/Illuminate/Database/Concerns/BuildsQueries.php(39): Hyn\\Tenancy\\Commands\\RecreateCommand->Hyn\\Tenancy\\Commands\\{closure}(Object(Illuminate\\Database\\Eloquent\\Collection), 1)
#7 /home/vagrant/code/vendor/hyn/multi-tenant/src/Commands/RecreateCommand.php(73): Illuminate\\Database\\Eloquent\\Builder->chunk(50, Object(Closure))
#8 [internal function]: Hyn\\Tenancy\\Commands\\RecreateCommand->handle(Object(Hyn\\Tenancy\\Database\\Connection), Object(Hyn\\Tenancy\\Repositories\\WebsiteRepository))
#9 /home/vagrant/code/vendor/laravel/framework/src/Illuminate/Container/BoundMethod.php(32): call_user_func_array(Array, Array)
#10 /home/vagrant/code/vendor/laravel/framework/src/Illuminate/Container/BoundMethod.php(90): Illuminate\\Container\\BoundMethod::Illuminate\\Container\\{closure}()
#11 /home/vagrant/code/vendor/laravel/framework/src/Illuminate/Container/BoundMethod.php(34): Illuminate\\Container\\BoundMethod::callBoundMethod(Object(Illuminate\\Foundation\\Application), Array, Object(Closure))
#12 /home/vagrant/code/vendor/laravel/framework/src/Illuminate/Container/Container.php(580): Illuminate\\Container\\BoundMethod::call(Object(Illuminate\\Foundation\\Application), Array, Array, NULL)
#13 /home/vagrant/code/vendor/laravel/framework/src/Illuminate/Console/Command.php(183): Illuminate\\Container\\Container->call(Array)
#14 /home/vagrant/code/vendor/symfony/console/Command/Command.php(255): Illuminate\\Console\\Command->execute(Object(Symfony\\Component\\Console\\Input\\ArgvInput), Object(Illuminate\\Console\\OutputStyle))
#15 /home/vagrant/code/vendor/laravel/framework/src/Illuminate/Console/Command.php(170): Symfony\\Component\\Console\\Command\\Command->run(Object(Symfony\\Component\\Console\\Input\\ArgvInput), Object(Illuminate\\Console\\OutputStyle))
#16 /home/vagrant/code/vendor/symfony/console/Application.php(908): Illuminate\\Console\\Command->run(Object(Symfony\\Component\\Console\\Input\\ArgvInput), Object(Symfony\\Component\\Console\\Output\\ConsoleOutput))
#17 /home/vagrant/code/vendor/symfony/console/Application.php(269): Symfony\\Component\\Console\\Application->doRunCommand(Object(Hyn\\Tenancy\\Commands\\RecreateCommand), Object(Symfony\\Component\\Console\\Input\\ArgvInput), Object(Symfony\\Component\\Console\\Output\\ConsoleOutput))
#18 /home/vagrant/code/vendor/symfony/console/Application.php(145): Symfony\\Component\\Console\\Application->doRun(Object(Symfony\\Component\\Console\\Input\\ArgvInput), Object(Symfony\\Component\\Console\\Output\\ConsoleOutput))
#19 /home/vagrant/code/vendor/laravel/framework/src/Illuminate/Console/Application.php(90): Symfony\\Component\\Console\\Application->run(Object(Symfony\\Component\\Console\\Input\\ArgvInput), Object(Symfony\\Component\\Console\\Output\\ConsoleOutput))
#20 /home/vagrant/code/vendor/laravel/framework/src/Illuminate/Foundation/Console/Kernel.php(122): Illuminate\\Console\\Application->run(Object(Symfony\\Component\\Console\\Input\\ArgvInput), Object(Symfony\\Component\\Console\\Output\\ConsoleOutput))
#21 /home/vagrant/code/artisan(37): Illuminate\\Foundation\\Console\\Kernel->handle(Object(Symfony\\Component\\Console\\Input\\ArgvInput), Object(Symfony\\Component\\Console\\Output\\ConsoleOutput))
#22 {main}
"}
Would you be able to PR a test that confirms this? Also, why are you using dot Amsterdam as a local domain 🤣
I'm not, but if you need some more details I'm happy to share them with you. The tenant.amsterdam is just for testing purposes.
We'll need to confirm this issue exists with a test first. Seems likely that you are right though.
I can temporarily give you access to our Bitbucket repository so you can test the issue, does that help?
@Syourt Would love more details on this bug.
What I can read from the error message is that we're expecting a string in the managed_by_database_connection, from which we then get the connection details from the database.php config, which in your case is not giving a response.
Are you sure that the connection name you provided in the managed_by_database_connection is referencing an actual existing connection name?
Closing this issue due to inactivity.
If you feel like your issue has not been properly addressed yet, or if you still need more information, please reopen this issue 😄
If you require help with a different issue, please open a new issue.
You can also find Tenancy related help & resources on:
Our Website
Our Forums
Our Discord Server
|
2025-04-01T04:35:40.356274
| 2017-01-06T11:57:30
|
199182135
|
{
"authors": [
"ethanfrey",
"jaekwon"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11322",
"repo": "tendermint/go-merkle",
"url": "https://github.com/tendermint/go-merkle/pull/14"
}
|
gharchive/pull-request
|
Add note about Copy/Save in the README
I added a small note to the README about the interplay between Save and Copy. Please take a look here: https://github.com/ethanfrey/go-merkle/blob/readme_copy/README.md
This documentation should close Issue #8
Awesome.
|
2025-04-01T04:35:40.387768
| 2017-11-11T15:28:22
|
273152946
|
{
"authors": [
"R-0ne",
"Wuzzy2"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11323",
"repo": "tenplus1/mobs_redo",
"url": "https://github.com/tenplus1/mobs_redo/issues/129"
}
|
gharchive/issue
|
Swimming mods (fish, squids, etc.) can get stranded
It seems the behaviour of swimming mobs in the water, using floating=1 (e.g. fish) is not really smart.
It often happens that these mobs get stranded and can't move.
Evidence for this can be found in the mod mobs_mc. Watch the squid, guardian or elder guardian in water.
If they are close to the beach, and swim towards it, they climb up the beach and get stuck.
You can provoke this by punching them towards the beach.
The mobs will just climb up the beach as if it were stairs.
This is also the case if you set step_height=0. There doesn't seem to be a workaround.
Hi,
I don't use mobs_mc, but in mobs_water this is solved with:
jump = false,
stepheight = 0.1,
I do not know if it works for you, but can you try?
|
2025-04-01T04:35:40.392714
| 2020-04-14T19:00:35
|
599791408
|
{
"authors": [
"autoih",
"gabrieldemarmiesse"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11324",
"repo": "tensorflow/addons",
"url": "https://github.com/tensorflow/addons/pull/1670"
}
|
gharchive/pull-request
|
Moved run_all_in_graph_and_eager_mode in basic decoder
cc @gabrieldemarmiesse. Dividing into several parts since it's large, and I'm not sure if removing use_gpu is a good idea. The reason is that using a GPU doesn't seem to help much with LSTMs, only when the model is self-attention (transformer) based. May need your suggestions.
About the use of gpu for testing going forward, unless we're testing some code with a custom op, we need a strong reason to run in both cpu and gpu mode.
|
2025-04-01T04:35:40.401700
| 2017-11-22T15:09:27
|
276101738
|
{
"authors": [
"cihangxie",
"recluse27",
"sgfin"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11325",
"repo": "tensorflow/cleverhans",
"url": "https://github.com/tensorflow/cleverhans/issues/316"
}
|
gharchive/issue
|
Best practices for data preprocessing prior to adversarial image generation?
When building neural nets based on ImageNet networks (e.g. ResNet) via transfer learning, my understanding is that best practice is to apply the same preprocessing function that was employed during the training of the original networks. In Keras, for example, the preprocessing function is available as keras.applications.resnet50.preprocess_input, which can be called on new images prior to training/testing the fine-tuned network.
However, the preprocessing function significantly changes the appearance of the images, and we want our adversarial examples to represent small changes to real images, not small changes to preprocessed images. As such, I'm curious about what is considered the standard practice for handling preprocessing in adversarial examples for ImageNet models. Potential options I could see are:
Run standard preprocessing as in keras.applications.resnet50.preprocess_input prior to inputting into the NN and cleverhans, generate corresponding adversarial examples, and then apply a "de-processing" function that reverses the preprocessing function to view the adversarial examples.
Don't apply preprocessing at all
Apply some modified form of preprocessing that is easier to reverse
Any tips?
Normally we will use the first option. After "de-processing", that is the adversarial example you want. You can refer to the load_images() & save_images() at cleverhans/examples/nips17_adversarial_competition/sample_attacks/fgsm/attack_fgsm.py
For the second option, the neural network will not recognize the image correctly, thus the whole process is invalid.
Thanks so much for the reply. I had some mixed results implementing the "de-processing" step -- which is what led me to ask this question in the first place -- but I think I may have finally debugged it. For the benefit of any future passers-by, the issue was that naive "de-processing" code (as implemented in the example shown above) will run into numerical issues if the original images are scaled at 256 (as in jpegs).
To demonstrate this, consider the following function, which simulates running the standard imagenet pre-processing code and then trying to "de-process" it by directly performing complementary math operations:
def depreprocess(x, finalMult=True):
    # Simulate the standard 'tf'-mode ImageNet preprocessing: [0, 255] -> [-1, 1]...
    x /= 255.
    x *= 2.
    x -= 1.
    # ...then the complementary de-processing back to [0, 255] (or [0, 1] if finalMult is False)
    x += 1.
    x /= 2.
    if finalMult:
        x *= 255.
    return x
I'll run it on the same image (once as a png, once as a jpg) with and without the "finalMult" flag. In the case of png, it only works if you do apply the final multiplication. In the case of jpg, it only works if you don't. This makes sense from a numerical standpoint, but could screw people over if they reuse simple code.
Run on a png:
Run on a jpg:
I guess one error in your deproc() function is that you should add the mean values back (since you subtract mean values in the pre-processing).
You can look at the pre-processing code: https://github.com/keras-team/keras/blob/master/keras/applications/imagenet_utils.py#L2, and try to invert this procedure for your deproc() function. Two important variables you should notice are: data_format (channels first or channels last) and mode.
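A minimal sketch of such an inverse for the default 'caffe' mode with channels-last data (the mean values are the ImageNet BGR means used by Keras; adapt it for other modes or data_format):
import numpy as np

def deprocess_caffe(x):
    # Inverse of Keras preprocess_input in 'caffe' mode, channels-last:
    # add the per-channel ImageNet means back, then convert BGR -> RGB.
    x = np.array(x, dtype=np.float64)
    x[..., 0] += 103.939
    x[..., 1] += 116.779
    x[..., 2] += 123.68
    x = x[..., ::-1]
    return np.clip(x, 0.0, 255.0)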
Thanks a lot!
There is still a problem. The input image is de-processed correctly, but it's the opposite situation with the adversarial image.
Take a look, please:
https://github.com/recluse27/diplo/blob/master/deprocessing.ipynb
fgsm_params = {'eps': 0.3, 'clip_min': 0., 'clip_max': 1.} is not valid for you, since your input range is not within [0,1]. You should figure out the right values for clip_min and clip_max. Or, since you are using FGSM, which is a single-step attack, you can first set clip_min to a very small value (e.g., -1000) and clip_max to a very large value (e.g., 1000); after you get your adversarial image, you can then clip the deprocessed image to [0,255].
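For example, a rough sketch of that approach (the eps and bounds here are illustrative only):
# Wide bounds during the single-step attack, since mean-subtracted inputs are roughly in [-124, 152]:
fgsm_params = {'eps': 8.0, 'clip_min': -1000., 'clip_max': 1000.}
# adv_x = fgsm.generate_np(x_preprocessed, **fgsm_params)
# Afterwards, de-process (e.g. with the deprocess_caffe sketch above) and clip to a valid image:
# adv_image = np.clip(deprocess_caffe(adv_x), 0, 255).astype('uint8')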
|
2025-04-01T04:35:40.405846
| 2018-06-08T22:42:25
|
330822925
|
{
"authors": [
"andresusanopinto",
"bzier"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11326",
"repo": "tensorflow/hub",
"url": "https://github.com/tensorflow/hub/pull/72"
}
|
gharchive/pull-request
|
Linkify URL
On GitHub, the URL seems to automatically be linkified, but on the tensorflow site, the URL does not appear as a link.
Thanks for doing a pull request @bzier. I have submitted a similar fix in https://github.com/tensorflow/hub/commit/0010c73051f21b7e7e398c659cea9488793586b1 (so that I could fix all files).
It will take some time until this trickles into tensorflow.org.
Thanks @andresusanopinto. I didn't realize it was floating around in that many places. I had only tracked down the source for the page I was on. I appreciate you moving it forward.
|
2025-04-01T04:35:40.409467
| 2018-02-06T03:45:37
|
294622891
|
{
"authors": [
"ludwigschubert"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11327",
"repo": "tensorflow/lucid",
"url": "https://github.com/tensorflow/lucid/issues/16"
}
|
gharchive/issue
|
Consider eliminating explicit .load_graphdef() call?
Hey @colah @znah ;
should we consider automatically calling load_graphdef when instantiating a modelzoo class?
To me this boils down to:
What can you currently do with an instantiated modelzoo.Model, that you couldn't do with just the class?
It feels like this could simplify the current API, but it may also hide the fact that a graph definition may need to be downloaded.
Looking forward to your opinions!
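To make the trade-off concrete, a rough sketch of what auto-loading could look like (an illustrative skeleton, not the actual lucid code):
class Model:
    def __init__(self, load_graphdef_on_init=True):
        self.graph_def = None
        if load_graphdef_on_init:
            # Hides the (possibly slow) download behind instantiation,
            # so users never have to remember the extra call.
            self.load_graphdef()

    def load_graphdef(self):
        # Placeholder for the real download-and-parse logic.
        self.graph_def = self._download_graphdef()

    def _download_graphdef(self):
        raise NotImplementedError("illustrative skeleton only")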
Thanks for letting me know! Will do a PR soon.
|
2025-04-01T04:35:40.411415
| 2018-07-05T01:52:58
|
338403143
|
{
"authors": [
"Ttl",
"sethtroisi",
"vigor95"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11328",
"repo": "tensorflow/minigo",
"url": "https://github.com/tensorflow/minigo/issues/305"
}
|
gharchive/issue
|
Confused about the label z.
It seems the z label (winner of a game) is directly black (+1) or white (-1) in this implementation. But in the original paper (Mastering the Game of Go without Human Knowledge), z is the winner from the perspective of the current player at step t. It means that if the current player is black and white finally wins, z at this step will be -1.
Am I wrong, or does this implementation also work?
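To make the difference concrete, a small sketch of the two labeling schemes (function names are illustrative):
# Fixed labeling (what this implementation appears to use):
# z is +1 if black wins and -1 if white wins, for every position in the game.
def fixed_label(black_won):
    return 1.0 if black_won else -1.0

# Paper-style labeling: z is the game result from the perspective of the player to move at step t.
def perspective_label(black_won, black_to_move):
    z = 1.0 if black_won else -1.0
    return z if black_to_move else -z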
Hey Vigor,
Andrew and I talked briefly about this. You are totally correct that we don't seem to be labeling the data the same as the paper. We appreciate you noticing this (and taking the time to tell us).
It's not clear if both implementations learn the same thing. We have a small test setup (starting from the initial state and running training over X million examples) that we can test on, but unfortunately it's not a top priority right now; I'll update this thread when I get time to run it.
ELF used the same fixed labeling and it worked pretty well for them. I don't think this is a big issue.
|
2025-04-01T04:35:40.506812
| 2017-07-11T00:47:30
|
241892254
|
{
"authors": [
"mikigom",
"mrry"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11329",
"repo": "tensorflow/tensorflow",
"url": "https://github.com/tensorflow/tensorflow/issues/11422"
}
|
gharchive/issue
|
No clue to inform what 'dimension' arg of argmin or argmax means in API docs
In documentation of TF API 1.2, tf.argmin and tf.argmax have dimension argument.
tf.argmin
tf.argmax
However, there is no explanation of what it means.
The dimension argument is a deprecated synonym for axis. You should use axis in new code.
(The appearance of dimension in the generated docs suggests that this is something we should avoid in the doc generator. I'll assign this to @MarkDaoust, since he's most familiar with the recent advances in our doc generation technology!)
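For example, in TF 1.x:
import tensorflow as tf

x = tf.constant([[1., 5., 3.],
                 [4., 2., 6.]])
row_max_idx = tf.argmax(x, axis=1)  # -> [1, 2]
col_min_idx = tf.argmin(x, axis=0)  # -> [0, 1, 0]
# tf.argmax(x, dimension=1) is the deprecated spelling of the same call.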
|
2025-04-01T04:35:40.520594
| 2017-08-10T04:37:27
|
249230979
|
{
"authors": [
"martinrosevear",
"reedwm"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11330",
"repo": "tensorflow/tensorflow",
"url": "https://github.com/tensorflow/tensorflow/issues/12164"
}
|
gharchive/issue
|
Cannot include '*.pb.h' files in tf_tutorials.cmake
System information
Windows10
VisualStudio 2015
TensorFlow 1.3.0
Python 3.5.3
CMake 3.9.0
I am generating C++ tensorflow 'GPU' version of tf_tutorials_example_trainer example as defined in (https://github.com/tensorflow/tensorflow/tree/r0.12/tensorflow/contrib/cmake) using Cmake and MSbuild.
Describe the problem
Build failing due to missing header files ../contrib/boosted_trees/proto/*.pb.h.
The *.proto files are present, and I can use protoc.exe to manually generate the *.pb.h files. This removes the 'cannot include' errors. BUT now there are linker errors when building tf_tutorials_example_trainer.exe, as it can't find the routines/structures defined in the *.pb.h files.
NOTE: *.proto files in other directories are also not expanded to *.pb.h equivalents.
Source code / logs
"c:\Users\Martin Rosevear\tensorflow\tensorflow\contrib\cmake\build\tf_tutorials_example_trainer.vcxproj" (default target) (1) ->
"C:\Users\Martin Rosevear\tensorflow\tensorflow\contrib\cmake\build\tf_core_kernels.vcxproj" (default target) (104) ->
(ClCompile target) ->
C:\Users\Martin Rosevear\tensorflow\tensorflow/contrib/boosted_trees/lib/trees/decision_tree.h(19): fatal error C1083: Cannot open include file: 'tensorflow/contrib/boosted_trees/proto/tree_config.pb.h': No such file or directory (compiling source file C:\Users\Martin Rosevear\tensorflow\tensorflow\contrib\boosted_trees\lib\learner\common\partitioners\example_partitioner.cc) [C:\Users\Martin Rosevear\tensorflow\tensorflow\contrib\cmake\build\tf_core_kernels.vcxproj]
C:\Users\Martin Rosevear\tensorflow\tensorflow/contrib/boosted_trees/lib/learner/stochastic/stats/node-stats.h(21): fatal error C1083: Cannot open include file: 'tensorflow/contrib/boosted_trees/proto/learner.pb.h': No such file or directory (compiling source file C:\Users\Martin Rosevear\tensorflow\tensorflow\contrib\boosted_trees\lib\learner\stochastic\handlers\categorical-feature-column-handler.cc) [C:\Users\Martin Rosevear\tensorflow\tensorflow\contrib\cmake\build\tf_core_kernels.vcxproj]
C:\Users\Martin Rosevear\tensorflow\tensorflow/contrib/boosted_trees/lib/learner/stochastic/stats/node-stats.h(21): fatal error C1083: Cannot open include file: 'tensorflow/contrib/boosted_trees/proto/learner.pb.h': No such file or directory (compiling source file C:\Users\Martin Rosevear\tensorflow\tensorflow\contrib\boosted_trees\lib\learner\stochastic\handlers\bias-feature-column-handler.cc) [C:\Users\Martin Rosevear\tensorflow\tensorflow\contrib\cmake\build\tf_core_kernels.vcxproj]
C:\Users\Martin Rosevear\tensorflow\tensorflow/contrib/boosted_trees/lib/learner/stochastic/stats/node-stats.h(21): fatal error C1083: Cannot open include file: 'tensorflow/contrib/boosted_trees/proto/learner.pb.h': No such file or directory (compiling source file C:\Users\Martin Rosevear\tensorflow\tensorflow\contrib\boosted_trees\lib\learner\stochastic\handlers\dense-quantized-feature-column-handler.cc) [C:\Users\Martin Rosevear\tensorflow\tensorflow\contrib\cmake\build\tf_core_kernels.vcxproj]
C:\Users\Martin Rosevear\tensorflow\tensorflow/contrib/boosted_trees/lib/utils/dropout_utils.h(21): fatal error C1083: Cannot open include file: 'tensorflow/contrib/boosted_trees/proto/learner.pb.h': No such file or directory (compiling source file C:\Users\Martin Rosevear\tensorflow\tensorflow\contrib\boosted_trees\lib\utils\dropout_utils.cc) [C:\Users\Martin Rosevear\tensorflow\tensorflow\contrib\cmake\build\tf_core_kernels.vcxproj]
C:\Users\Martin Rosevear\tensorflow\tensorflow/contrib/boosted_trees/lib/learner/stochastic/stats/node-stats.h(21): fatal error C1083: Cannot open include file: 'tensorflow/contrib/boosted_trees/proto/learner.pb.h': No such file or directory (compiling source file C:\Users\Martin Rosevear\tensorflow\tensorflow\contrib\boosted_trees\lib\learner\stochastic\handlers\sparse-quantized-feature-column-handler.cc) [C:\Users\Martin Rosevear\tensorflow\tensorflow\contrib\cmake\build\tf_core_kernels.vcxproj]
C:\Users\Martin Rosevear\tensorflow\tensorflow/contrib/boosted_trees/lib/trees/decision_tree.h(19): fatal error C1083: Cannot open include file: 'tensorflow/contrib/boosted_trees/proto/tree_config.pb.h': No such file or directory (compiling source file C:\Users\Martin Rosevear\tensorflow\tensorflow\contrib\boosted_trees\lib\trees\decision_tree.cc) [C:\Users\Martin Rosevear\tensorflow\tensorflow\contrib\cmake\build\tf_core_kernels.vcxproj]
C:\Users\Martin Rosevear\tensorflow\tensorflow/contrib/boosted_trees/lib/models/multiple_additive_trees.h(21): fatal error C1083: Cannot open include file: 'tensorflow/contrib/boosted_trees/proto/tree_config.pb.h': No such file or directory (compiling source file C:\Users\Martin Rosevear\tensorflow\tensorflow\contrib\boosted_trees\lib\models\multiple_additive_trees.cc) [C:\Users\Martin Rosevear\tensorflow\tensorflow\contrib\cmake\build\tf_core_kernels.vcxproj]
@mrry can you take a look?
|
2025-04-01T04:35:40.529970
| 2017-11-13T01:49:50
|
273288200
|
{
"authors": [
"cy89",
"jlebar",
"linearhit",
"mohantym",
"tatatodd",
"tensorflowbutler"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11331",
"repo": "tensorflow/tensorflow",
"url": "https://github.com/tensorflow/tensorflow/issues/14507"
}
|
gharchive/issue
|
XLA reports error with 1000 steps of static_bidirectional_rnn
System information
Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes
OS Platform and Distribution (e.g., Linux Ubuntu 16.04): 16.04
TensorFlow installed from (source or binary): source
TensorFlow version (use command below): 1.2.1 or 1.3
Python version: 2.7.4
Bazel version (if compiling from source): 0.4.5
GCC/Compiler version (if compiling from source): 4.8.5
CUDA/cuDNN version: 5.1
GPU model and memory: M40
Exact command to reproduce:
You can collect some of this information using our environment capture script:
https://github.com/tensorflow/tensorflow/tree/master/tools/tf_env_collect.sh
You can obtain the TensorFlow version with
python -c "import tensorflow as tf; print(tf.GIT_VERSION, tf.VERSION)"
Describe the problem
This issue can only be reproduced when XLA works with static_bidirectional_rnn with 1000 steps, and the "seq_len" of static_bidirectional_rnn must be assigned, which means it works with "dynamic calculation". When the issue is reproduced, it reports:
2017-11-01 18:47:16.497266: E tensorflow/stream_executor/cuda/cuda_driver.cc:731] failed to load PTX text as a module: CUDA_ERROR_NO_BINARY_FOR_GPU
2017-11-01 18:47:16.497294: E tensorflow/stream_executor/cuda/cuda_driver.cc:736] error log buffer (163 bytes): ptxas application ptx input, line 7231; error : Kernel '_fusion_1' exceeds parameter space limit of 4352 bytes
ptxas fatal : Ptx assembly aborted due to error
From my analysis, a fused XLA instruction requires more than 1000 input parameters. This further leads to a PTX kernel with 1000+ parameters, which is not accepted by the CUDA driver.
This is what I found from the PTX ISA documents:
The maximum memory size supported by PTX for normal (non-opaque type) parameters is 4352 bytes. Prior to PTX ISA version 1.5, the maximum size was 256 bytes.
Read more at: http://docs.nvidia.com/cuda/parallel-thread-execution/index.html#ixzz4yGwCVOB7
Source code / logs
Include any logs or source code that would be helpful to diagnose the problem. If including tracebacks, please include the full traceback. Large logs and files should be attached. Try to provide a reproducible test case that is the bare minimum necessary to generate the problem.
@tatatodd could you please either comment or route to the appropriate TensorFlower?
@jlebar might be able to provide some guidance here.
@linearhit can you provide a minimal program that demonstrates the error?
Note that the whole point of static_bidirectional_rnn is that it's fully unrolled, so it's not surprising to me that unrolling 1000 steps might encounter issues; you'll end up with a large graph!
That said, it might be useful to look at exactly why we're ending up with so many parameters. E.g. I'm guessing we end up with one or more parameters per step into this fusion node, and we can probably pack these together into a single tensor.
Well that's fun.
We'll have to change our calling convention in order to fix this. Which is to say, this is a bug, we should fix it, but I'm not sure it will be simple.
@linearhit, a reproducer would be appreciated.
It has been 14 days with no activity and this issue has an assignee. Please update the label and/or status accordingly.
Hi @linearhit! 1.x versions are not supported anymore. Can you please provide simple standalone code in the 2.8 version to replicate the issue?
|
2025-04-01T04:35:40.534072
| 2019-01-22T04:04:01
|
401591061
|
{
"authors": [
"chenlh14",
"shashishekhar",
"tensorflowbutler"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11332",
"repo": "tensorflow/tensorflow",
"url": "https://github.com/tensorflow/tensorflow/issues/25088"
}
|
gharchive/issue
|
InceptionResnetV2 quantization: block35_1/Relu is lacking min/max data
System information
OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Ubuntu 16.04.5 LTS
TensorFlow installed from (source or binary): tensorflow-gpu==1.12.0
Command as follows
tflite_model = tf.contrib.lite.toco_convert(
    frozen_graphdef, [images], [logits], inference_type=tf.contrib.lite.constants.QUANTIZED_UINT8,
    quantized_input_stats=[(127.5, 127.5)])
Graph as follows
Provide the text output from toco_convert
F tensorflow/contrib/lite/toco/tooling_util.cc:1634] Array InceptionResnetV2/InceptionResnetV2/Repeat/block35_1/Relu, which is an input to the MaxPool operator producing the output array InceptionResnetV2/InceptionResnetV2/Mixed_6a/Branch_2/MaxPool_1a_3x3/MaxPool, is lacking min/max data, which is necessary for quantization. If accuracy matters, either target a non-quantized output format, or run quantized training with your model from a floating point checkpoint to change the input graph to contain min/max information. If you don't care about accuracy, you can pass --default_ranges_min= and --default_ranges_max= for easy experimentation.
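For reference, a hedged sketch of the "easy experimentation" path the error message itself suggests, assuming the TF 1.12 tf.contrib.lite.TFLiteConverter API; the frozen-graph path, array names and range values are placeholders, and the resulting accuracy is not meaningful:
```python
import tensorflow as tf

# Sketch only: supply placeholder default ranges for layers that lack
# min/max data so conversion can proceed (accuracy will suffer).
converter = tf.contrib.lite.TFLiteConverter.from_frozen_graph(
    "inception_resnet_v2_frozen.pb",   # placeholder path
    input_arrays=["images"],
    output_arrays=["logits"])
converter.inference_type = tf.contrib.lite.constants.QUANTIZED_UINT8
converter.quantized_input_stats = {"images": (127.5, 127.5)}
converter.default_ranges_stats = (0.0, 6.0)  # placeholder min/max
tflite_model = converter.convert()
open("model_quant.tflite", "wb").write(tflite_model)
```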
It has been 14 days with no activity and the awaiting response label was assigned. Is this still an issue?
Closing due to inactivity, please reopen if you are still facing the issue.
|
2025-04-01T04:35:40.552121
| 2019-02-20T18:23:05
|
412564512
|
{
"authors": [
"alsrgv",
"facaiy",
"ppwwyyxx",
"qlzh727",
"rmothukuru",
"robieta",
"tensorflowbutler"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11333",
"repo": "tensorflow/tensorflow",
"url": "https://github.com/tensorflow/tensorflow/issues/25946"
}
|
gharchive/issue
|
BUG: symbolic layer triggers device creation
System information
Have I written custom code (as opposed to using a stock example script provided in TensorFlow):yes
OS Platform and Distribution (e.g., Linux Ubuntu 16.04):linux ubuntu 16.04
Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on mobile device:n/a
TensorFlow installed from (source or binary):binary
TensorFlow version (use command below):b'v1.13.0-rc2-0-gc865ec5621' 1.13.0-rc2
Python version:3.7
Bazel version (if compiling from source):n/a
GCC/Compiler version (if compiling from source):n/a
CUDA/cuDNN version:10.0 / 7.4.2
GPU model and memory:gtx960M
Describe the current behavior
The following code:
import tensorflow as tf
a = tf.placeholder(tf.float32, [100, 100, 100, 100])
b = tf.layers.Conv2DTranspose(3, 3, data_format='channels_first')
output = b.apply(a)
prints:
2019-02-20 10:20:05.505595: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
2019-02-20 10:20:05.578782: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:998] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2019-02-20 10:20:05.579477: I tensorflow/compiler/xla/service/service.cc:150] XLA service 0x55fd579f65d0 executing computations on platform CUDA. Devices:
2019-02-20 10:20:05.579513: I tensorflow/compiler/xla/service/service.cc:158] StreamExecutor device (0): GeForce GTX 960M, Compute Capability 5.0
2019-02-20 10:20:05.606095: I tensorflow/core/platform/profile_utils/cpu_utils.cc:94] CPU Frequency:<PHONE_NUMBER> Hz
2019-02-20 10:20:05.606746: I tensorflow/compiler/xla/service/service.cc:150] XLA service 0x55fd57b39b00 executing computations on platform Host. Devices:
2019-02-20 10:20:05.606785: I tensorflow/compiler/xla/service/service.cc:158] StreamExecutor device (0): <undefined>, <undefined>
2019-02-20 10:20:05.607093: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1433] Found device 0 with properties:
name: GeForce GTX 960M major: 5 minor: 0 memoryClockRate(GHz): 1.0975
pciBusID: 0000:01:00.0
totalMemory: 1.96GiB freeMemory: 1.92GiB
2019-02-20 10:20:05.607118: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1512] Adding visible gpu devices: 0
2019-02-20 10:20:05.608205: I tensorflow/core/common_runtime/gpu/gpu_device.cc:984] Device interconnect StreamExecutor with strength 1 edge matrix:
2019-02-20 10:20:05.608229: I tensorflow/core/common_runtime/gpu/gpu_device.cc:990] 0
2019-02-20 10:20:05.608240: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1003] 0: N
2019-02-20 10:20:05.608504: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 1742 MB memory) -> physical GPU (device: 0, name: GeForce GTX 960M, pci bus id: 0000:01:00.0, compute capability: 5.0)
It can be seen that it initializes the GPU devices. However this should not happen in symbolic functions.
Initializing the GPU devices has many side effects.
It can lead to different types of failures, such as https://github.com/tensorflow/tensorflow/issues/8136#issuecomment-361727732. The largest side effect is that any GPU-related flags given to a tf.Session created after device initialization will not take effect.
It will also make it much harder to use horovod because horovod requires initializing the GPU in specific ways (with visible_device_list). If a graph with Conv2DTranspose was created before creating the session (which is the standard way of using TF 1.0), horovod will fail to initialize the session. (cc @alsrgv ).
This bug exists for Conv2DTranspose, but not for Conv2D.
This bug exists in 1.13.0rc0. It does not exist in 1.12.0
This bug was introduced in https://github.com/tensorflow/tensorflow/commit/8ef3e7c8c053cb6dad530e13c478bbd406ea2c95.
In fact, the entire keras/backend.py file heavily relies on looking at the available devices.
I'm guessing we'll have to stick with https://github.com/horovod/horovod/blob/master/examples/keras_imagenet_resnet50.py#L59 in a preamble for any Keras API usage.
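For readers landing here, that preamble boils down to pinning the session configuration before any graph or Keras code runs; a rough sketch along the lines of the linked Horovod example (exact options depend on your setup):
```python
import tensorflow as tf
import horovod.keras as hvd

# Configure the session, and therefore the GPUs, before building any layers
# that might probe the available devices.
hvd.init()
config = tf.ConfigProto()
config.gpu_options.allow_growth = True
config.gpu_options.visible_device_list = str(hvd.local_rank())
tf.keras.backend.set_session(tf.Session(config=config))

# Only now build the graph, e.g. layers such as Conv2DTranspose.
```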
3 weeks with no response?
@qlzh727 Are you a good person to look at this?
I am quite occupied right now with some RNN work, but I will reroute this to the correct owner.
It's not obvious to me how one would get around this given that checking devices triggers initialization code if the device is not already initialized. NHWC vs. NCHW device compatibility issues are one of the more common difficulties encountered, hence why we check for it. Ultimately, I think @alsrgv's solution is probably correct: if you need to set specific process level config it will have to be done at the very start.
That said, if you can think of a better solution feel free to suggest it or open a PR.
Device initialization is not the only issue here.
A summary of the cause:
Certain Keras layers call the following function:
def _has_nchw_support():
    explicitly_on_cpu = _is_current_explicit_device('CPU')
    gpus_available = bool(_get_available_gpus())
    return not explicitly_on_cpu and gpus_available
in keras/backend.py. When the function returns False but the layer is called with NCHW format, the layer will apply some format conversions, such as transpose.
There are at least three issues with this approach:
The function _has_nchw_support is clearly wrong.
Many of the involved ops support NCHW on CPUs with an MKL build, and on TPUs.
Consequences: These Keras layers do not behave properly (transpose may be added) on CPUs with MKL build or on TPUs.
Graph construction should be conceptually independent of execution.
-- This IMHO is the core beauty of a graph computation framework.
By looking at available devices for graph construction, it is making an implicit assumption that the graph will be executed on the same device, which is often not a valid assumption.
Consequences: These Keras layers do not behave properly if the graph is not executed on the same device. Examples include:
(1) Creating a graph for deployment (on different machines)
(2) Architecture search (where some worker generates graphs and other workers run it)
(3) Distributed graph with heterogeneous workers, where the whole graph can be constructed on one single worker.
The automatic format conversion, if needed, should be done on the execution level instead.
Looking at GPU devices has side effects. This is an unfortunate fact.
Consequences: After constructing the graph with these Keras layers, users cannot create sessions with custom configs, and as a result cannot use Horovod, set memory fraction, and many others.
Workaround: Create session before graph. But this would break the define-and-run standard paradigm of TF 1.0. Most code using TF is not written like this.
My recommendations:
The first issue obviously needs to be addressed.
For backward compatibility with previous versions, add a switch so that these layers do not look at devices when called from tf.layers, but can look at devices when called from tf.keras.layers.
I personally prefer to see the code crash (rather than secretly transpose many times) when there are no appropriate kernels registered on the devices.
In the long run it's best to not look at devices at all and transform the graph in execution.
The implementation of
def _has_nchw_support():
    explicitly_on_cpu = _is_current_explicit_device('CPU')
    gpus_available = bool(_get_available_gpus())
    return not explicitly_on_cpu and gpus_available
appears to have more bugs than what I pointed out above: it does not handle DeviceSpec, which makes valid code crash, reported in https://github.com/tensorflow/tensorflow/issues/27259 and https://github.com/tensorflow/tensorflow/pull/23197.
These issues do not exist in TF 1.12 when the implementation of Conv2DTranspose is not backed by Keras.
Hi There,
We are checking to see if you still need help on this, as you are using an older version of tensorflow which is officially considered end of life. We recommend that you upgrade to the latest 2.x version and let us know if the issue still persists in newer versions. Please open a new issue for any help you need against 2.x, and we will get you the right help.
This issue will be closed automatically 7 days from now. If you still need help with this issue, please provide us with more information.
@ppwwyyxx,
Sorry for the delayed response. When we execute the code,
import tensorflow as tf
a = tf.placeholder(tf.float32, [100, 100, 100, 100])
b = tf.layers.Conv2DTranspose(3, 3, data_format='channels_first')
output = b.apply(a)
using the latest version of Tensorflow with slight modifications with respect to compatibility, we see that GPUs are no longer initialized.
Please find the Gist of the working code. Thanks!
|
2025-04-01T04:35:40.560807
| 2019-09-06T01:08:19
|
490079445
|
{
"authors": [
"annarev",
"mihaimaruseac",
"wchargin"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11334",
"repo": "tensorflow/tensorflow",
"url": "https://github.com/tensorflow/tensorflow/issues/32270"
}
|
gharchive/issue
|
tf.estimator missing in 2019-09-05 nightlies (and broken in 2019-09-04)
System information
Have I written custom code (as opposed to using a stock example script provided in TensorFlow): Yes
OS Platform and Distribution (e.g., Linux Ubuntu 16.04): gLinux (like Debian)
Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on mobile device: N/A
TensorFlow installed from (source or binary): binary
TensorFlow version (use command below): v1.12.1-10423-g11e22c0 2.0.0-dev20190905 (tf-nightly-2.0-preview==2.0.0.dev20190905)
Python version: 3.6.8rc1
Bazel version (if compiling from source): N/A
GCC/Compiler version (if compiling from source): N/A
CUDA/cuDNN version: N/A
GPU model and memory: N/A
Describe the current behavior
The tf.estimator module appears not to exist:
>>> import tensorflow as tf
>>> tf.estimator
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AttributeError: module 'tensorflow' has no attribute 'estimator'
>>> tf.compat.v2.estimator
<module 'tensorflow_estimator.python.estimator.api._v2.estimator' from '/tmp/tmp.hdyrTvpKDw/ve/lib/python3.6/site-packages/tensorflow_estimator/python/estimator/api/_v2/estimator/__init__.py'>
>>> tf.compat.v1.estimator
<module 'tensorflow_estimator.python.estimator.api._v1.estimator' from '/tmp/tmp.hdyrTvpKDw/ve/lib/python3.6/site-packages/tensorflow_estimator/python/estimator/api/_v1/estimator/__init__.py'>
>>> tf.estimator
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AttributeError: module 'tensorflow' has no attribute 'estimator'
Describe the expected behavior
The tf.estimator module should exist, given that it is documented in
the TF 2.x APIs and has been in previous nightlies.
Code to reproduce the issue
python -c '__import__("tensorflow").estimator'
Other info / logs
N/A
cc @mihaimaruseac @annarev ; this is blocking TensorBoard nightlies
because it causes our smoke tests to fail.
Sorry for the breakage! Tomorrow the issue should be fixed (thanks to Mihai).
Also, today's pip packages were removed this morning.
Great—presumably that’s this commit, then:
https://github.com/tensorflow/tensorflow/commit/18c2cf989a2263ee212fbd5ac0b3085d9450b80a
Thanks @mihaimaruseac and @annarev!
Just tested the new nightly now:
(9) mihaimaruseac@ankh:/tmp/gh/9$ python -c "import tensorflow as tf; print('__'); print(tf.__version__); print('---'); print(tf.keras); print('~~~'); print(tf.estimator)"
__
2.0.0-dev20190906
---
<module 'tensorflow_core.keras' from '/tmp/gh/9/lib/python3.6/site-packages/tensorflow_core/python/keras/api/_v2/keras/__init__.py'>
~~~
<module 'tensorflow_core.estimator' from '/tmp/gh/9/lib/python3.6/site-packages/tensorflow_estimator/python/estimator/api/_v2/estimator/__init__.py'>
All seems good
Yep, and TensorBoard nightlies look good—thanks again! :-)
|
2025-04-01T04:35:40.575825
| 2016-07-21T00:47:17
|
166714108
|
{
"authors": [
"AdamBear",
"BruceDai003",
"Sadrpour",
"concretevitamin",
"dzhyeon",
"irfan-zoefit",
"jiapei100",
"mschonwe",
"yaroslavvb",
"yselivonchyk"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11335",
"repo": "tensorflow/tensorflow",
"url": "https://github.com/tensorflow/tensorflow/issues/3431"
}
|
gharchive/issue
|
Getting "missing dependency declarations" with bazel 0.3.0
I just upgraded Bazel/synced and now I'm getting the same errors as in #1157
bazel build -c opt --config=cuda //tensorflow/cc:tutorials_example_trainer
ERROR: /home/yaroslavvb/tensorflow.git/tensorflow/tensorflow/core/kernels/BUILD:1080:1: undeclared inclusion(s) in rule '//tensorflow/core/kernels:cwise_op_gpu':
this rule is missing dependency declarations for the following files included by 'tensorflow/core/kernels/cwise_op_gpu_floor.cu.cc':
'/usr/local/cuda-8.0/include/cuda_runtime.h'
'/usr/local/cuda-8.0/include/host_config.h'
'/usr/local/cuda-8.0/include/builtin_types.h'
'/usr/local/cuda-8.0/include/device_types.h'
'/usr/local/cuda-8.0/include/host_defines.h'
'/usr/local/cuda-8.0/include/driver_types.h'
'/usr/local/cuda-8.0/include/surface_types.h'
'/usr/local/cuda-8.0/include/texture_types.h'
...
Work-around from #1157 seems to work
add
cxx_builtin_include_directory: "/usr/local/cuda-8.0/include"
to
third_party/gpus/crosstool/CROSSTOOL
Good to know it can be made to work, @yaroslavvb thanks for persisting the workaround.
Weird that you guys aren't hitting this issue on the nightly wheels, unless the nightlies are done with newer Bazel than 0.3.0
Same "missing dependency declarations" problem still exists in head version, thanks @yaroslavvb for the workaround.
Just updated to current version cc3153a7a0a23533d14ead34db37e4ccd7892079 (v0.10.0rc0-785-gcc3153a) and got the usual missing dependency error.
HOWEVER, the new build doesn't create a CROSSTOOL file - so the usual fix doesn't work... I tried copying in the CROSSTOOL from an older revision (with the cxx_builtin_include_directory: "/usr/local/cuda-8.0/include" change) - but I'm still getting the missing dependency error.
Not sure what .tpl files are, but I tried adding the cxx_builtin change in that file, also no luck.
Installed version of CUDA and cuDNN: 8.0.26 and v5
bazel version : 0.3.0
Ubuntu 14.04.4 LTS
3xGTX980Ti
Guys, I am trying to
add
cxx_builtin_include_directory: "/usr/local/cuda-8.0/include"
to
third_party/gpus/crosstool/CROSSTOOL
but I get the following. Where in the CROSSTOOL should I copy-paste that line?
ERROR: java.io.IOException: Could not read the crosstool configuration file 'CROSSTOOL file /home/---/Downloads/tensorflow/third_party/gpus/crosstool/CROSSTOOL', because of a parser error (1:1: Input contains unknown fields and/or extensions:
1:1: com.google.devtools.build.lib.view.config.crosstool.CrosstoolRelease.cxx_builtin_include_directory).
@Sadrpour
Exactly the same error here ... I'm testing on the most current bazel-git, namely, bazel 0.3.2
https://github.com/bazelbuild/bazel
The error message is:
ERROR: java.io.IOException: Could not read the crosstool configuration file 'CROSSTOOL file /tmp/bazel_uawUdYmF/out/external/local_config_cc/CROSSTOOL', because of a parser error (132:42: String missing ending quote.).
cheers
Pei
@Sadrpour @jiapei100 I guess the parsing error was caused because you put the line in the wrong location.
putting it like the below will cause a parsing error.
default_target_cpu: "same_as_host"
cxx_builtin_include_directory: "/usr/local/cuda-8.0/include"
putting it inside toolchain namespace like:
toolchain{
cxx_builtin_include_directory: "/usr/local/cuda-8.0/include"
}
caused no parsing problem, but it still causes the missing dependency problem.
@dzhyeon
I actually posted a very similar issue at https://github.com/bazelbuild/bazel/issues/1996 .
Which file and which line should this line "cxx_builtin_include_directory: "/usr/local/cuda-8.0/include" be added to?
Cheers
Pei
Getting the following error after updating cxx_builtin_include_directory: "/usr/local/cuda-7.5/include"
bazel test -c opt --config=cuda --define using_cuda_nvcc=true --define using_gcudacc=true syntaxnet/... util/utf8/...
................
WARNING: /home/irfan/.cache/bazel/_bazel_irfan/a05fc8a5ac651b688321e83d1f272360/external/org_tensorflow/tensorflow/workspace.bzl:72:5: tf_repo_name was specified to tf_workspace but is no longer used and will be removed in the future.
ERROR: java.io.IOException: Could not read the crosstool configuration file 'CROSSTOOL file /home/irfan/.cache/bazel/_bazel_irfan/a05fc8a5ac651b688321e83d1f272360/external/local_config_cuda/crosstool/CROSSTOOL', because of a parser error (259:1: Input contains unknown fields and/or extensions:
259:1: com.google.devtools.build.lib.view.config.crosstool.CrosstoolRelease.cxx_builtin_include_directory).
INFO: Elapsed time: 6.346s
ERROR: Couldn't start the build. Unable to run tests.
Getting
ERROR: /home/eugene/.cache/bazel/_bazel_eugene/f4d185b1e9ff6ce5ee46265c62746620/external/protobuf_archive/BUILD:265:1: undeclared inclusion(s) in rule '@protobuf_archive//:js_embed':
this rule is missing dependency declarations for the following files included by 'external/protobuf_archive/src/google/protobuf/compiler/js/embed.cc':
'/usr/lib/gcc/x86_64-linux-gnu/6/include/stddef.h'
'/usr/lib/gcc/x86_64-linux-gnu/6/include/stdarg.h'
'/usr/lib/gcc/x86_64-linux-gnu/6/include/stdint.h'
Target //tensorflow/tools/pip_package:build_pip_package failed to build
while compiling TF 1.4 from source. Adding cxx_builtin_include_directory: "/usr/local/cuda-7.5/include" makes no difference.
Bazel0.7.0 Cuda9 CuDNN7
UPDATE:
In case this helps anyone... I was able to get past the issue by setting the CUDA version explicitly when running the configure script.
How to set CUDA version explicitly?
|
2025-04-01T04:35:40.580326
| 2019-11-23T09:16:33
|
527532867
|
{
"authors": [
"NLPpupil",
"alanjuster",
"ymodak"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11336",
"repo": "tensorflow/tensorflow",
"url": "https://github.com/tensorflow/tensorflow/issues/34542"
}
|
gharchive/issue
|
Backward compatibility is terrible. Everything breaks as soon as you upgrade. So frustrating!
This template is for miscellaneous issues not covered by the other issue categories.
For questions on how to work with TensorFlow, or support for problems that are not verified bugs in TensorFlow, please go to StackOverflow.
If you are reporting a vulnerability, please use the dedicated reporting process.
For high-level discussions about TensorFlow, please post to <EMAIL_ADDRESS>. For questions about the development or internal workings of TensorFlow, or if you would like to know how to contribute to TensorFlow, please post to <EMAIL_ADDRESS>.
The file paths keep getting changed back and forth. It's dizzying.
I apologize, but I am having a hard time understanding what the problem is, where the problem is, and what version it affects. Please resubmit and pay attention to the issue template (https://github.com/tensorflow/tensorflow/issues/new/choose). Please provide all the information it asks. Thank you.
LMAO
|
2025-04-01T04:35:40.588187
| 2020-04-14T05:57:10
|
599316405
|
{
"authors": [
"byronyi",
"fesun"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11337",
"repo": "tensorflow/tensorflow",
"url": "https://github.com/tensorflow/tensorflow/issues/38519"
}
|
gharchive/issue
|
ClusterSpec propagation propagates "localhost" to remote
System information
Have I written custom code (as opposed to using a stock example script provided in TensorFlow): No
OS Platform and Distribution (e.g., Linux Ubuntu 16.04): CentOS
Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on mobile device:
TensorFlow installed from (source or binary): source
TensorFlow version (use command below): latest master
Python version: python3.6
Bazel version (if compiling from source):
GCC/Compiler version (if compiling from source): gcc8
CUDA/cuDNN version:
GPU model and memory:
TensorFlow propagates "localhost" instead of the real IP address to the remote.
Demo code:
one is ps
import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()
server = tf.distribute.Server(tf.train.ClusterSpec({"ps" : ["ps_ip_address:5333"]}), job_name="ps", task_index=0, protocol='grpc')
print("start ps")
server.join()
one worker
import tensorflow.compat.v1 as tf
from tensorflow.core.protobuf import config_pb2
from tensorflow.python.training import server_lib
from tensorflow.core.protobuf import cluster_pb2
import time
tf.disable_v2_behavior()
with tf.device("/job:ps/replica:0/task:0"):
a = tf.get_variable("param", [10], tf.float32, initializer=tf.zeros_initializer)
with tf.device("/job:worker/replica:0/task:0"):
update = tf.get_variable("update", [10], tf.float32, initializer=tf.ones_initializer)
add_op = a.assign_add(update)
init_op = tf.initialize_all_variables()
server = tf.distribute.Server({"localhost": ["worker_ip_address:0"]}, protocol="grpc")
cluster_def = cluster_pb2.ClusterDef()
worker_job = cluster_def.job.add()
worker_job.name = 'worker'
worker_job.tasks[0] = server.target[len('grpc://'):]
ps_job = cluster_def.job.add()
ps_job.name = "ps"
ps_job.tasks[0] = "ps_ip_address:5333"
config = config_pb2.ConfigProto(cluster_def=cluster_def,
experimental=config_pb2.ConfigProto.Experimental(share_session_state_in_clusterspec_propagation=True))
with tf.Session(server.target, config=config) as sess:
sess.run(init_op)
print(sess.run(add_op))
The ps and the worker start on different machines. The ps starts without worker device information and relies on cluster spec propagation to propagate the worker device information to the ps.
However, from ps log, worker device is propagated as "localhost" to ps.
2020-04-14 13:30:21.673766: I tensorflow/core/distributed_runtime/rpc/grpc_channel.cc:300] Initialize GrpcChannelCache for job ps -> {0 -> localhost:5333}
2020-04-14 13:30:21.676047: I tensorflow/core/distributed_runtime/rpc/grpc_server_lib.cc:390] Started server with target: grpc://localhost:5333
start ps
2020-04-14 13:36:33.582439: I tensorflow/core/distributed_runtime/rpc/grpc_channel.cc:300] Initialize GrpcChannelCache for job worker -> {0 -> localhost:51798}
2020-04-14 13:36:33.582471: I tensorflow/core/distributed_runtime/rpc/grpc_channel.cc:300] Initialize GrpcChannelCache for job ps -> {0 -> localhost:5333}
So the ps server tries to create a grpc channel to the wrong worker address localhost:51798 and the session run hangs forever.
I tried to replace worker_job.tasks[0] = server.target[len('grpc://'):] with worker_job.tasks[0] = server.target[len('grpc://'):].replace("localhost", "worker_ip_address"), but TF failed to create the session with the following error:
tensorflow.python.framework.errors_impl.InvalidArgumentError: The master (current machine) is not included in the provided cluster_def. job {
name: "worker"
tasks {
key: 0
value: "worker_ip_address:43479"
}
}
job {
name: "ps"
tasks {
key: 0
value: "ps_ip_address:5333"
}
}
I changed the code at https://github.com/tensorflow/tensorflow/blob/master/tensorflow/core/distributed_runtime/master_session.cc#L1355 and replaced all localhost occurrences with the real IP address, and it works. I'm not sure if this change will cause other issues.
Any idea how to fix this generally?
Gently ping @guptapriya; Priya, mind to take a look here?
cc @saeta who seems to implemented this feature in the first place.
Close this as the PR has been merged.
|
2025-04-01T04:35:40.591712
| 2020-04-29T19:54:22
|
609317454
|
{
"authors": [
"St190706025",
"gpapan",
"karimnosseir"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11338",
"repo": "tensorflow/tensorflow",
"url": "https://github.com/tensorflow/tensorflow/issues/39035"
}
|
gharchive/issue
|
TF Lite Hexagon delegate support for snapdragon 865
Please make sure that this is a feature request. As per our GitHub Policy, we only address code/doc bugs, performance issues, feature requests and build/installation issues on GitHub. tag:feature_template
The Snapdragon 865 (Hexagon 698 DSP) is not included in the list of supported Qualcomm SoCs:
https://www.tensorflow.org/lite/performance/hexagon_delegate
Was wondering if you have plans to add support for it to the tflite Hexagon delegate.
The guide page only lists a sample of the supported devices, not all of them.
Did you try running it on the SD 865 and it didn't work?
@karimnosseir I got access to a phone equipped with the SD 865 and confirm that the delegate works after the recent update of hexagon_nn_skel to v. 1.17. Thank you!
What version of tensorflow did you use?
|
2025-04-01T04:35:40.623503
| 2020-12-29T14:16:04
|
775914553
|
{
"authors": [
"andreazignoli",
"max-poltora",
"ravikyram",
"ymodak"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11339",
"repo": "tensorflow/tensorflow",
"url": "https://github.com/tensorflow/tensorflow/issues/46042"
}
|
gharchive/issue
|
Tensorflow related error when deploying shiny app on shinyapps.io
I am trying to deploy a shiny app that uses the reticulate and keras packages. I do not have any problem running it locally, but real troubles appear when I try to deploy it to shinyapps.io. My app.r file is as follows:
virtualenv_dir = Sys.getenv("VIRTUALENV_NAME")
python_path = Sys.getenv("PYTHON_PATH")
reticulate::virtualenv_create(envname = virtualenv_dir, python = python_path)
reticulate::virtualenv_install(virtualenv_dir, packages = c("numpy", "h5py", "scipy", "scikit-image", "pyyaml", "pillow"), ignore_installed = TRUE)
reticulate::use_virtualenv(virtualenv = virtualenv_dir)
library(shiny)
library(keras)
library(reticulate)
library(magick)
library(raster)
library(EBImage)
library(rdrop2)
library(plotly)
np <- import("numpy", convert=FALSE)
ndi <- import("scipy.ndimage", convert=FALSE)
segment <- import("skimage.segmentation", convert=FALSE)
feature <- import("skimage.feature", convert=FALSE)
model = load_model_hdf5("model_v02122020.h5")
ui <-
tagList(
fluidPage(
sidebarLayout(sidebarPanel(
fileInput("upload", "Choose a file", accept = c('image/png', 'image/jpeg')),
actionButton('click', 'Start')
),
mainPanel(
tabsetPanel(type="tabs",
tabPanel("Input image", plotOutput("InputImagePlot", height="100%")),
tabPanel("Output image", plotOutput("OutputImagePlot", height="100%")),
)
)
)
)
)
server <-
function(input, output, session) {
observeEvent(input$click, {
## some code for image processing
})
}
shinyApp(ui = ui, server = server)
My .Rprofile file is as follows (credit to this source):
VIRTUALENV_NAME = "virt_tf"
if (Sys.info()[["user"]] == "shiny"){
# Running on shinyapps.io
Sys.setenv(PYTHON_PATH = 'python3')
Sys.setenv(VIRTUALENV_NAME = VIRTUALENV_NAME) # Installs into default shiny virtualenvs dir
Sys.setenv(RETICULATE_PYTHON = paste0('/home/shiny/.virtualenvs/', VIRTUALENV_NAME, '/bin/python'))
} else if (Sys.info()[["user"]] == "rstudio-connect"){
# Running on remote server
Sys.setenv(PYTHON_PATH = '/opt/python/3.7.6/bin/python')
Sys.setenv(VIRTUALENV_NAME = paste0(VIRTUALENV_NAME, '/')) # include '/' => installs into rstudio-connect/apps/
Sys.setenv(RETICULATE_PYTHON = paste0(VIRTUALENV_NAME, '/bin/python'))
} else {
# Running locally
options(shiny.port = 7450)
Sys.setenv(PYTHON_PATH = 'python 3.6.12')
Sys.setenv(VIRTUALENV_NAME = VIRTUALENV_NAME) # exclude '/' => installs into ~/.virtualenvs/
# RETICULATE_PYTHON is not required locally, RStudio infers it based on the ~/.virtualenvs path
}
The deployment process seems to run to completion according to the R log:
rsconnect::deployApp()
Preparing to deploy application...Update application currently deployed at
https://name.shinyapps.io/appname/? [Y/n] y
DONE
Uploading bundle for application: 3428026...DONE
Deploying bundle: 4035381 for application: 3428026 ...
Waiting for task: 846214175
building: Parsing manifest
building: Building image: 4594673
building: Installing system dependencies
building: Fetching packages
building: Installing packages
building: Installing files
building: Pushing image: 4594673
deploying: Starting instances
terminating: Stopping old instances
Application successfully deployed to https://name.shinyapps.io/appname/
The error I get from the bottom of the log:
Error in value[3L] : Installation of TensorFlow not found.
Python environments searched for 'tensorflow' package:
You can install TensorFlow using the install_tensorflow() function.
/home/shiny/.virtualenvs/virt_tf/bin/python3
Calls: local ... tryCatch -> tryCatchList -> tryCatchOne ->
Execution halted
When I try to include tensorflow in the list of packages required to be installed into my virtual environment, I get the following error message:
Downloading tensorflow-2.3.1-cp35-cp35m-manylinux2010_x86_64.whl (320.4 MB)
Collecting tensorflow
Killed
Calls: local ... tryCatch -> tryCatchList -> tryCatchOne ->
Error in value[3L] :
Error installing package(s): 'numpy', 'h5py', 'scipy', 'scikit-image', 'pyyaml', 'pillow', 'tensorflow'
Execution halted
Out of memory!
As far as I understand, shinyapps.io pushes me to install the tensorflow package into the virtual environment. However, I guess it should be in the list of available packages. But how do I force using it?
@max-poltora
Please fill in the issue template.
Can you refer to this link and see if it helps you? Thanks!
@ravikyram
I filled in the issue template when I created this issue. Or did I fill in the wrong template? Please let me know which one I should fill in then.
Thank you for the link. However, tensorflow, keras and my app run locally with no issues. I get problems when I try to deploy my app. So it should be from the shinyapps server side, or do I misunderstand it?
@max-poltora
Just to understand better, is this issue related to Tensorflow?
I think the issue is related to shinyapps. Please clarify.
Thanks!
@ravikyram
The issue is related to tensorflow when I deploy my app to shinyapps. I get the error message:
Error in value[3L] : Installation of TensorFlow not found
@ravikyram,
Basically, what I can observe is that my app lacks tensorflow in that same environment on shinyapps.io. When I try to install it with, for instance, install_keras(method="virtualenv", envname=virtualenv_dir, tensorflow = "gpu") or reticulate::virtualenv_install(virtualenv_dir, packages = c("numpy", "h5py", "scipy", "scikit-image", "pyyaml", "pillow", "tensorflow"), ignore_installed = TRUE), it just kills the process, because the tensorflow package is too heavy:
2020-12-30T09:34:23.855506+00:00 shinyapps[3428026]: Collecting tensorflow-gpu==2.2.0
2020-12-30T09:34:36.957986+00:00 shinyapps[3428026]: Downloading tensorflow_gpu-2.2.0-cp35-cp35m-manylinux2010_x86_64.whl (516.2 MB)
2020-12-30T09:34:42.960610+00:00 shinyapps[system]: Out of memory!
2020-12-30T09:34:42.943096+00:00 shinyapps[3428026]: Error in value[3L] :
2020-12-30T09:34:42.933600+00:00 shinyapps[3428026]: Killed
2020-12-30T09:34:42.943098+00:00 shinyapps[3428026]: Error installing package(s): 'tensorflow-gpu==2.2.0', 'keras', 'tensorflow-hub', 'h5py', 'pyyaml==3.12', 'requests', 'Pillow', 'scipy'
2020-12-30T09:34:42.943099+00:00 shinyapps[3428026]: Calls: local ... tryCatch -> tryCatchList -> tryCatchOne ->
2020-12-30T09:34:42.943123+00:00 shinyapps[3428026]: Execution halted
RStudio community can be a right platform to raise this issue.
I see you have already raised this issue on RStudio (tagging for visibility)
@ymodak, thank you for the reply. I have indeed posted this issue on Rstudio and was able to make application run, when upgrading my plan to basic. However, now I see another error message in logs:
2021-01-11T13:39:36.408265+00:00 shinyapps[3428026]: 2021-01-11 13:39:36.408173: W tensorflow/stream_executor/platform/default/dso_loader.cc:55] Could not load dynamic library 'libcuda.so.1'; dlerror: libcuda.so.1: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /home/shiny/.virtualenvs/myenv/lib:/opt/R/3.6.1/lib/R/lib:/usr/local/lib:/usr/lib/x86_64-linux-gnu:/usr/lib/jvm/java-8-openjdk-amd64/jre/lib/amd64/server
2021-01-11T13:39:36.408282+00:00 shinyapps[3428026]: 2021-01-11 13:39:36.408229: E tensorflow/stream_executor/cuda/cuda_driver.cc:313] failed call to cuInit: UNKNOWN ERROR (303)
2021-01-11T13:39:36.416118+00:00 shinyapps[3428026]: 2021-01-11 13:39:36.416030: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x1a6da190 initialized for platform Host (this does not guarantee that XLA will be used). Devices:
2021-01-11T13:39:36.408540+00:00 shinyapps[3428026]: 2021-01-11 13:39:36.408481: I tensorflow/core/platform/cpu_feature_guard.cc:143] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 AVX512F FMA
2021-01-11T13:39:36.415464+00:00 shinyapps[3428026]: 2021-01-11 13:39:36.415385: I tensorflow/core/platform/profile_utils/cpu_utils.cc:102] CPU Frequency:<PHONE_NUMBER> Hz
2021-01-11T13:39:36.408294+00:00 shinyapps[3428026]: 2021-01-11 13:39:36.408269: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:156] kernel driver does not appear to be running on this host (614c4c4b9dd3): /proc/driver/nvidia/version does not exist
2021-01-11T13:39:36.416129+00:00 shinyapps[3428026]: 2021-01-11 13:39:36.416087: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (0): Host, Default Version
2021-01-11T13:39:37.193196+00:00 shinyapps[3428026]:
2021-01-11T13:39:37.193199+00:00 shinyapps[3428026]: Listening on http://<IP_ADDRESS>:36677
2021-01-11T13:40:26.655510+00:00 shinyapps[3428026]: Press Esc/Ctrl + C to abort
2021-01-11T13:40:26.657269+00:00 shinyapps[3428026]: https://www.dropbox.com/oauth2/authorize?client_id=mmhfsybffdom42w&redirect_uri=http%3A%2F%2Flocalhost%3A1410%2F&response_type=code&state=tPAJ0o83jt
2021-01-11T13:40:26.655193+00:00 shinyapps[3428026]: Waiting for authentication in browser...
2021-01-11T13:40:26.656945+00:00 shinyapps[3428026]: Please point your browser to the following url:
@ymodak, yes I have found the solution by upgrading my shinyapps plan up to basic. Thank you for your attention.
@max-poltora
Please, close this thread if your issue was resolved. Thanks!
I know I might be late to the party, but I have experienced the same issue when trying to deploy a Tf model on Shiny. As far as I understood after cross-referencing, using Tf inside a Python venv really requires a lot of memory that the Free program of Shiny does not handle. The Tf model in my case was running correctly without a Python venv with the Tf R package, but I understand a Python venv might be required anyway. If this is the case, I tried Tf Lite instead of Tf and it worked out! The only thing is: R does not use float32 (the standard for Tf Lite models), so keep in mind that you need to numpy-convert from float64 to float32 and then set the tensor of the TF Lite interpreter with those values.
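For anyone hitting the same thing, a small sketch of the Python-side calls this describes (through reticulate the same calls can be made from R); the model path and input values are placeholders:
```python
import numpy as np
import tensorflow as tf

# Sketch only: cast float64 (R doubles) to float32 before feeding TF Lite.
interpreter = tf.lite.Interpreter(model_path="model.tflite")  # placeholder
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

x = np.array([[1.0, 2.0, 3.0]])   # arrives as float64 by default
x = x.astype(np.float32)          # most TF Lite models expect float32 input
interpreter.set_tensor(input_details[0]["index"], x)
interpreter.invoke()
prediction = interpreter.get_tensor(output_details[0]["index"])
```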
|
2025-04-01T04:35:40.632711
| 2021-01-28T16:34:11
|
796165043
|
{
"authors": [
"amahendrakar",
"bergen288"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11340",
"repo": "tensorflow/tensorflow",
"url": "https://github.com/tensorflow/tensorflow/issues/46759"
}
|
gharchive/issue
|
How to install Tensorflow on AIX7.2 server
I am trying to install Tensorflow on an AIX 7.2 server. Usually, the "any" wheel file is good on AIX. Otherwise, I may try to build it from a tar.zip source file. However, Tensorflow has neither an "any" wheel file nor a tar.zip source file on the PyPI website. I tried to download the tensorflow-master.zip file here, but the installation attempt failed as there is no setup.py file. Is there any chance I can install Tensorflow on AIX?
Thanks.
@bergen288,
In order to expedite the trouble-shooting process, could you please provide the following information
OS Platform and Distribution (e.g., Linux Ubuntu 16.04):
Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on mobile device:
TensorFlow installed from (source or binary):
TensorFlow version:
Python version:
Installed using virtualenv? pip? conda?:
Bazel version (if compiling from source):
GCC/Compiler version (if compiling from source):
CUDA/cuDNN version:
GPU model and memory:
Also, please take a look at this guide to build TensorFlow from source and check if it helps. Thanks!
Below is the information. I checked your guide. It looks like Bazel is required in order to build Tensorflow from source. Unfortunately, I don't see AIX listed as supported on the Bazel website. Sounds like I can't install Tensorflow on AIX anyway.
OS Platform: AIX7.2
Mobile device: N/A
TensorFlow installed from (source or binary): source (to be installed)
TensorFlow version: 2.4?
Python version: 3.7.9
Installed using virtualenv? pip? conda?: pip
Bazel version (if compiling from source): to be installed (unsupported?)
GCC/Compiler version (if compiling from source): 8.3.0
CUDA/cuDNN version: ?
GPU model and memory: ?
|
2025-04-01T04:35:40.641289
| 2022-07-24T10:19:29
|
1315866514
|
{
"authors": [
"Apprisco",
"tilakrayal"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11341",
"repo": "tensorflow/tensorflow",
"url": "https://github.com/tensorflow/tensorflow/issues/56878"
}
|
gharchive/issue
|
using tf.function on a model call will not utilize gpu nor VRAM but log placement to gpu
Issue Type
Bug
Source
source
Tensorflow Version
2.9.1
Custom Code
Yes
OS Platform and Distribution
Windows 11
Mobile device
No response
Python version
3.9.12
Bazel version
No response
GCC/Compiler version
No response
CUDA/cuDNN version
11.2/8.6
GPU model and memory
3090 24GB
Current Behaviour?
Utilize tf.function on a function that calls model(a,b) (aka model.call)
Although eager tensors have all been allocated to the gpu, actual training doesn't happen on the gpu, but rather on the cpu. I notice 20x slower training speeds than when utilizing nested tf.functions in model.call, which is not allowed in MirroredStrategy.
Standalone code to reproduce the issue
Model.call:
def call(self,x,y):
    vals=self.disc_gradients(x,y)
    self.gen_gradients(x,y)
    return vals
model call wrapper:
@tf.function  # this is the problematic tf.function, if this tf.function is moved to the below tf.functions it works fine.
def run_batch(self,batch_x,batch_y):
    val=self.model(batch_x,batch_y)
    return val
gradients functions:
<EMAIL_ADDRESS>def get_disc_gradients(self,x,y):
    # calc val
    self.disc_optimizer.apply_gradients(disc_gradients,vars)
    return disc_gradients
Obviously this code currently does nothing, it's a minimal reproducible example.
### Relevant log output
```shell
WARNING:tensorflow:Using MirroredStrategy eagerly has significant overhead currently. We will be working on improving this in the future, but for now please wrap `call_for_each_replica` or `experimental_run` or `run` inside a tf.function to get the best performance.
```
The above log is relevant as it tries to make us wrap the entire model.call in a tf.function, which is where I discovered this entire issue. This issue is replicateable w/o using MirroredStrategy: all that matters is that tf.function is wrapping a model call.
No other relevant log output could be found.
The only way to notice the issue is the extremely long batch times, along with the GPU only utilizing 3GB of VRAM compared to a full 20GB when the tf.function is moved.
To be more specific, wrapping a training loop for a nested keras.Model in any way will cause this issue. Wrapping each individual gradient function in @tf.function prevents this issue, but is bad since we can't use MirroredStrategy.
It appears @tf.function placement on nested models matters greatly for gradient calculation: the lack of GPU VRAM usage is from missing gradients, although it doesn't explain the 700s batch times and clear CPU usage over GPU. Something is seriously wrong here, haha.
@Apprisco,
I tried to execute the provided code. Kindly find the gist of it here. The code provided is not complete, hence it would be difficult for us to pinpoint the issue.
Could you share complete stand alone code to replicate the issue or a colab gist with the error reported. Thank you!
It wouldn't be ideal to share my current code as it is for an upcoming research paper. It may take a bit, but let me produce a minimal reproducible example.
Update: turns out my nested models were not models but layers: this is still an issue regarding tf.function, but different scope.
Issue went away when we changed to actually using nested models, not gigantic custom layers. I still believe this be an issue but this is no longer one that concerns us. Especially since we moved on to horovod, not mirroredstrategy.
Error has returned, but again when using nested layers not models haha
|
2025-04-01T04:35:40.687893
| 2016-02-02T03:15:38
|
130558795
|
{
"authors": [
"Billy4195",
"jendap",
"mxrguspxrt"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11342",
"repo": "tensorflow/tensorflow",
"url": "https://github.com/tensorflow/tensorflow/issues/957"
}
|
gharchive/issue
|
Tutorial ImportError: No module named examples.tutorials.mnist.input_data
I am a beginner in tensorflow
install tensorflow with pip
sudo pip install --upgrade https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-0.6.0-cp27-none-linux_x86_64.whl
I want to start with the tutorial "MNIST For ML Beginners"
but the first import gets an error...
import tensorflow.examples.tutorials.mnist.input_data
File "<stdin>", line 1, in <module>
ImportError: No module named examples.tutorials.mnist.input_data
OS:Ubuntu 14.04
run on virtualbox
Stack overflow is perhaps better for these sort of questions. Have you tried
https://www.google.com/search?q=ImportError%3A+No+module+named+examples.tutorials.mnist.input_data
BTW: it seems something is wrong with the python environment. Did pip finish without error? Are you using virtualenv?
thanks for your help
the pip install finished without error and I am not using virtualenv
I will ask on Stack Overflow
Examples are missing from the .whl installer for Mac. (VirtualEnv, python2.7).
Most of the examples do not work, because the import paths are incorrect and have to be changed manually.
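For later readers, a sketch of the import the tutorial expects in wheels that actually ship the examples package (it is not present in every release, in which case upgrading or copying input_data.py locally is the usual fix):
```python
# Sketch: the MNIST tutorial's documented import path.
from tensorflow.examples.tutorials.mnist import input_data

mnist = input_data.read_data_sets("MNIST_data/", one_hot=True)
print(mnist.train.num_examples)
```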
|
2025-04-01T04:35:40.689375
| 2018-09-29T12:59:02
|
365130387
|
{
"authors": [
"lanhin",
"tensorflowbutler"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11343",
"repo": "tensorflow/tensorflow",
"url": "https://github.com/tensorflow/tensorflow/pull/22616"
}
|
gharchive/pull-request
|
Comment fix: cudnn output tensor data_format conversion.
The output tensor data_format conversion in core/kernels/conv_ops.cc should be "from NCHW to NHWC" since the called function is functor::NCHWToNHWC<GPUDevice, T, 4>().
Nagging Assignee @caisq: It has been 14 days with no activity and this issue has an assignee. Please update the label and/or status accordingly.
|
2025-04-01T04:35:40.690381
| 2016-06-24T02:18:36
|
162062809
|
{
"authors": [
"tensorflow-jenkins",
"xmbrst"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11344",
"repo": "tensorflow/tensorflow",
"url": "https://github.com/tensorflow/tensorflow/pull/3015"
}
|
gharchive/pull-request
|
On branch sitenav
Reorganizes tutorial navigation.
Changes to be committed:
modified: tensorflow/g3doc/tutorials/leftnav_files
Can one of the admins verify this patch?
|
2025-04-01T04:35:40.691477
| 2019-09-10T23:55:54
|
491950651
|
{
"authors": [
"autoih"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11345",
"repo": "tensorflow/tensorflow",
"url": "https://github.com/tensorflow/tensorflow/pull/32402"
}
|
gharchive/pull-request
|
Using inexpensive opcodes to remove a list
Using fewer/inexpensive opcodes to remove a list; this is useful when a list is large.
I'll temporarily close this one; try to rewrite this part later. Thanks @alextp.
|
2025-04-01T04:35:40.696153
| 2016-11-17T01:28:07
|
189906980
|
{
"authors": [
"asimshankar",
"gunan"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11346",
"repo": "tensorflow/tensorflow",
"url": "https://github.com/tensorflow/tensorflow/pull/5658"
}
|
gharchive/pull-request
|
Produce binary release tarballs for the TensorFlow C API
These scripts are intended to be run with every release to
produce libtensorflow.tar.gz for CPU and GPU on Linux and OS X
for x86_64 architecture machines.
(Eventually there will be other operating systems and architectures).
These binary releases are then intended to make use of other language
bindings (such as Rust, Haskell, Go) easier
as the common case would be to download the binary C-library release and
avoid the need to build TensorFlow from source (and all the time and
external dependencies doing so entails).
Files:
tensorflow/tools/ci_build/builds/libtensorflow.sh - Baseline common script to build a tarball
tensorflow/osx/libtensorflow_{cpu,gpu}.sh - Build tarballs for OS X with and without GPU support
tensorflow/linux: Has similar top level scripts, but the build happens in a docker container so it contains 4 files - the two top level builds, one shared libtensorflow_docker.sh that is used to build and execute the docker container and libtensorflow.sh which is the script run inside the container.
Jenkins, test this please
The test failures seemed unrelated, retrying.
Jenkins, test this please
both failures are known issues. Merging.
|
2025-04-01T04:35:40.698229
| 2022-09-13T06:25:36
|
1370922229
|
{
"authors": [
"benbarsdell"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11347",
"repo": "tensorflow/tensorflow",
"url": "https://github.com/tensorflow/tensorflow/pull/57676"
}
|
gharchive/pull-request
|
Fix hang bug with cuda_malloc_async allocator
Allocator::DeallocateRaw is called from within a stream callback to ensure stream-aware behavior. However, it is unsafe to call CUDA APIs from inside a stream callback. While BFCAllocator does not call any CUDA APIs in DeallocateRaw, the cuda_malloc and cuda_malloc_async allocators do (cuMemFree and cuMemFreeAsync), and this was observed to cause hangs in several models.
This PR identifies callback-unsafe allocators and calls their DeallocateRaw method directly instead of from the stream callback (the effect is the same, but this way is safe).
cc @nluehr @pjannaty
Closing in favor of https://github.com/tensorflow/tensorflow/pull/57841
|
2025-04-01T04:35:40.701348
| 2017-01-07T02:38:10
|
199335324
|
{
"authors": [
"drpngx",
"gunan",
"jart",
"tensorflow-jenkins",
"yaroslavvb"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11348",
"repo": "tensorflow/tensorflow",
"url": "https://github.com/tensorflow/tensorflow/pull/6710"
}
|
gharchive/pull-request
|
Update jpeg.BUILD
Fixes #6706
Can one of the admins verify this patch?
Jenkins, test this please.
It looks like the issue is caused by the change here:
ERROR: /workspace/tensorflow/core/platform/default/build_config/BUILD:108:1: error loading package '@jpeg//': Encountered error while reading extension file 'third_party/common.bzl': no such package '@org_tensorflow//third_party': error loading package 'external': The repository named 'org_tensorflow' could not be resolved and referenced by '//tensorflow/core/platform/default/build_config:jpeg'.
@jart @damienmg Could you take a look at this change?
@jart should I remove the load statement, and simply redefine the routine in jpeg.BUILD and llvm.BUILD for now? I can add a TODO to fix this later, and we will unblock @yaroslavvb
Yes that's probably the wisest course of action right now, per discussion in related issue.
Uh oh, it seems to failing the build.
I have contacted bazel team to see how we can work around the problem.
In the meantime, I think we should close this PR as we will not be able to accept it.
|
2025-04-01T04:35:40.703281
| 2024-09-04T00:40:24
|
2504062320
|
{
"authors": [
"Jerry-Ge",
"keerthanakadiri"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11349",
"repo": "tensorflow/tensorflow",
"url": "https://github.com/tensorflow/tensorflow/pull/75075"
}
|
gharchive/pull-request
|
[Tosa] Update Sin/Cos operators legalization
with the introduction of tosa.sin and tosa.cos ops
update the legalization to do direct mapping
hi @jpienaar, could you help review/merge this patch? thanks :)
Hi @rdzhabarov, can you help review this?
Hi @jpienaar , Can you please review this PR? Thank you !
Hi @jpienaar , the CI looks good. Can we submit this patch now? Thanks!
|
2025-04-01T04:35:40.728835
| 2024-03-13T01:20:16
|
2182937577
|
{
"authors": [
"diptanu",
"tzumby"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11350",
"repo": "tensorlakeai/indexify",
"url": "https://github.com/tensorlakeai/indexify/issues/405"
}
|
gharchive/issue
|
Returning sources for a RAG
Hi there,
I'm trying to return the content sources when running questions through a basic RAG.
I found this example in langchain that looks very similar to the way you're retrieving answers in the basic RAG example.
def format_docs(docs):
    print(docs)
    return "\n\n".join(doc.page_content for doc in docs)
rag_chain_from_docs = (
    {"context": retriever, "question": RunnablePassthrough.assign(context=(lambda x: format_docs(x["context"])))}
    | prompt
    | model
    | StrOutputParser()
)
When I try to print the context, I get a list of documents:
[Document(page_content="Content 1"), Document(page_content="Content 2")]
But when I add this:
rag_chain_with_source = RunnableParallel(
{"context": retriever, "question": RunnablePassthrough()}
).assign(answer=rag_chain_from_docs)
I get a TypeError: Object of type Document is not JSON serializable
I'm actively debugging this and I feel I'm missing something very basic, but any help would be much appreciated!
@tzumby Looking into this. Thanks for reporting.
@tzumby This looks like an open issue in Langchain and a workaround is mentioned here - https://github.com/langchain-ai/langchain/issues/2222#issuecomment-1911839856
Please let me know if this solves your issue!
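For reference, one way to get past the serialization error is to convert the Document objects to plain dicts before JSON-encoding the result (a minimal sketch, assuming you only need page_content and metadata from each source; the serialize_docs helper and the question string are placeholders, not taken from the linked issue verbatim):
import json

def serialize_docs(docs):
    # Document objects are not JSON serializable, so keep only plain fields.
    return [{"page_content": d.page_content, "metadata": d.metadata} for d in docs]

result = rag_chain_with_source.invoke("What does the report say about revenue?")
payload = json.dumps({
    "answer": result["answer"],
    "sources": serialize_docs(result["context"]),
})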
Thank you @diptanu, I'm going to track this there. I'm taking a step back and trying to understand Langchain a bit more. Once I figure this out, I can contribute some docs to Indexify if you think it would fit in with the rest of the resources.
|
2025-04-01T04:35:40.751289
| 2024-09-24T08:06:55
|
2544671602
|
{
"authors": [
"athul-bos-semi",
"mbahnasTT",
"mywoodstock"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11351",
"repo": "tenstorrent/tt-metal",
"url": "https://github.com/tenstorrent/tt-metal/issues/13039"
}
|
gharchive/issue
|
[Bug Report] Bilinear Upsampling seems to cause a problem
Describe the bug
The program was working until after the Bilinear Upsampling stage. The very next convolution gets stuck in the HWCommandQueue_write_buffer stage according to Tracy. I'm also attaching the results of Tracy for reference.
To Reproduce
Steps to reproduce the behavior:
tt-smi -r 2
python text_unet.py
(You can find the files here)
Expected behavior
The program should run to completion.
Screenshots
Environment Information:
OS: Metal Devcloud accessed through Ubuntu 22.04 client
Device: Wormhole
Version of software: TT-Metal v0.51.0
Additional context
Use Tracy when checking. Also use L1 Buffer analyzer if available.
@athul-bos-semi The example seems to use the deprecated tt_lib library. Which version of tt-metal are you using? It would be better to port it to the latest ttnn API.
I ported it to v0.51.0, but ttnn.reshard does not work. Then I ported it to main, where reshard works, but the convolution throws an L1 memory error for code that was previously working.
but convolution throws L1 memory error for code that was previously working.
Can you please provide repro details on this? Since you say this regression is with the latest main version, we need to look into that issue first.
You can download both the files from here and run the test_unet.py file to reproduce the errors.
The main branch has already changed, and now the error is different. Can you try and run these two files? I am now running on specific versions instead of main to avoid confusion.
@athul-bos-semi please share the updated code that is causing the recent issue.
I've uploaded the new code here that does not use tt-lib anywhere. The bilinear upsampling issue seems to have resolved itself when I switched to v0.52.0, and now the program goes all the way to Upsample 3 but fails due to a memory error. I am now working on fixing it.
Thanks for the update @athul-bos-semi. Can we close this then?
|
2025-04-01T04:35:40.758876
| 2024-11-01T20:03:14
|
2629785265
|
{
"authors": [
"ddilbazTT",
"nsmithtt",
"yugi957"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11352",
"repo": "tenstorrent/tt-metal",
"url": "https://github.com/tenstorrent/tt-metal/issues/14584"
}
|
gharchive/issue
|
embeddings_tilize.cpp ncrisc build failure [Bug Report]
Describe the bug
I am working on lowering the stablehlo gather op to embedding in ttnn. I wrote a test and ran it on device. The op fails in tt-metal. Here is my branch: https://github.com/tenstorrent/tt-mlir/compare/main...ddilbaz/gather Here is the error I see:
In file included from ../kernel_includes.hpp:1,
from /opt/ttmlir-toolchain/venv/lib/python3.10/site-packages/ttrt/runtime/tt_metal/hw/firmware/src/ncrisck.cc:20:
/opt/ttmlir-toolchain/venv/lib/python3.10/site-packages/ttrt/runtime/ttnn/cpp/ttnn/operations/embedding/device/kernels/dataflow/embeddings_tilize.cpp: In lambda function:
/opt/ttmlir-toolchain/venv/lib/python3.10/site-packages/ttrt/runtime/ttnn/cpp/ttnn/operations/embedding/device/kernels/dataflow/embeddings_tilize.cpp:112:46: error: 'token_idx' was not declared in this scope
112 | u.u = (uint32_t)input_l1_ptr[token_idx] << 16;
| ^~~~~~~~~
Always | FATAL | ncrisc build failed
Always | FATAL | Failed to generate binaries for embeddings_tilize TT_THROW @ /localdev/ddilbaz/tt-mlir/third_party/tt-metal/src/tt-metal/tt_metal/jit_build/build.cpp
:479: tt::exception
info:
ncrisc build failed
backtrace:
--- void std::vector<CoreRange, std::allocator<CoreRange> >::_M_realloc_insert<CoreRange const&>(__gnu_cxx::__normal_iterator<CoreRange*, std::vector<CoreRange, std::allocator<CoreRange
> > >, CoreRange const&)
So there are a few things I would like to point out:
Embedding op fails during compile when using bfp16 indices
ttnn/cpp/ttnn/operations/embedding/device/kernels/dataflow/embeddings_tilize.cpp is using an undefined variable
To Reproduce
You can take a look at my branch and pull it.
The way I tested running on device:
cmake -G Ninja -B build -DCMAKE_BUILD_TYPE=Release -DCMAKE_C_COMPILER=clang-17 -DCMAKE_CXX_COMPILER=clang++-17 -DTTMLIR_ENABLE_RUNTIME=ON -DTTMLIR_ENABLE_RUNTIME_TESTS=ON -DCMAKE_CXX_COMPILER_LAUNCHER=ccache -DTTMLIR_ENABLE_STABLEHLO=ON -DTT_RUNTIME_DEBUG=ON
cmake --build build
cmake --build build -- ttrt
ttrt query --save-artifacts
./build/bin/ttmlir-opt --ttir-load-system-desc="path=./ttrt-artifacts/system_desc.ttsys" --ttir-to-ttnn-backend-pipeline test/ttmlir/Silicon/TTNN/embedding/gather_to_embedding.mlir| ./build/bin/ttmlir-translate --ttnn-to-flatbuffer -o out.ttnn
ttrt read out.ttnn
ttrt run out.ttnn
test/ttmlir/Silicon/TTNN/embedding/gather_to_embedding.mlir contents:
// RUN: ttmlir-opt --ttir-to-ttnn-backend-pipeline="system-desc-path=%system_desc_path%" %s > %t.mlir
// RUN: FileCheck %s --input-file=%t.mlir
// RUN: ttmlir-translate --ttnn-to-flatbuffer %t.mlir > %t.ttnn
#any_device = #tt.operand_constraint<dram|l1|scalar|tile|any_device|any_device_tile>
module attributes {} {
func.func @forward(%operand: tensor<32000x1024xbf16>, %start_indices: tensor<1x32xbf16>) -> tensor<1x32x1024xbf16> {
// CHECK: %[[C:.*]] = "ttnn.empty"[[C:.*]]
%0 = tensor.empty() : tensor<1x32x1024xbf16>
// CHECK: %[[C:.*]] = "ttnn.embedding"(%start_indices, %operand, %0) <{operandSegmentSizes = array<i32: 2, 1>, operand_constraints = [#any_device, #any_device, #any_device]}> : (tensor<1x32xbf16>, tensor<32000x1024xbf16>, tensor<1x32x1024xbf16>) -> tensor<1x32x1024xbf16>
%1 = "ttir.gather"(%operand, %start_indices, %0) {
// Specify which dimensions in the output shape correspond to the slice dimensions
offset_dims = array<i64: 2>,
// Specify which dimensions should be collapsed/removed from the slice shape
collapsed_slice_dims = array<i64: 0>,
// Specify which dimensions in operand represent batches
operand_batching_dims = array<i64: 0>,
// Specify which dimensions in start_indices represent batches
start_indices_batching_dims = array<i64: 0>,
// Map from index vector components to input dimensions
start_index_map = array<i64: 0>,
// Which dimension in start_indices contains the gather indices
index_vector_dim = 1 : si64,
// Size of the slice to gather for each dimension
slice_sizes = array<i64: 1, 1024>,
// Whether indices are guaranteed to be sorted
indices_are_sorted = false,
// Any constraints on the operands (implementation specific)
operand_constraints = [#any_device, #any_device, #any_device]
} : (tensor<32000x1024xbf16>, tensor<1x32xbf16>, tensor<1x32x1024xbf16>) -> tensor<1x32x1024xbf16>
return %1 : tensor<1x32x1024xbf16>
}
}
Expected behavior
There needs to be some consistency with the indices format. When using int32, it also fails:
ERROR: test=out.ttnn experienced an error with exception="normal_kernel_cpu" not implemented for 'UInt32'
The indices data format should be checked with an assert and a clear error message before it reaches this level.
The undefined variable in ttnn/cpp/ttnn/operations/embedding/device/kernels/dataflow/embeddings_tilize.cpp should be resolved.
Assigning it to @ntarafdar because it looks related to data movements. I am not on tt-metal so I would appreciate your help with triaging.
@ntarafdar, a simpler repro on your end would probably be to just modify an embedding test to use bfp16 embedding indices.
I was able to reproduce the error using TTNN APIs. I used the test_moe_embedding example in /tt-metal/tests/ttnn/unit_tests/operations/test_embedding.py. By replacing
output_tensor = ttnn.embedding(input_tensor, weights, memory_config=output_mem_config, layout=ttnn.ROW_MAJOR_LAYOUT)
with
output_tensor = ttnn.embedding(input_tensor, weights, memory_config=output_mem_config, layout=ttnn.TILE_LAYOUT)
you can also reproduce the ncrisc error.
If TILE_LAYOUT is not supported, there should be an assert for that instead of an ncrisc build failure.
Here is the error:
def __call__(self, *function_args, **function_kwargs):
> return self.function(*function_args, **function_kwargs)
E RuntimeError: TT_THROW @ /localdev/ddilbaz/tt-metal/tt_metal/impl/program/program.cpp:39: tt::exception
E info:
E Failed to generate binaries for embeddings_tilize TT_THROW @ /localdev/ddilbaz/tt-metal/tt_metal/jit_build/build.cpp:500: tt::exception
E info:
E ncrisc build failed
E backtrace:
E --- /localdev/ddilbaz/tt-metal/ttnn/ttnn/_ttnn.so(+0x78db60) [0x7f467f6bdb60]
E --- /localdev/ddilbaz/tt-metal/build_Release/lib/libtt_metal.so(+0x1a9289) [0x7f467ecff289]
E --- tt::tt_metal::JitBuildState::compile_one(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>> const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>> const&, tt::tt_metal::JitBuildSettings const*, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>> const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>> const&) const
E --- /localdev/ddilbaz/tt-metal/build_Release/lib/libtt_metal.so(+0x1025f3) [0x7f467ec585f3]
E --- /localdev/ddilbaz/tt-metal/build_Release/lib/libtt_metal.so(+0x10683a) [0x7f467ec5c83a]
E --- /localdev/ddilbaz/tt-metal/build_Release/lib/libtt_metal.so(+0x1053dd) [0x7f467ec5b3dd]
E --- /localdev/ddilbaz/tt-metal/build_Release/lib/libtt_metal.so(+0x1048e9) [0x7f467ec5a8e9]
E --- /localdev/ddilbaz/tt-metal/build_Release/lib/libtt_metal.so(+0x10477a) [0x7f467ec5a77a]
E --- /lib/x86_64-linux-gnu/libpthread.so.0(+0x8609) [0x7f473fcd7609]
E --- /lib/x86_64-linux-gnu/libc.so.6(clone+0x43) [0x7f473fe11353]
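For completeness, a standalone sketch of the same repro (hedged: the shapes, dtypes, and device setup here are illustrative assumptions, not copied from the original test):
import torch
import ttnn

device = ttnn.open_device(device_id=0)

# Illustrative shapes; any embedding-style indices/weights pair should do.
indices = ttnn.from_torch(
    torch.randint(0, 32000, (1, 32), dtype=torch.int32),
    dtype=ttnn.uint32, layout=ttnn.ROW_MAJOR_LAYOUT, device=device,
)
weights = ttnn.from_torch(
    torch.randn(32000, 1024, dtype=torch.bfloat16),
    dtype=ttnn.bfloat16, layout=ttnn.ROW_MAJOR_LAYOUT, device=device,
)

# ROW_MAJOR_LAYOUT output works; requesting TILE_LAYOUT hits the ncrisc
# build failure in embeddings_tilize.cpp shown above.
output = ttnn.embedding(indices, weights, layout=ttnn.TILE_LAYOUT)
print(ttnn.to_torch(output).shape)

ttnn.close_device(device)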
The int32 support is coming later, and token_idx has been removed.
|
2025-04-01T04:35:40.764897
| 2020-05-12T05:04:16
|
616361440
|
{
"authors": [
"awbirkner",
"johannesambrosch",
"tentone"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11353",
"repo": "tentone/nunuStudio",
"url": "https://github.com/tentone/nunuStudio/issues/383"
}
|
gharchive/issue
|
Render the background as transparent 0.9.6
Render the background as transparent
Description
Setting transparent on the scene object is not making the background transparent, but instead making it #000000.
Version
0.9.6
Platform
Web Version [X]
Windows [X]
Linux [Unknown]
Hello
This is solely a GUI bug: when you set the scene to transparent, the GUI element resets to #000000, but the scene is actually set to transparent.
If you export the project and show it in a webpage, it will be transparent.
But I could integrate a transparency background pattern in the editor to show this directly. Would that help visually?
Thanks a lot for your feedback.
Cheers
I'll chime in here as well. I've been trying to integrate AR.js into nunuStudio and have faced the same problem: for me the scene is rendered with a black background (0.9.6 and Windows).
Maybe my issue is with the render settings, although I've tried a lot of different combinations. Is there a specific setup to make the scene actually render transparent?
Thanks for the help!
@johannesambrosch Hello
Use the latest web version where this feature has been implemented and you have a "set transparent" button in the scene configuration.
@tentone
I've tried that now too, still got a black canvas background :( Here's everything I've tried step by step:
Create new project, remove skybox, go to scene and hit "Set transparent"
Go to program, check "Alpha"
Go to program, check auto clear flags
Add a new camera, set as active camera, uncheck auto clear flags on program
Re-check clear flags on program and camera
I've published after every step to make sure, but I'm constantly getting a black background. Furthermore, calling app.program.renderer.getClearAlpha() in the browser console always returns 1.
Is there something I'm missing with the scene setup? I'm no three.js expert by any means, so I'm kinda stuck here.
|
2025-04-01T04:35:40.777115
| 2018-05-15T05:58:11
|
323076547
|
{
"authors": [
"lukeed",
"terkelg"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11354",
"repo": "terkelg/tiny-glob",
"url": "https://github.com/terkelg/tiny-glob/pull/23"
}
|
gharchive/pull-request
|
Misc fixes
Fixes the sync version -- it wasn't updated w/ Windows & new globrex changes
Fixes the new "giveup" -- there was a typo & can't strict-compare regex against string
Added extra test for shits & giggles (because this was the use case that was failing for me)
globster ... nice one Terkel.
Thanks a lot Luke!
|