| added (string, date 2025-04-01 04:05:38 to 2025-04-01 07:14:06) | created (timestamp[us], date 2001-10-09 16:19:16 to 2025-01-01 03:51:31) | id (string, length 4 to 10) | metadata (dict) | source (string, 2 classes) | text (string, length 0 to 1.61M) |
|---|---|---|---|---|---|
2025-04-01T04:35:28.530680
| 2016-11-24T18:39:11
|
191577296
|
{
"authors": [
"fthommen",
"jtroehr",
"mmokrejs"
],
"license": "bsd-3-clause",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10702",
"repo": "seqan/flexbar",
"url": "https://github.com/seqan/flexbar/issues/8"
}
|
gharchive/issue
|
cmake / compilation problems (wrong compiler used)
Trying to compile flexbar I am consistently running into compilation issues. I'm using -DCMAKE_CXX_FLAGS to point to the location of our TBB installation and to add -std=c++14 to the compiler. However, the compilation consistently fails, and the problem seems to be that the wrong compiler is used:
$ which gcc
/ibios/tbi_cluster/13.1/x86_64/gcc/gcc-6.x/bin/gcc
$ which c++
/ibios/tbi_cluster/13.1/x86_64/gcc/gcc-6.x/bin/c++
$
but
$ cmake -DCMAKE_PREFIX_PATH=/tmp/flexbar-test -DCMAKE_CXX_FLAGS="-I/ibios/tbi_cluster/13.1/x86_64/tbb/tbb-2017.1/include" DCMAKE_C_COMPILER=`which gcc` .
-- The C compiler identification is GNU 4.8.1
-- The CXX compiler identification is GNU 4.8.1
-- Check for working C compiler: /usr/bin/cc
-- Check for working C compiler: /usr/bin/cc -- works
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Check for working CXX compiler: /usr/bin/c++
-- Check for working CXX compiler: /usr/bin/c++ -- works
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Performing Test COMPILER_SUPPORTS_CXX11
-- Performing Test COMPILER_SUPPORTS_CXX11 - Success
-- Flexbar 64 bit architecture
-- Found ZLIB: /usr/lib64/libz.so (found version "1.2.8")
-- Found BZip2: /usr/lib64/libbz2.so (found version "1.0.6")
-- Looking for BZ2_bzCompressInit in /usr/lib64/libbz2.so
-- Looking for BZ2_bzCompressInit in /usr/lib64/libbz2.so - found
-- Configuring done
-- Generating done
-- Build files have been written to: /home/thommen/projects/flexbar/flexbar-2.7.0
$
Note the wrong - because too old - compiler selected by cmake (/usr/bin/cc, /usr/bin/c++). How can this be fixed?
frank
Note the wrong - because too old - compiler selected by cmake (/usr/bin/cc, /usr/bin/c++). How can this be fixed?
Maybe try this one?
CC=/ibios/tbi_cluster/13.1/x86_64/gcc/gcc-6.x/bin/gcc CXX=/ibios/tbi_cluster/13.1/x86_64/gcc/gcc-6.x/bin/c++ cmake -DCMAKE_PREFIX_PATH=/tmp/flexbar-test -DCMAKE_CXX_FLAGS="-I/ibios/tbi_cluster/13.1/x86_64/tbb/tbb-2017.1/include" -DCMAKE_C_COMPILER=`which gcc`
Yes, I also see:
cd /tmp/flexbar-2.7.0_build/src && /usr/bin/x86_64-pc-linux-gnu-g++ -DSEQAN_HAS_BZIP2=1 -DSEQAN_HAS_ZLIB=1 -I/tmp/flexbar-2.7.0/include -I/usr/include/seqan-2.2 -DNDEBUG -O2 -pipe -maes -mpclmul -mpopcnt -mavx -march=native -std=c++11 -o CMakeFiles/flexbar.dir/Flexbar.cpp.o -c /tmp/flexbar-2.7.0/src/Flexbar.cpp
In file included from /usr/include/seqan-2.2/seqan/basic.h:42:0,
from /tmp/flexbar-2.7.0/src/Flexbar.h:17,
from /tmp/flexbar-2.7.0/src/Flexbar.cpp:22:
/usr/include/seqan-2.2/seqan/platform.h:155:6: error: #error SeqAn requires C++14! You must compile your application with -std=c++14, -std=gnu++14 or -std=c++1y.
#error SeqAn requires C++14! You must compile your application with -std=c++14, -std=gnu++14 or -std=c++1y.
but adding -std=c++14 does not help because CMake adds its own -std=c++11 afterwards anyway
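For reference, a sketch of a configure invocation that pins both compilers and the C++ standard explicitly (same paths as above; whether flexbar's CMakeLists.txt honors CMAKE_CXX_STANDARD rather than appending its own -std flag is an assumption, and the build directory needs a wipe first since CMake caches the compiler it found on the first run):
export CC=/ibios/tbi_cluster/13.1/x86_64/gcc/gcc-6.x/bin/gcc
export CXX=/ibios/tbi_cluster/13.1/x86_64/gcc/gcc-6.x/bin/c++
rm -rf CMakeCache.txt CMakeFiles/   # CMake remembers the first compiler it detected
cmake -DCMAKE_C_COMPILER="$CC" -DCMAKE_CXX_COMPILER="$CXX" \
      -DCMAKE_CXX_STANDARD=14 \
      -DCMAKE_PREFIX_PATH=/tmp/flexbar-test \
      -DCMAKE_CXX_FLAGS="-I/ibios/tbi_cluster/13.1/x86_64/tbb/tbb-2017.1/include" .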
Is this issue still prevalent, also with version 3.0 of Flexbar?
|
2025-04-01T04:35:28.540475
| 2016-02-23T12:56:52
|
135729777
|
{
"authors": [
"akanto",
"jenkins-sequenceiq",
"mhmxs"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10703",
"repo": "sequenceiq/cloudbreak",
"url": "https://github.com/sequenceiq/cloudbreak/pull/1335"
}
|
gharchive/pull-request
|
CLOUD-53131 Replace SubscriptionAlreadyExistException to logging
@akanto plz review
LGTM
Jenkins build finished, all tests passed.
Refer to this link for build results: http://ci.sequenceiq.com/job/cloudbreak-pull-request/2137/
|
2025-04-01T04:35:28.544427
| 2016-05-02T06:57:18
|
152491940
|
{
"authors": [
"antrik",
"erickt"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10704",
"repo": "serde-rs/quasi",
"url": "https://github.com/serde-rs/quasi/pull/41"
}
|
gharchive/pull-request
|
Update for latest libsyntax changes
Adapt to the changed interfaces, and bump the version number. (And of course also bump the dependencies on syntex and aster to enable the update.)
This one depends on https://github.com/serde-rs/aster/pull/78 and https://github.com/serde-rs/syntex/pull/44
(I.e. CI will also fail until the new dependencies are published.)
Thanks again! I'll wait for everything to go green on travis before I release.
|
2025-04-01T04:35:28.547302
| 2017-11-03T16:06:42
|
271031545
|
{
"authors": [
"dtolnay",
"epage"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10705",
"repo": "serde-rs/serde",
"url": "https://github.com/serde-rs/serde/issues/1078"
}
|
gharchive/issue
|
Lifetime errors with skipped &'static variable
I needed to evolve my format, so I went from (simplified)
#[derive(Debug, PartialEq)]
#[derive(Serialize, Deserialize)]
#[serde(deny_unknown_fields)]
pub struct SassOptions {
#[serde(skip)]
pub import_dir: &'static str,
pub style: SassOutputStyle,
}
impl Default for SassOptions {
fn default() -> SassOptions {
...
}
}
#[derive(Debug, PartialEq)]
#[derive(Serialize, Deserialize)]
#[serde(deny_unknown_fields, default)]
pub struct ConfigBuilder {
#[serde(skip)]
pub root: path::PathBuf,
pub source: String,
pub destination: String,
pub sass: sass::SassOptions,
}
to
#[derive(Debug, PartialEq)]
#[derive(Serialize, Deserialize)]
#[serde(deny_unknown_fields)]
pub struct SassOptions {
#[serde(skip)]
pub import_dir: &'static str,
pub style: SassOutputStyle,
}
impl Default for SassOptions {
fn default() -> SassOptions {
...
}
}
#[derive(Debug, PartialEq, Default)]
#[derive(Serialize, Deserialize)]
#[serde(deny_unknown_fields)]
pub struct AssetsBuilder {
pub sass: sass::SassOptions,
}
#[derive(Debug, PartialEq)]
#[derive(Serialize, Deserialize)]
#[serde(deny_unknown_fields, default)]
pub struct ConfigBuilder {
#[serde(skip)]
pub root: path::PathBuf,
pub source: String,
pub destination: String,
pub assets: AssetsBuilder,
}
and I started getting lifetime errors
error[E0495]: cannot infer an appropriate lifetime for lifetime parameter `'de` due to conflicting requirements
--> src/config.rs:95:21
|
95 | #[derive(Serialize, Deserialize)]
| ^^^^^^^^^^^
|
note: first, the lifetime cannot outlive the lifetime 'de as defined on the impl at 95:21...
--> src/config.rs:95:21
|
95 | #[derive(Serialize, Deserialize)]
| ^^^^^^^^^^^
note: ...so that types are compatible (expected std::convert::From<<__A as serde::de::MapAccess<'_>>::Error>, found std::convert::From<<__A as serde::de::MapAccess<'de>>::Error>)
--> src/config.rs:95:21
|
95 | #[derive(Serialize, Deserialize)]
| ^^^^^^^^^^^
= note: but, the lifetime must be valid for the static lifetime...
note: ...so that types are compatible (expected serde::Deserialize<'_>, found serde::Deserialize<'static>)
--> src/config.rs:95:21
|
95 | #[derive(Serialize, Deserialize)]
| ^^^^^^^^^^^
= note: this error originates in a macro outside of the current crate
error: aborting due to 2 previous errors
error: Could not compile `cobalt-bin`.
Situations
Adding a layer caused the warning
Switching import_dir to String removed the error
Adding a member to AssetsBuilder did not fix things.
Original source: https://github.com/cobalt-org/cobalt.rs
Broken source: not uploaded at the moment
Broken source: https://github.com/epage/cobalt.rs/tree/serde-broken
Thanks for the detailed report! This is a bug. I have a fix in #1079 and will try to get a release out in a few minutes.
I released serde_derive 1.0.18 with a fix and confirmed that your code compiles.
Thanks!
|
2025-04-01T04:35:28.548744
| 2018-08-11T04:18:48
|
349704389
|
{
"authors": [
"Lokathor",
"pickfire"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10706",
"repo": "serde-rs/serde",
"url": "https://github.com/serde-rs/serde/issues/1352"
}
|
gharchive/issue
|
Please add support for NonZeroU128
It seems that NonZeroU128 isn't supported, even though NonZeroU64 is.
@Lokathor Maybe it might be useful to check https://github.com/serde-rs/serde/issues/1136; serde support for rustc minimum version 1.13+ does not have 128-bit integers.
Yeah, seems they used a special build setup that detects the minimum rustc version and then throws in the extra impl if rustc is a high enough version. I guess the same could be done for this type as well.
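A minimal sketch of that kind of build-script gate (not serde's actual build.rs; the 1.26 cutoff is for 128-bit integers generally and is illustrative, the exact version needed for NonZeroU128 may differ):
// build.rs: probe the rustc version and emit a cfg flag for 128-bit support.
use std::process::Command;

fn main() {
    let out = Command::new("rustc").arg("--version").output().unwrap();
    let version = String::from_utf8(out.stdout).unwrap();
    // "rustc 1.26.0 (...)": take the minor version between the first two dots.
    let minor: u32 = version.split('.').nth(1).unwrap().parse().unwrap();
    if minor >= 26 {
        println!("cargo:rustc-cfg=integer128");
    }
}
The extra impls are then wrapped in #[cfg(integer128)] so older toolchains never see the 128-bit types.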
|
2025-04-01T04:35:28.552782
| 2023-11-19T00:02:41
|
2000626376
|
{
"authors": [
"Kyuuhachi",
"Mingun"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10707",
"repo": "serde-rs/serde",
"url": "https://github.com/serde-rs/serde/issues/2651"
}
|
gharchive/issue
|
The Stunt Double Problem
It is very common that a derived De/Serialize implementation is almost what you want, but not quite — for example doing manual processing if the input is a string but using the derived one if it's a map/struct, or doing post-processing to fill in a skipped field based on some other fields. In cases like these, you want the derived functions so that you can call them, but you do not want the derived trait impls because you want to implement them yourself.
Remote derives are useful for this. In particular, #[serde(remote = "Self")] places the functions as inherent on the type instead of in the trait impl, which sounds like exactly what we want... Except they're pub, and they shadow the functions from the trait. This means that calling MyType::deserialize(des) will call the inherent (derived) function instead of the trait (customized) one. Oops!
The common solution, as far as I know, is to create an exact copy of the type, which I like to call a "stunt double", which you derive the traits on (tagged with remote = "MyType" to skip manual conversion). This causes a burden on maintenance, since the two types need to be kept in sync[^sync], in addition to being a pain to write and read.
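For concreteness, a minimal sketch of that pattern (type and field names are hypothetical), where the double carries the derives and the hand-written trait impl delegates to it:
use serde::{Deserialize, Serialize};

pub struct MyType {
    pub name: String,
}

// The "stunt double": an exact field-for-field copy that carries the derives.
// remote = "MyType" makes the derived functions convert to/from the real type.
#[derive(Serialize, Deserialize)]
#[serde(remote = "MyType")]
struct MyTypeDouble {
    name: String,
}

// The customized impl calls the derived function and then post-processes.
impl<'de> Deserialize<'de> for MyType {
    fn deserialize<D: serde::Deserializer<'de>>(deserializer: D) -> Result<Self, D::Error> {
        let mut value = MyTypeDouble::deserialize(deserializer)?;
        value.name.make_ascii_lowercase(); // hypothetical post-processing step
        Ok(value)
    }
}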
<flawed proposal> All it would take to solve this is some customization over where the remote functions are placed. My suggestion would be that in addition to allowing remote to take a type, it would also accept mod module_name, with an optional visibility specifier. This would behave similarly to remote = "Self", except the functions are placed inside a newly created module instead of an inherent impl on the type itself. </flawed proposal>
As I was writing the above paragraph, I realized that it won't be that easy — the serialize and deserialize derives are run separately, so it'd try to create the module twice. Gonna need a more complex solution, I guess.
[^sync]: The compiler will catch you if you forget to sync a struct field, but adding an enum variant without updating the stunt double is a bug that can easily go uncaught for a while.
Why do you want to specify a module? It seems that just customizing the visibility would be enough for your use case. Maybe also add an ability to customize the generated name.
Non-pub would still shadow the trait functions locally, though. Not as bad as having it public, but still a bit of an unnecessary risk. Customizing name plus visibility would probably be sufficient, yeah.
|
2025-04-01T04:35:28.561912
| 2023-07-27T10:16:24
|
1824063573
|
{
"authors": [
"ZJaume",
"serega"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10708",
"repo": "serega/gaoya",
"url": "https://github.com/serega/gaoya/issues/26"
}
|
gharchive/issue
|
Indexing without storing signatures
Hi,
First of all, many thanks for this library, it is helping me a lot. I'm using it to do near-dedup with MinHash of very large collections of text (tens of TB or even a hundred TB, compressed size) and I'm constrained by the amount of RAM available. So, I'm doing some modifications to address this. The first one was to have index objects that store only one of the bands, so I could do a distributed index on different machines. But now I'm wondering if it would be possible to avoid storing all the signatures in id_signatures: HashMap, therefore only storing ids. As far as I understood from the code, to be able to query a document and return matches, the band is needed, and the id_signatures would only be needed if return similarity is requested or if I need to do queries by id. Am I right?
Not asking for you to implement it, just wanted to double check if this is feasible.
Thanks in advance,
Jaume
Hello Jaume. Glad gaoya is helping you.
I am using gaoya myself in production, but my data sets are smaller. They fit in a few tens of GB. If you do not need similarity and do not query by id, and you are ok with a small percentage of false positives, then it is OK to have a distributed index. id_signatures is also used to filter candidates returned from bands by the threshold. See here and here
Each band stores only a small portion of the full minhash signature. It is possible to have two very different signatures that would hash into exactly the same location in the band. datasketch, another very popular minhash implementation, does not store full signatures, which may result in false positives. See comment from the author here and here
I do not have any data to tell the percentage of false positives, but it is small. MinHashing itself is a probabilistic algorithm, so even with full signatures there can be false positives.
Short answer is - yes, it is feasible
Many thanks for your explanation! I will probably explore this possibility.
Hi again,
I finally implemented distributed index, storage without signatures and compute connected components of duplicates with Union-Find.
Basically my implementation is doing the same as text-dedup but with Rust and Gaoya.
With it, I was able to deduplicate 13B documents of English with 1TB RAM nodes for the latest HPLT release.
Probably my code breaks the design of the library a little bit (I'm not well versed in Rust) or goes beyond its scope, but if there is any change that you are interested in, I'm happy to contribute.
Hi @ZJaume. I am not sure where you committed. Do you have a fork that is public ?
Sorry, I don't know why the links were wrong. The commits are in my fork, large_dedup branch:
https://github.com/zjaume/gaoya/commit/d0c54b393f7183fbee9f1e0a61c11c081d9cad6a
and
https://github.com/zjaume/gaoya/commit/ebeb181882b8af5783772338ce0f49083d810ec4
Hi @ZJaume . I finally found time to look at your branch. You may have noticed I have a clustering folder with a clustering algorithm implemented. In fact, I use a variation of clusterer_parallel.rs with great success. The algorithm proceeds through all points and calls query on the index to find matches.
If I understand correctly, the idea of Union-Find is to take advantage of the fact that the MinHash index creates a structure that sort of pre-clusters data. Every point is stored in every band b0, b1, b2, ..., bB. All points in a single band entry are part of the same cluster by construction. We start with band b0 and entry b0[0], which may have points [p0, p3, p19]. These three points belong to the same cluster. The point p0 may be stored alone in b1 as b1[8]: [p0], and stored in b2 as b2[5]: [p0, p9], so we add p9 to the cluster. The cool thing is that we don't have to chase points. Instead we just go through all tables and call Union-Find on every point.
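As a standalone sketch of that union-find step over a band entry (plain Rust, not gaoya's actual types; points are assumed to be indexed 0..n):
struct UnionFind {
    parent: Vec<usize>,
}

impl UnionFind {
    fn new(n: usize) -> Self {
        UnionFind { parent: (0..n).collect() }
    }
    fn find(&mut self, x: usize) -> usize {
        if self.parent[x] != x {
            let root = self.find(self.parent[x]);
            self.parent[x] = root; // path compression
        }
        self.parent[x]
    }
    fn union(&mut self, a: usize, b: usize) {
        let (ra, rb) = (self.find(a), self.find(b));
        if ra != rb {
            self.parent[rb] = ra;
        }
    }
    // All points sharing one band entry belong to the same cluster.
    fn union_entry(&mut self, points: &[usize]) {
        for pair in points.windows(2) {
            self.union(pair[0], pair[1]);
        }
    }
}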
I think asymptotically both algorithms are very similar. Correct me if I am wrong in my reasoning. Let's take the worst case and use a dataset of size N with no duplicates. We construct the index with B bands with N entries in each band. The naive algorithm I use would iterate through each point p0, p1,.. pN and the corresponding signatures and do a lookup in every band with a portion of the signature. The running time is N * B.
The UnionFind algorithm goes through every band and calls Union-Find on every point, so the running time is B * N. UnionFind should be faster in practice because B * N hash map lookups are more expensive than iteration. The UnionFind algorithm also avoids constructing BandKey. It would be interesting to benchmark both algorithms.
The main advantage of UnionFind for distributed clustering is there is no need to store signatures in one place. However, as already mentioned ignoring full signatures may result in a higher false positive rate, but checking against full signatures can be done with post processing.
I would be happy to have the UnionFind algorithm implemented in Gaoya, as long as it is done in a non-intrusive manner and is optional. I don't know though if UnionFind would be more efficient for my use-case. I use Gaoya for continuous clustering of streaming data. MinHashIndex with HashSetContainer is being mutated as new points are added to the index, and removed when a cluster is found. I do not cluster the whole dataset on every iteration, and use new points as entry points plus a small percentage of random points for the clustering algorithm.
A random idea came to me. To deduplicate a very large dataset using MinHash there is no need for 1TB of RAM. The find_clusters function iterates through all bands in sequential order, so only one band is needed at a time. So, I think it is possible to proceed as follows in pseudocode
Step 1 - create minhash signatures and store them in a file in some format
for (doc_id, doc) in documents:
    signature = create_signature(doc);
    write_signature_to_file((doc_id, signature));
Step 2 - create bands one at a time and store band on disk
for b in 0..num_bands:
    band = create_band()
    for (doc_id, signature) in read_doc_id_signatures_from_file():
        band.insert(doc_id, signature)
    write_band_on_disk(band);
Step 3 - iterate through every band and call the union-find operation on every item
for band in bands:
    for entry in read_band_from_disk(band):
        union_find_entry(entry)
I didn't specify how to store the data on disk, but I don't think it matters much. With serde it is very easy to serialize and deserialize data in virtually any format. bincode works very well.
So, instead of using a machine with 1T RAM it would be possible to deduplicate on a machine with a small fraction of the RAM and a bigger disk. As I am writing this I think steps 2 and 3 can be done in sequential order, one band at a time.
union_find = UnionFind()
for b in 0..num_bands:
    band = new_band()
    for (doc_id, signature) in read_doc_id_signatures_from_file():
        band.insert(doc_id, signature)
    for entry in band:
        union_find.union(entry)
For one time deduplication there is no need to keep all bands around. Working with disk files would be slower than working with RAM, but it would be much more cost effective.
I have just found your project. The original links you gave me were not correct
|
2025-04-01T04:35:28.567834
| 2022-09-02T17:36:02
|
1360456321
|
{
"authors": [
"codenoid",
"pjbh",
"serengil"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10709",
"repo": "serengil/deepface",
"url": "https://github.com/serengil/deepface/issues/551"
}
|
gharchive/issue
|
Supress face detector output?
Thank you so much for this amazing piece of work. I downloaded the latest version to get the newly added models. Now the face detectors give an output, a lot of it in the case of mtcnn, which they did not do before. Is there a way to tell them not to?
Thanks, Peter
what do you mean exactly? can you share your program and output sample?
DeepFace.verify(filelist[0], filelist[1], model_name = "Facenet")
1/1 [==============================] - 1s 887ms/step
1/1 [==============================] - 0s 39ms/step
It used not to give this output
You can use this arg: https://github.com/serengil/deepface/blob/master/deepface/DeepFace.py#L95
Like this? Doesn't seem to have any effect
for file in filelist[1:len(filelist)]:
result = DeepFace.verify(filelist[0], file, model_name = "ArcFace", prog_bar = False)
print(file + ' ' + str(result["verified"]) + ' ' + str(result["distance"]))
1/1 [==============================] - 0s 63ms/step
1/1 [==============================] - 0s 56ms/step
C:/Users/pjbh2/anaconda3/envs/deepface1\AnnaFriel0001.jpg True 0.376147657150524
interesting because it should work.
what is the content of file list?
Just a few face jpg files, to test things out, so I can see what happens with different models and detectors. It's no biggie for me, just clutters up the output and used not to happen. I still have the old environment if I don't want the new models (Facenet512 appears to give identical results to Facenet, anyhow). To try this version, I set up a fresh env in Anaconda, using Python 3.9.13
If you pass just an image to the verify function, then the progress bar is silent. I think your trouble is coming from a dependency such as the tqdm version. I am closing this issue because I cannot reproduce it.
I had this issue in tensorflow 2.10.0 and I could not resolve it
when I downgraded tensorflow to 2.7.4, this was sorted.
This is weird; I'm on tensorflow==2.15.0.post1 and these logs still appear
https://github.com/ipazc/mtcnn/issues/121
I found the solution
import keras
keras.utils.disable_interactive_logging()
anywhere on the python project (yours or the mtcnn lib)
https://github.com/ipazc/mtcnn/issues/121#issuecomment-1942708344
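Putting the workaround together, a sketch (disable_interactive_logging only exists in reasonably recent Keras releases, and the image paths are placeholders):
import keras
from deepface import DeepFace

keras.utils.disable_interactive_logging()  # silences the "1/1 [====...] - 0s" progress lines

result = DeepFace.verify("img1.jpg", "img2.jpg", model_name="ArcFace")
print(result["verified"], result["distance"])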
|
2025-04-01T04:35:28.596315
| 2015-04-24T12:40:03
|
70680275
|
{
"authors": [
"bipinmaurya",
"sergi"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10710",
"repo": "sergi/jsftp",
"url": "https://github.com/sergi/jsftp/issues/130"
}
|
gharchive/issue
|
ReferenceError: require is not defined(for var Net = require('net') and var JSFtp= require("jsftp"))
I downloaded the 'jsftp-master.zip' file, extracted it, and am using the jsftp.js file in my existing web application. From one of the JSP files I am calling the JavaScript method below,
..................................................
var JSFtp = require("jsftp");
function uploadFiles(){
var sourcefile="C:/Users/IBM_ADMIN/Documents/SametimeFileTransfers/Evergreen Hits Of Laxmikant Pyarelal.7z";
var destDir="/rchgsa-p1/03/fmtprjct/uploadtest";
var server="abcd.com";
var user="abc";
var port=21;
var passwrod="pqr";
var ftpServer = new JSFtp({
host:server,
user:user,
port:port,
pass:passwrod,
debugMode:true
});
ftpServer.auth(user, passwrod, function(hadErr) {
if (!hadErr)
alert("auth succesfull")
});
ftpServer.put(sourcefile, destDir, function(hadError) {
if (!hadError)
console.log("File transferred successfully!");
});
}
....................................................
While running in the Firefox browser I am getting the error
ReferenceError: require is not defined for var net = require('net')
I am very new to this API; can anyone please help me set it up? I suspect it requires some more dependent libraries, but I don't know which libraries are needed or where I can download them.
Thanks in advance.
Hi @bipinmaurya, jsftp is built to be run on the server using Node.js, not in the browser.
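A minimal sketch of the same upload run under Node.js (host, credentials and paths are placeholders taken from the snippet above):
// save as upload.js and run with: node upload.js
const JSFtp = require('jsftp');

const ftp = new JSFtp({ host: 'abcd.com', port: 21, user: 'abc', pass: 'pqr' });

ftp.put('/local/path/archive.7z', '/remote/dir/archive.7z', (hadError) => {
  if (!hadError) console.log('File transferred successfully!');
});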
|
2025-04-01T04:35:28.615025
| 2023-07-18T14:44:59
|
1810086267
|
{
"authors": [
"GazHank",
"lionelains",
"santigimeno",
"toptensoftware"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10711",
"repo": "serialport/node-serialport",
"url": "https://github.com/serialport/node-serialport/issues/2656"
}
|
gharchive/issue
|
Port not closed, process doesn't exit on OpenSuse Tumbleweed with Node v20
SerialPort Version
v9 and v11 (at least)
Node Version
v20.3.1
Electron Version
n/a
Platform
OpenSUSE Tumbleweed: Linux localhost.localdomain 6.4.3-1-default #1 SMP PREEMPT_DYNAMIC Tue Jul 11 06:23:11 UTC 2023 (5ab030f) x86_64 x86_64 x86_64 GNU/Linux
Architecture
x64
Hardware or chipset of serialport
FT232RL and cp210x
What steps will reproduce the bug?
See sample program below using Serial Port v11.
First problem:
Run the program as shown on OpenSUSE Tumbleweed with node v20 and the process never exits
Removing either the data handler section or the drain section allows the process to exit
Problem doesn't happen on Ubuntu with Node v20 or OpenSUSE Tumbleweed with Node v18
Problem happens with Serial Port v9.0.4 and v11.0.0
Second problem: (probably related), attempting to reopen the serial port after closing it fails with "[Error: Error Resource temporarily unavailable Cannot lock port]".
let { SerialPort } = require('serialport');
function open(device, baud)
{
let serialPortOptions = {
dataBits: 8,
stopBits: 1,
parity: 'none',
baudRate: baud,
path: device,
};
return new Promise((resolve, reject) => {
let port = new SerialPort(serialPortOptions, function(err) {
if (err)
reject(err);
else
resolve(port);
});
});
}
function drain(port)
{
// Drain port
return new Promise((resolve, reject) => {
port.drain((function(err) {
if (err)
reject(err);
else
resolve();
}));
});
}
function close(port)
{
return new Promise((resolve, reject) => {
port.close(function(err) {
if (err)
reject(err);
else
resolve();
});
})
}
function onDataHandler(data)
{
console.log("On Data");
}
(async function() {
// Open port
console.log("Opening...");
let port = await open("/dev/ttyUSB0", 115200);
console.log("OK");
// Data handler
port.on('data', onDataHandler);
port.off('data', onDataHandler);
// Drain
console.log("Draining");
await drain(port);
console.log("OK");
// Close
console.log("Closing");
await close(port);
console.log("OK");
// PROBLEM 1: Process never exits.
// Remove "data handler" or "drain" calls above, problem doesn't happen
// PROBLEM 2: (possibly same issue) attempting to re-open the port fails with
// "[Error: Error Resource temporarily unavailable Cannot lock port]"
// Open again...
/*
console.log("Opening (again)...");
port = await open("/dev/ttyUSB0", 115200);
console.log("OK");
*/
})();
What happens?
Process didn't exit even though all event handlers were removed, all completion callbacks invoked, and all promises resolved.
Port seems to be left open and can't be re-opened.
What should have happened?
Process should exit
Port should be able to be re-opened.
Additional information
As noted, doesn't happen on Ubuntu and doesn't happen pre-node v20.
Hi @toptensoftware
When you mention that this issue doesn't appear on Ubuntu, would I be correct to infer that you are using the LTS release (22.04)?
If so, I wonder if this is down to some recently introduced incompatibility between one of the distro packages, and one of the node js changes or dependencies?
Would it be possible to run some extra tests to narrow down the specific change which causes the incompatibility? for example to see if the issue also shows up on the bleeding edge versions of ubuntu?
Looking at the recent changes on both the opensuse tumbleweed and node v20, I wonder if it could be due to changes to systemd (249 vs 252) and/or libuv (1.44 vs 1.46) - however this is pure speculation at this point, and really needs investigation.
Good point re: Ubuntu version. I think I tested on Ubuntu 20.04, so not sure about 22.04. I'll setup a VM and let you know.
Same issue happens on Ubuntu 22.04, node v20
Doesn't happen on Ubuntu 22.04, node v16 (didn't test v18, I presume it's fine)
System info below for reference.
I hope this helps.
brad@lin:~$ uname -a
Linux lin 5.15.0-76-generic #83-Ubuntu SMP Thu Jun 15 19:16:32 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
brad@lin:~$ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 22.04.1 LTS
Release: 22.04
Codename: jammy
I think I can replicate this issue with Node.js 20.3.0, but not in 20.2.0
I fear this is going to be due to Node.js updating to libuv 1.45, and in particular the changes to introduce io_uring support
I'll try to investigate further as time permits
I can confirm that for me the issue only appears when the libuv dependency within Node.js is upgraded from 1.44 to 1.45. This is based on using serialport 11 and Ubuntu 22.04.2 LTS ( 5.19.0-46-generic ). Using the same versions of node and serialport doesn't show any regressions on Windows, so I'm mainly focusing on the unix-specific changes as the likely root cause.
I'll continue to look into this when I can, but hopefully this info will also help anyone else who is able to dig into the libuv changes and identify a fix.
I'm afraid the best workaround I can suggest at this time is to revert to node 20.2.0 or the LTS release (v18) until this issue is resolved
Thanks so much for looking into this. Hopefully this can get resolved quickly.
I'm investigating this. Can you post the output using DEBUG=* when using 20.2.0? Thanks!
See below for logs from 20.2 (works) and 20.4 (doesn't work). Used nvm to switch versions, running on Ubuntu 22.04.1 LTS
Node v20.2:
brad@lin:~/flashy/flashy$ DEBUG=* node flashy /dev/ttyUSB0 --status
serialport/bindings loading LinuxBinding +0ms
Opening /dev/ttyUSB0 at 115,200 baud... serialport/stream opening path: /dev/ttyUSB0 +0ms
serialport/binding-abstract open +0ms
ok
Waiting for device... serialport/stream _read queueing _read for after open +1ms
serialport/bindings/poller Creating poller +0ms
serialport/stream opened path: /dev/ttyUSB0 +1ms
serialport/stream _write 12 bytes of data +0ms
serialport/binding-abstract write 12 bytes +3ms
serialport/stream _read reading { start: 0, toRead: 65536 } +1ms
serialport/binding-abstract read +1ms
serialport/bindings/unixWrite Starting write 12 bytes offset 0 bytesToWrite 12 +0ms
serialport/bindings/unixRead Starting read +0ms
serialport/bindings/unixWrite write returned: wrote 12 bytes +0ms
serialport/bindings/unixWrite Finished writing 12 bytes +0ms
serialport/stream binding.write write finished +2ms
serialport/stream drain +0ms
serialport/binding-abstract drain +1ms
serialport/bindings/unixRead read error [Error: EAGAIN: resource temporarily unavailable, read] {
errno: -11,
code: 'EAGAIN',
syscall: 'read'
} +1ms
serialport/bindings/unixRead waiting for readable because of code: EAGAIN +0ms
serialport/bindings/poller Polling for "readable" +3ms
serialport/stream binding.drain finished +8ms
serialport/bindings/poller received "readable" +11ms
serialport/bindings/unixRead Starting read +11ms
serialport/bindings/unixRead Finished read 52 bytes +0ms
serialport/stream binding.read finished { bytesRead: 52 } +3ms
ok
Found device:
- Raspberry Pi 3 Model B+
- Serial: 00000000-bc1b1209
- CPU Clock: 600MHz (range: 600-1400MHz)
- Bootloader: rpi2-aarch32 v2.0.15, max packet size: 4096
Closing serial port... serialport/stream drain +1ms
serialport/binding-abstract drain +12ms
serialport/stream _read reading { start: 52, toRead: 65484 } +1ms
serialport/binding-abstract read +1ms
serialport/bindings/unixRead Starting read +2ms
serialport/stream binding.drain finished +0ms
serialport/stream #close +0ms
serialport/binding-abstract close +0ms
serialport/bindings/poller Stopping poller +2ms
serialport/bindings/poller Destroying poller +0ms
serialport/bindings/unixRead read error [Error: EAGAIN: resource temporarily unavailable, read] {
errno: -11,
code: 'EAGAIN',
syscall: 'read'
} +0ms
serialport/stream binding.read error Error: Port is not open
at unixRead (/home/brad/flashy/flashy/node_modules/@serialport/bindings/lib/unix-read.js:33:21) {
canceled: true
} +0ms
serialport/stream _read queueing _read for after open +0ms
serialport/stream binding.close finished +2ms
ok
brad@lin:~/flashy/flashy$
Node v20.4:
brad@lin:~/flashy/flashy$ DEBUG=* node flashy /dev/ttyUSB0 --status
serialport/bindings loading LinuxBinding +0ms
Opening /dev/ttyUSB0 at 115,200 baud... serialport/stream opening path: /dev/ttyUSB0 +0ms
serialport/binding-abstract open +0ms
ok
Waiting for device... serialport/stream _read queueing _read for after open +2ms
serialport/bindings/poller Creating poller +0ms
serialport/stream opened path: /dev/ttyUSB0 +1ms
serialport/stream _write 12 bytes of data +0ms
serialport/binding-abstract write 12 bytes +3ms
serialport/stream _read reading { start: 0, toRead: 65536 } +0ms
serialport/binding-abstract read +1ms
serialport/bindings/unixWrite Starting write 12 bytes offset 0 bytesToWrite 12 +0ms
serialport/bindings/unixRead Starting read +0ms
serialport/bindings/unixWrite write returned: wrote 12 bytes +1ms
serialport/bindings/unixWrite Finished writing 12 bytes +0ms
serialport/stream binding.write write finished +2ms
serialport/stream drain +0ms
serialport/binding-abstract drain +1ms
serialport/stream binding.drain finished +5ms
serialport/bindings/unixRead Finished read 52 bytes +8ms
serialport/stream binding.read finished { bytesRead: 52 } +3ms
ok
Found device:
- Raspberry Pi 3 Model B+
- Serial: 00000000-bc1b1209
- CPU Clock: 600MHz (range: 600-1400MHz)
- Bootloader: rpi2-aarch32 v2.0.15, max packet size: 4096
Closing serial port... serialport/stream drain +1ms
serialport/binding-abstract drain +9ms
serialport/stream _read reading { start: 52, toRead: 65484 } +0ms
serialport/binding-abstract read +0ms
serialport/bindings/unixRead Starting read +2ms
serialport/stream binding.drain finished +1ms
serialport/stream #close +0ms
serialport/binding-abstract close +1ms
serialport/bindings/poller Stopping poller +12ms
serialport/bindings/poller Destroying poller +0ms
serialport/stream binding.close finished +0ms
ok
(Note: didn't return to command prompt with 20.4)
Thanks for the logs.
I've been looking into this and I'm a bit confused. By running the example in 20.2 with strace I can see that the library opens the device in non-blocking mode
openat(AT_FDCWD, "/dev/pts/7", O_RDWR|O_NOCTTY|O_NONBLOCK|O_SYNC|O_CLOEXEC) = 20
By looking at the code, it seems uv_poll_t is used to poll for events in that specific fd.
But there's an initial read() on the device, which I guess is triggered by the data listener, without having received the EPOLLIN event, and which of course returns EAGAIN as the fd is non-blocking:
read(20, 0x58f20c0, 65536) = -1 EAGAIN (Resource temporarily unavailable)
Why is this read() called directly instead of waiting for the POLLIN event from the uv_poll_t handle? I'm saying this because that apparently unnecessary extra read is causing the issue: the uv_fs_t read request which wraps the read() syscall completes with EAGAIN in v20.2.0, whereas in v20.4.0 it doesn't, because the underlying io_uring implementation waits for some data to be available before returning.
Why is this read() called directly instead of waiting for the POLLIN event from uv_poll_t handle?
I guess this is a question for the serialport devs? @GazHank is this something you can answer/look into?
The initial read is getting queued by @serialport/stream in system-independent logic... but I'm on Windows at the moment and this example doesn't result in the read getting queued up...
https://github.com/serialport/node-serialport/blob/b2405f7998938140255307924d624f32c3454b6e/packages/stream/lib/index.ts#L278-L285
Once I can get back to a Linux box I think I'll try to run some check on older / LTS kernel versions to see if we get the same behaviour
Hi.
I'm facing the same issue here when running the following sample script using nodejs v20.4.0 on Ubuntu 23.10:
const {SerialPort} = require('serialport');
const f = async () => {
async function connect(sp) {
await new Promise((resolve, reject) => {
sp.open((error) => {
if (error) {
const err = new Error(`Error opening port: (${error.message})`);
return reject(err);
}
return resolve();
});
});
}
let sp;
sp = new SerialPort({ path:'/dev/ttyUSB1', baudRate: 115200, autoOpen: false});
sp.on('data', ()=>{}); // 1st data receiver
await connect(sp);
sp.close();
sp.removeAllListeners();
sp = new SerialPort({ path:'/dev/ttyUSB1', baudRate: 115200, autoOpen: false});
sp.on('data',()=>{}); // 2nd data receiver
await connect(sp);
sp.close();
sp.removeAllListeners();
}
f();
Note
When removing both lines sp.on(...) for 1st and 2nd data receiver, there is no issue anymore.
When removing only the 1st data receiver, execution either succeeds or hangs forever.
When keeping both lines like in the above example, or when removing only the 2nd data receiver, execution always fails with Resource temporarily unavailable Cannot lock port
The output when executing the above script with nodejs v20.4.0:
$ node test.js
test.js:9
const err = new Error(`Error opening port: (${error.message})`);
^
Error: Error opening port: (Error Resource temporarily unavailable Cannot lock port)
at SerialPort.<anonymous> (test.js:9:33)
at SerialPort._error (node_modules/@serialport/stream/dist/index.js:82:22)
at node_modules/@serialport/stream/dist/index.js:118:18
Node.js v20.4.0
Running the very same script with nodejs v20.2.0 does not lead to any error.
It looks like this problem is related to libuv, and maybe specifically to the use of the Linux kernel's facility io_uring instead of epoll for async operations that happened since this commit.
Indeed, one workaround is to switch back to the old behaviour in libuv using UV_USE_IO_URING. The following now works consistently for me in all cases (no hang, no error):
$ UV_USE_IO_URING=false node test.js
In the nodejs v20.2.0 I'm using, the embedded libuv is:
$ npm version | grep 'uv:'
uv: '1.44.2',
And in nodejs v20.4.0, it is:
$ npm version | grep 'uv:'
uv: '1.46.0',
|
2025-04-01T04:35:28.662708
| 2020-10-22T17:54:59
|
727601474
|
{
"authors": [
"LCaparelli",
"tsurdilo"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10713",
"repo": "serverlessworkflow/specification",
"url": "https://github.com/serverlessworkflow/specification/issues/155"
}
|
gharchive/issue
|
Add mechanism to control non-determinism in retries
What would you like to be added:
On the retry definition there is an option to increase the time period to be waited between attempts ("multiplier"). Many exponential backoff implementations also make use of a (usually small) bounded random amount of time that is added to the delay.
It would be nice if this bounded random amount of time (sometimes referred to as "jitter") was part of the spec, it can be very important to certain workloads and IMO it's better that this is enforced by the specification than to be left up to the implementation.
This parameter could be of type integer and be measured in milliseconds. The delay for the n-th attempt could be defined as: interval * exp(multiplier, n-1) + rand(0, jitter), though I believe the exact formula could be left up to the implementation. The key point here is enforcing a mechanism to provide bounded randomness.
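A small illustrative sketch of that formula (the exact formula, as noted, is left to implementations; function and parameter names are hypothetical):
import random

def retry_delay_ms(interval_ms, multiplier, jitter_ms, attempt):
    """Delay before the n-th attempt: exponential backoff plus bounded random jitter."""
    return interval_ms * multiplier ** (attempt - 1) + random.uniform(0, jitter_ms)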
Why is this needed:
This is important in scenarios where the transient failure is a result of a race condition (for example) between two actions which could be, by bad luck, causing the failure and waiting the same amount of time before retrying and causing the failure again, only to wait for the same amount of time again (in this scenario, their "interval" and "multiplier" would be the same).
I can help coming up with a proper definition (would love to), but first I wanted to know your thoughts on this. :-)
@LCaparelli +1 yes please come up with a definition and let's look at it.
Regarding 'This parameter could be of type integer and be measured in milliseconds' - we use ISO 8601 for all date/time definitions so please use it for this param as well.
@LCaparelli +1 yes please come up with a definition and let's look at it.
Awesome, will do. :-)
Regarding 'This parameter could be of type integer and be measured in milliseconds' - we use ISO 8601 for all date/time definitions so please use it for this param as well.
Right! No problem.
|
2025-04-01T04:35:28.674733
| 2020-12-01T22:20:55
|
754770269
|
{
"authors": [
"jorgenj",
"manuelstein",
"ricardozanini",
"tsurdilo"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10714",
"repo": "serverlessworkflow/specification",
"url": "https://github.com/serverlessworkflow/specification/pull/194"
}
|
gharchive/pull-request
|
Adding workflow 'visibility' parameter
Signed-off-by: Tihomir Surdilovic <EMAIL_ADDRESS>
Many thanks for submitting your Pull Request :heart:!
Please specify parts this PR updates:
[x] Specification
[x] Schema
[ ] Examples
[ ] Usecases
[ ] Extensions
[ ] Roadmap
[ ] Use Cases
[ ] Community
[ ] TCK
[ ] Other
What this PR does / why we need it:
Adds workflow scope. A visibility setting that can be either "public" or "private"
Special notes for reviewers:
Additional information (if needed):
From the discussion in the weekly call, i think there's 2 aspects to this idea:
a) authorization; whom can invoke a workflow, which actors/persons has the appropriate authorization to invoke a workflow. To me, AuthN and AuthZ feel like something that will be very runtime specific and should be left out of the spec.
b) Invocation methods; or 'how' can a workflow be invoked. For example, public implies some API can be called to invoke the service as well as potentially via event or as a sub-workflow, where-as private implies that only a sub-workflow invocation will be allowed (if I understood the idea correctly). Perhaps instead of adding scope, we should have the definition explicitly list the ways in which it can be invoked? We already do this for event-invoked and scheduler-invoked workflows. Currently, iirc, the spec doesn't really say much about api-invoked workflows, perhaps instead of implying it we should have definitions list when they can be API-invoked, and same for when they can be sub-workflow-invoked?
To give a concrete example, what if we changed slightly the way 'Start' for a workflow state could be defined such that you list there the exact ways in which this workflow is allowed to be invoked. For example:
{
...
"states":[
{
"start": {
"kind": ["subWorkflow","scheduled"],
"schedule": ...
},
"name":"firstState",
...
},
{
"name":"secondState",
...
}
],
...
}
There's probably a better way to model this approach, but hopefully that gets the idea across. In that example, that tells us that this workflow can be invoked by schedule or as a sub-workflow (but it's missing the 'default', so it cannot be invoked by some external API call).
Thanks, @jorgenj, for summarizing these two aspects.
I can think of a third one where the workflow definition is visible either to just its creator or to all developers on the same runtime (like GitHub repos, Docker container images or OpenStack VM images), but that's a property that should be managed by the runtime IMHO, so that's why I said scope would fit as an annotation. I think such annotations would for now go into metadata.
Invocation methods
Regarding workflow invocation, we have the special handling of Event-type states, that, if defined as a starting state, control when the workflow instances should be created. To me, a webhook invocation, OpenAPI binding, scheduled execution or invocation as a sub-flow are just additional ways to bind a workflow definition to invocation methods.
Some trivia: Argo and Tekton are treating workflow invocations as resources, i.e. a Workflow CRD (Argo) or TaskRun/PipelineRun CRD (Tekton) instance is created as invocation. Their coupling with Events uses event gateways that receive an event and create the resource as a result. IIRC, OpenWhisk also treats the invocation as a resource. Likewise, invocations in OpenStack Mistral are treated as addressable resources in a RESTful API.
Personally, I believe it shouldn't matter what creates the invocation. I think they're just bindings of the workflow specification to interfaces, whether it's a timer, a CloudEvent, a service function or a RESTful API. I'd prefer to have some common approach to any of these invocations/workflow bindings. CloudEvents are consumed in the EventState, a scheduled start state can trigger any state but with no input and we're now searching for a way to consume an exposed API call (e.g. a webhook call).
Should we create a special state type for this that can be called using an OpenAPI call? (Like the event state that can receive a CloudEvent)
Would you rather unify it all in a global section that names all the possible invocations and their starting states? (like in Moore/Mealy automata that have a starting state with a bunch of conditions/transitions that put the workflow in action)
Should we unify this in the start struct and also allow event/function triggers like @jorgenj suggested?
I just wanted to add a simple construct that marks a workflow as "exposed publicly" or "not exposed publicly". I possibly made the text sound like it's tied to invocations, which it is not. This is as simple as, for example, a public or private class or method in Java. That's all this is ;)
rebased
@manuelstein @jorgenj After reading your comments I realized my pr was misleading (see my previous comment). I have updated this pr to:
call the added top-level workflow param "visibility" as that is closer to its intended purpose
updated the parameter description
Please let me know if this is ok now. I think per your comments there are definitely things we can add to further define/restrict the invocation of workflows and I will look at that asap, but the purpose of this property/pr was not to deal with actual invocation changes.
Please review again if you have the chance.
I'm confused about what it means for something to be 'visible'. I think this implies that the spec needs to describe other things that come with this. For example, if a workflow is 'public' or 'private', then don't we need to describe the conditions under which a user/principal can see private workflows/APIs or not? And how to determine what type of user a user/principal is (public vs. private). IIRC, the spec doesn't currently describe the invocation API at all, but this seems to be adding restrictions to the process of invocation (ie, who is the user and can they access private workflows).
@jorgenj I think you might be mixing up visibility and access control here.
A workflow being "private" just means it cannot be instantiated from outside and allows us to declare subflows that cannot be triggered as standalone (thus "private"). We will hopefully soon be adding access control to the language, but this is not part of those efforts. HTH
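As a rough sketch of the intent (the field set other than visibility is illustrative, not taken from the PR):
{
  "id": "provisionorder",
  "name": "Provision Orders",
  "visibility": "private",
  "states": []
}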
Sorry, @tsurdilo, but I also can't follow what is "outside". I see the current idea is to make a distinction between a workflow being used as a subflow only (private) and a workflow being exposed through an API or Event (public). I don't think that the param is addressing this issue the right way. Whether or not a workflow is bound to an API (gateway) exposes it to external calls (with and without authorization, i.e. allowing the public or only authenticated and authorized users to invoke them).
What confuses me is: when a workflow is marked private and I have more than one tenant on my system, can tenant A still invoke tenant B's workflow as a subflow? And if no, what would I choose if I don't want it exposed through the external-facing API, but still I want everyone on my platform to be able and use it as a subflow?
Should a workflow that can be triggered by an event (starting state is Event-type) still be exposed through an API?
IIUC, we want to have workflows that have no external-facing exposure (neither events, nor API calls), so I think we should make these bindings optional then.
@manuelstein
regarding:
when a workflow is marked private and I have more than one tenant on my system, can tenant A still invoke tenant B's workflow as a subflow? And if no, what would I choose if I don't want it exposed through the external-facing API, but still I want everyone on my platform to be able and use it as a subflow?
Yes. Marking a workflow as "private" means it can be used by other workflows only (via the workflowId parameter in parallel states or subflow states etc)
regarding:
Should a workflow that can be triggered by an event (starting state is Event-type) still be exposed with an URL endpoint?
It can; it is up to the runtimes how they deal with event delivery. See https://github.com/cloudevents/spec/blob/v1.0/http-protocol-binding.md for example. Even if the event delivery is a message broker, for example, exposing the workflow as a RESTful service might still be very useful for testing scenarios.
IIUC, we want to have workflows that have no external-facing exposure (neither events, nor API calls), so I think we should make these bindings optional then.
Only API calls. If a public workflow triggers exec of a private subflow and that subflows includes an event state, consumption of events should be possible for it.
rebased
Should a workflow that can be triggered by an event (starting state is Event-type) still be exposed with an URL endpoint?
Hey @manuelstein just adding to @tsurdilo's reply, CloudEvents can also be sent via HTTP, so a REST endpoint to receive the event can be an alternative for such cases. Knative for instance makes POST requests to the service root path (/).
I think subflow: true could work in scenarios where the workflow won't be triggered by any other means but from a parent workflow. These sub-flows won't receive or react upon events. Not sure if this is what we want to achieve in this PR.
I agree that visibility: true could confuse people with access, but I prefer this option to subflow.
I think subflow: true could work in scenarios where the workflow won't be triggered by any other means but from a parent workflow. These sub-flows won't receive or react upon events. Not sure if this is what we want to achieve in this PR.
I agree that visibility: true could confuse people with access, but I prefer this option to subflow.
After meeting discussions, we decided to close this for now and may re-look if there is a need for it down the line
|
2025-04-01T04:35:28.676824
| 2023-11-04T07:08:44
|
1977208674
|
{
"authors": [
"lukehutch"
],
"license": "BSD-3-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10715",
"repo": "serverpod/serverpod",
"url": "https://github.com/serverpod/serverpod/issues/1534"
}
|
gharchive/issue
|
Queries are not using aliases
Describe the bug
Here is some SQL generated in a query:
SELECT
"endorsement"."id" AS "endorsement.id",
"endorsement"."endorserUserId" AS "endorsement.endorserUserId",
"endorsement"."endorseeUserId" AS "endorsement.endorseeUserId",
"endorsement"."howKnow" AS "endorsement.howKnow",
"endorsement"."endorsementText" AS "endorsement.endorsementText",
"endorsement"."approved" AS "endorsement.approved",
"endorsement"."date" AS "endorsement.date"
FROM "endorsement"
WHERE ("endorsement"."endorseeUserId" = 50 AND ((FALSE OR "endorsement"."approved" = true)
OR "endorsement"."endorserUserId" = 51))
Why are fields referenced in the form "endorsement"."endorseeUserId" in the WHERE clause, if all that effort was put in to define aliases like "endorsement.endorseeUserId"?
Sorry, I think I misunderstood the purpose of these aliases now that I looked at a more complex query. Closing.
|
2025-04-01T04:35:28.677914
| 2024-08-14T20:12:35
|
2466746735
|
{
"authors": [
"hidayet-poscu"
],
"license": "BSD-3-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10716",
"repo": "serverpod/serverpod",
"url": "https://github.com/serverpod/serverpod/issues/2610"
}
|
gharchive/issue
|
Unable to Find Expected Components on EC2 Instance Deployed with Terraform
After following the "AWS EC2 with Terraform" docs, I successfully deployed an EC2 instance. However, upon accessing the instance, I found neither Dart nor Serverpod installed on it, as I had expected.
It did not cause any errors during the installation phase.
config.auto.tfvars
instance_type = "t2.micro"
instance_ami = "ami-0ca285d4c2cda3300"
You need to update this field to match the corresponding AMI in your own AWS account/region.
The documentation is insufficient, and the YouTube videos for AWS are missing and incorrect.
|
2025-04-01T04:35:28.681359
| 2024-07-28T12:51:16
|
2433917480
|
{
"authors": [
"SandPod",
"lukehutch"
],
"license": "BSD-3-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10717",
"repo": "serverpod/serverpod",
"url": "https://github.com/serverpod/serverpod/pull/2546"
}
|
gharchive/pull-request
|
Copy fields of ServerException across to DatabaseException
All of the context of a PgException (specifically, the context in its subclass, ServerException) is lost when only the message is preserved in a DatabaseException. This means that the user cannot be notified, for example, of the character offset of a syntax error, which can make debugging query errors very difficult (you basically have to print out the query on the server console, copy/paste it into a proper Postgres client UI, and see where the error is highlighted).
This change copies all the fields of ServerException across to DatabaseException in a backwards-compatible way.
I will leave it up to you to add some of this extra info to the error logs, so that Serverpod Insights can report more info on database errors.
Pre-launch Checklist
[x] I read the Contribute page and followed the process outlined there for submitting PRs.
[x] This update contains only one single feature or bug fix and nothing else. (If you are submitting multiple fixes, please make multiple PRs.)
[x] I read and followed the Dart Style Guide and formatted the code with dart format.
[x] I listed at least one issue that this PR fixes in the description above.
[x] I updated/added relevant documentation (doc comments with ///), and made sure that the documentation follows the same style as other Serverpod documentation. I checked spelling and grammar.
[ ] I added new tests to check the change I am making.
[ ] All existing and new tests are passing.
[ ] Any breaking changes are documented below.
@lukehutch - Would it be OK if I take this PR over and get it in?
@SandPod please do
Sorry, the close here was not intentional. An incorrect reference to this PR accidentally closed the issue.
This feature was implemented in: https://github.com/serverpod/serverpod/pull/3047
|
2025-04-01T04:35:28.714251
| 2016-01-05T18:19:40
|
125022123
|
{
"authors": [
"fmmartins",
"jdm"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10722",
"repo": "servo/crowbot",
"url": "https://github.com/servo/crowbot/issues/27"
}
|
gharchive/issue
|
Add a command to find unassigned PRs
We use the assignee for a PR in servo/servo to indicate who is responsible for reviewing the changes. We should add a command that finds all of the PRs with no assignee and reports the title and url, one per line.
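Roughly, the query could look like this (a sketch against the GitHub REST API, not crowbot's existing helpers):
// List open PRs with no assignee, printed as "title url", one per line.
const https = require('https');

function findUnassignedPRs(repo, callback) {
  const options = {
    hostname: 'api.github.com',
    path: `/repos/${repo}/pulls?state=open&per_page=100`,
    headers: { 'User-Agent': 'crowbot-sketch' }, // GitHub requires a User-Agent
  };
  https.get(options, (res) => {
    let body = '';
    res.on('data', (chunk) => { body += chunk; });
    res.on('end', () => {
      const lines = JSON.parse(body)
        .filter((pr) => !pr.assignee)
        .map((pr) => `${pr.title} ${pr.html_url}`);
      callback(lines.join('\n'));
    });
  });
}

findUnassignedPRs('servo/servo', console.log);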
Hi, will work on it :+1:
Great! Feel free to ask questions about anything that's confusing!
Got it!
I think it would be better to abstract the searchGithub function, which currently is hardcoded to search within the /issues endpoint.
Since I have to make a request to the /pulls endpoint, it would be cleaner to abstract away searchGithub and pass an argument with the desired endpoint! What are your thoughts on this?
Yep, sounds sensible.
Alright! Should I create a separate PR for that?
Separate commits in the same PR would be great.
/close :stuck_out_tongue:
|
2025-04-01T04:35:28.858568
| 2021-12-29T13:27:19
|
1090520943
|
{
"authors": [
"bradub",
"pdomineaux"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10723",
"repo": "sethvargo/vault-on-gke",
"url": "https://github.com/sethvargo/vault-on-gke/issues/101"
}
|
gharchive/issue
|
terraform plan issue on recently cloned repo
What did you expect to happen?
I expected to see the terraform execution plan and see what resources will be deployed
What actually happened?
I got an error saying that some arguments are missing
Output
terraform plan --var "billing_account=$BILLING_ACCOUNT" --var "org_id=$ORG_ID"
Error: Missing required argument
on gcp.tf line 278, in resource "google_container_cluster" "vault":
278: workload_metadata_config {
The argument "mode" is required, but no definition was found.
Error: Unsupported argument
on gcp.tf line 279, in resource "google_container_cluster" "vault":
279: node_metadata = "SECURE"
An argument named "node_metadata" is not expected here.
Error: Unsupported argument
on gcp.tf line 293, in resource "google_container_cluster" "vault":
293: username = ""
An argument named "username" is not expected here.
Error: Unsupported argument
on gcp.tf line 294, in resource "google_container_cluster" "vault":
294: password = ""
An argument named "password" is not expected here.
Additional context
I just cloned the repo and set the required variables: billing_account and org_id
terraform --version
Terraform v0.12.31
+ provider.google v3.90.1
+ provider.google-beta v4.5.0
+ provider.kubernetes v2.7.1
+ provider.random v3.1.0
+ provider.tls v3.1.0
@pdomineaux hello, did you manage to bypass those issues?
@pdomineaux hello, did you manage to bypass those issues?
Nope, I still don't have a solution to this problem. Any clue?
|
2025-04-01T04:35:28.865319
| 2021-02-17T13:57:13
|
810207815
|
{
"authors": [
"jingpengw",
"nkemnitz",
"william-silversmith"
],
"license": "BSD-3-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10724",
"repo": "seung-lab/cloud-files",
"url": "https://github.com/seung-lab/cloud-files/issues/43"
}
|
gharchive/issue
|
Maximum supported image dimension is 65500 pixels
I am using jpeg compression of images but got an error.
My image chunk size is 256x256x256. It exceeds the dimension limit of the simplejpeg package.
Is there any option to increase this limit?
File "/Users/jwu/opt/anaconda3/envs/cf/lib/python3.7/site-packages/cloudvolume/datasource/precomputed/image/tx.py", line 270, in do_upload [84/1929]
encoded = chunks.encode(imgchunk, meta.encoding(mip), meta.compressed_segmentation_block_size(mip))
File "/Users/jwu/opt/anaconda3/envs/cf/lib/python3.7/site-packages/cloudvolume/chunks.py", line 53, in encode
return encode_jpeg(img_chunk)
File "/Users/jwu/opt/anaconda3/envs/cf/lib/python3.7/site-packages/cloudvolume/chunks.py", line 104, in encode_jpeg
quality=quality,
File "simplejpeg/_jpeg.pyx", line 474, in simplejpeg._jpeg.encode_jpeg
ValueError: Maximum supported image dimension is 65500 pixels
2021-02-17T13:52:41Z <Greenlet at 0x7fdba85ef290: realupdatefn> failed with ValueError
Hi Jingpeng!
It seems like the JPEG standard sets the maximum size of an image at 65535 x 65535.
https://en.wikipedia.org/wiki/JPEG
There is a discrepancy in that error message (65500), and I can contact the author about it, but it's close enough to the real value that using smaller chunks is probably the way to go.
good point. I can reduce chunk size.
It should be enough to lower the z dimension just a bit. The 3D chunk will be converted into a film strip, with the xy tiles side-by-side.
Looks like the reason 65500 is used is the following line from libjpeg-turbo:
#define JPEG_MAX_DIMENSION 65500L /* a tad under 64K to prevent overflows */
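To make the arithmetic concrete, here is a small sketch (plain Python, purely illustrative; the exact strip layout used by cloudvolume may differ) of why 256x256x256 just misses the limit while a slightly smaller z fits:
# Rough arithmetic for the JPEG "film strip" encoding of a 3D chunk:
# the xy slices are laid side-by-side, so one dimension grows to x * z.
JPEG_MAX_DIMENSION = 65500  # limit reported by libjpeg-turbo

def film_strip_shape(x, y, z):
    # Width grows to hold all z slices side-by-side; height stays y.
    return x * z, y

for z in (256, 255, 200):
    w, h = film_strip_shape(256, 256, z)
    ok = w <= JPEG_MAX_DIMENSION and h <= JPEG_MAX_DIMENSION
    print("chunk 256x256x{} -> strip {}x{} -> {}".format(z, w, h, "fits" if ok else "too large"))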
|
2025-04-01T04:35:28.877072
| 2022-08-12T10:17:53
|
1337020697
|
{
"authors": [
"Febus123"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10725",
"repo": "seydx/homebridge-camera-ui",
"url": "https://github.com/seydx/homebridge-camera-ui/issues/627"
}
|
gharchive/issue
|
Video stream stops with audio enabled
As soon as I enable audio, the video stream stops working.
VLC stream works fine incl audio.
Wansview W6 camera.
To Reproduce
Disabling audio resolves the video issue.
Expected behavior
Turning on audio enables audio for homekit video stream.
Codec issue.
|
2025-04-01T04:35:28.921427
| 2016-09-30T20:28:47
|
180395423
|
{
"authors": [
"alexcrichton",
"frewsxcv",
"sfackler"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10726",
"repo": "sfackler/rust-native-tls",
"url": "https://github.com/sfackler/rust-native-tls/pull/8"
}
|
gharchive/pull-request
|
Don't install OpenSSL on appveyor
Also don't need config on OSX as we're using SecureTransport there.
FYI, presumably relevant build fails on Appveyor.
We use OpenSSL as a client for server tests
Aha, nevermind!
|
2025-04-01T04:35:28.962341
| 2016-01-07T04:21:45
|
125319080
|
{
"authors": [
"howdoicomputer",
"jszwedko"
],
"license": "BSD-3-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10727",
"repo": "sfbrigade/adopt-a-drain",
"url": "https://github.com/sfbrigade/adopt-a-drain/issues/109"
}
|
gharchive/issue
|
Document how the app works for contributors
Documenting application components like the load_drains rake task and how it is used to populate the database. At least, I think that's what it does. This would help contributors quickly onboard themselves and be able to contribute as well as help other brigades fork this application for their own changes.
I don't mind doing this.
@howdoicomputer good call, I also realized this tonight and had pushed https://github.com/sfbrigade/adopt-a-drain/commit/482a54b331c062d402d7a235e82d9a6190bde047 . The docs could probably use a little more polishing around development though. Let me know what you think.
I was thinking that I would improve the wiki with architectural information on how the application functions.
So the localizing the application wiki article is very useful and was thinking about writing information that is moving down a similar line but more focused on how the code works internally so a potential contributor can grok the codebase more quickly.
I'd probably start from the perspective of models.
User Model ->
* How does authentication work
* How does session handling work
Thing Model ->
* What is a 'thing'
Reminder Model ->
* How do reminders work
Then from a front-end perspective I can go over main.js.erb and document that as well since it's almost 800 lines of JavaScript code that does... stuff.
Let me know if this is welcome and I can get started on going through the codebase.
@howdoicomputer I think that would be very useful work to make it easier for new developers to quickly ramp up and contribute. Thanks for starting this discussion!
|
2025-04-01T04:35:28.967897
| 2016-11-04T00:01:19
|
187221554
|
{
"authors": [
"jeanwalshie",
"jszwedko"
],
"license": "BSD-3-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10728",
"repo": "sfbrigade/adopt-a-drain",
"url": "https://github.com/sfbrigade/adopt-a-drain/issues/197"
}
|
gharchive/issue
|
Add a ticker to site with current # of drains adopted
Wouldn't that be cool??
:+1: I like this idea.
i got another inquiry from Vancouver today (in addition to other inquiries from media and others) about the current # of drains adopted. Just wanting to make sure this doesn't drop off the radar! :)
@jeanwalshie took a stab at this in #216 which ended up with something that looks like:
What do you think?
cool! you got it to work!? amazing, @jszwedko!
The way it displays, it almost looks like the drains have been adopted by that person. Or maybe that's because the sample number is only 3 vs. 1112.
i would suggest:
moving it up higher, maybe even just below the green logo.
can it say TOTAL DRAINS ADOPTED:
and maybe in smaller font last updated... (?) This would be to show that it's current/real time info.
@jeanwalshie :+1: thank you for the feedback! What do you think of something like:
i like it! what do you think about having it say "Total Drains Adopted in SF:"? I added "in SF" to make it clear that it's a total number, vs. a number adopted by the user. BUT this confusion will go away when the number is 1250 vs. 3 :)
Works for me! How about:
X drains adopted in SF
to make it a bit shorter.
Merged this in, let me know what you think @jeanwalshie !
|
2025-04-01T04:35:28.977871
| 2015-07-01T02:27:57
|
92248235
|
{
"authors": [
"afxjzs",
"heidar",
"mshibuya"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10729",
"repo": "sferik/rails_admin",
"url": "https://github.com/sferik/rails_admin/issues/2350"
}
|
gharchive/issue
|
PG::UndefinedTable in RailsAdmin::MainController#dashboard
I'm getting this error, even after a complete uninstall and reinstall. Rails_admin was working fine, and then it just stopped. I thought it was enum related, but I've used the initializer at https://gist.github.com/dmilisic/38fcd407044ace7514df and confirmed that ActiveRecord::RailsAdminEnum is loaded.
It seems like no matter what I do, I get this error. I've rebuilt the database several times via migrations (rake db:setup) only, so it's not some zombie table or something. I'm totally stumped. I thought I had something when I found out that enums don't play nice out of the box, but I can't get anything but this error.
Here's the full details:
ERROR: relation "errors" does not exist LINE 5: WHERE a.attrelid = '"errors"'::regclass ^
Extracted source (around line #592):
def exec_no_cache(sql, name, binds)
  log(sql, name, binds) { @connection.async_exec(sql, []) }
end

def exec_cache(sql, name, binds)
Is there something I am missing? I'm using a pretty standard enum implementation, and I renamed rails_admin.rb to z_rails_admin.rb to make sure it loads after the enum initializer mentioned in the gist I linked to.
I guess I'm the only one getting this error...
Latest release 0.7.0 has built-in support for ActiveRecord::Enum, so try and see if it works.
If something breaks, please paste full stack trace here.
Please reopen on update.
In my Gemfile.lock it says I'm using 0.7.0, so that's not the issue. Here's a gist of the full stack trace of the error: https://gist.github.com/afxjzs/828c9af28aa886f4b7d2
Any help is much appreciated. Thanks!
@mshibuya Can this please be reopened? I've posted the requested data. Please let me know what else I could post to help sort this out.
I was getting the same error because I have objects in my models folder that do not have associated tables.
I fixed it by adding a line to the config:
config.included_models = %w(User Post) # whatever models you want included in rails_admin
Not sure if this applies to you though but figured I'd post it anyway
@heidar THANKS!
This was the issue. There are no objects in our models folder without tables, but the whitelisting was what fixed it.
Thanks for your help guys!
|
2025-04-01T04:35:28.980605
| 2016-11-12T04:10:57
|
188891055
|
{
"authors": [
"rileyjshaw"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10730",
"repo": "sferik/t",
"url": "https://github.com/sferik/t/issues/351"
}
|
gharchive/issue
|
t list add skips accounts
When I run t list add on a group of users, an inconsistent number of accounts are actually added to the list. As far as I can tell this has nothing to do with rate-limiting.
Test case:
t list create --private test
t followings | xargs t list add test
My output:
> t list create --private test
@rileyjshaw created the list "test".
> t followings | xargs t list add test
@rileyjshaw added 196 members to the list "test".
When I check the list, it has anywhere between 13 and 107 members.
I tested this a few times, always with > 15 minutes between tests. I then ran two immediately after one another. The first group ended up having 27 members. The second had 48.
This is likely related to https://twittercommunity.com/t/nondeterminstic-behavior-for-lists-members-create-all/53640/22
|
2025-04-01T04:35:28.986727
| 2023-12-11T21:17:59
|
2036543448
|
{
"authors": [
"AndrewsBrewing"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10731",
"repo": "sfstar/hass-victron",
"url": "https://github.com/sfstar/hass-victron/issues/149"
}
|
gharchive/issue
|
GPS Info
Hello, I am new to this integration and new to a lot of things, but I just added a working gps sensor to my Cerbo. It is showing up in Remote Console as VRM instance 0 (zero).
But I can't find the lat and long in my entities. I reloaded the integration and rebooted Home Assistant. I'm not very smart. Am I doing something silly wrong?
Eric
I just realized I needed to check the box that says rescan the entities
|
2025-04-01T04:35:28.988617
| 2024-07-22T12:43:53
|
2422786477
|
{
"authors": [
"BlastyCZ",
"sfstar"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10732",
"repo": "sfstar/hass-victron",
"url": "https://github.com/sfstar/hass-victron/issues/223"
}
|
gharchive/issue
|
Missing state for vebus entity
The reported value 244.0 for entity vebus state isn't a decodable value. Please report this error to the integration's maintainer
Enum is missing this particular value:
https://github.com/sfstar/hass-victron/blob/main/custom_components/victron/const.py#L173
Values from docs:
0=Off;1=Low Power;2=Fault;3=Bulk;4=Absorption;5=Float;6=Storage;7=Equalize;8=Passthru;9=Inverting;10=Power assist;11=Power supply;244=Sustain;252=External control
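For illustration only (plain Python, not the integration's actual const.py), a decode table built from the documented values quoted above, including the missing 244; the helper name is made up:
# Hypothetical lookup table for the VE.Bus state values listed in the docs.
VEBUS_STATE = {
    0: "Off",
    1: "Low Power",
    2: "Fault",
    3: "Bulk",
    4: "Absorption",
    5: "Float",
    6: "Storage",
    7: "Equalize",
    8: "Passthru",
    9: "Inverting",
    10: "Power assist",
    11: "Power supply",
    244: "Sustain",
    252: "External control",
}

def decode_vebus_state(raw):
    # Fall back to a readable placeholder instead of raising on unknown values.
    return VEBUS_STATE.get(int(raw), "Unknown ({})".format(raw))

print(decode_vebus_state(244.0))  # -> Sustain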
Will add this to an upcoming PR
|
2025-04-01T04:35:29.032102
| 2020-11-03T02:30:02
|
734969089
|
{
"authors": [
"scala-steward"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10734",
"repo": "shadaj/scalapy",
"url": "https://github.com/shadaj/scalapy/pull/106"
}
|
gharchive/pull-request
|
Update sbt-sonatype to 3.9.5
Updates org.xerial.sbt:sbt-sonatype from 3.9.4 to 3.9.5.
GitHub Release Notes - Release Notes - Version Diff
I'll automatically update this PR to resolve conflicts as long as you don't change it yourself.
If you'd like to skip this version, you can just close this PR. If you have any feedback, just mention me in the comments below.
Configure Scala Steward for your repository with a .scala-steward.conf file.
Have a fantastic day writing Scala!
Ignore future updates
Add this to your .scala-steward.conf file to ignore future updates of this dependency:
updates.ignore = [ { groupId = "org.xerial.sbt", artifactId = "sbt-sonatype" } ]
labels: sbt-plugin-update, semver-patch
Superseded by #160.
|
2025-04-01T04:35:29.034927
| 2015-03-15T17:54:12
|
61874788
|
{
"authors": [
"dotemacs"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10735",
"repo": "shadchnev/zoopla",
"url": "https://github.com/shadchnev/zoopla/pull/1"
}
|
gharchive/pull-request
|
Update gem dependencies
Since the gem hasn't had any updates for 4 years and I needed to use
it... I decided to contribute. All the dependencies have been updated
since the gem was originally created. And in the gemspec file they've
been locked down pretty tight. I've added somewhat looser constraints,
while allowing the gem to build and run with the latest stable Ruby
2.2.1.
Also this line:
https://github.com/shadchnev/zoopla/blob/master/test/test_api.rb#L36
and this line:
https://github.com/shadchnev/zoopla/blob/master/test/test_api.rb#L43
define the same test name, so Test Unit throws this error when
rake test
is run:
Started
.N
===============================================================================
TestAPI#test_actual_location_fallback was redefined [test_actual_location_fallback(TestAPI)]
/Users/alex/dev/ruby/zoopla/test/test_api.rb:67:in `<class:TestAPI>'
===============================================================================
...............................................
Finished in 0.027138 seconds.
48 tests, 159 assertions, 0 failures, 0 errors, 0 pendings, 0 omissions, 1 notifications
100% passed
1768.74 tests/s, 5858.94 assertions/s
Maybe a rename would be in order? Not sure what would be the best name to go with.
Also removed the explicit require of RubyGems, see the commit message for details.
And set HTTPS for gem handling in the Gemfile.
Maybe I was too hasty with the removal of:
require 'rubygems'
Looking at the bundler documentation again, I can see this is advised:
http://bundler.io/bundler_setup.html
Whereas the link I shared is for bundler version 1.3.
Let me know what you want to go with and I'll amend the pull request.
Thanks for your time :smile:
|
2025-04-01T04:35:29.043402
| 2023-05-31T23:32:23
|
1735174600
|
{
"authors": [
"AlfredOdling",
"AmarnathN",
"bluengreen",
"da1z",
"i-m-abbhay",
"j-5-s",
"khanhquocnguyen",
"lightbluepoppy",
"lucas-quinn",
"pmabres",
"shadcn",
"vespertilian"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10736",
"repo": "shadcn/ui",
"url": "https://github.com/shadcn/ui/issues/516"
}
|
gharchive/issue
|
fontFamily is not applied in tailwind.config.js file
When I comment out the code below from the tailwindcss.config.cjs file, I get sans fonts displayed. Otherwise I somehow get serif fonts with the sans key. How do I get the proper fonts?
fontFamily: {
sans: ['var(--font-sans)', ...fontFamily.sans],
},
@lightbluepoppy Are you using a src directory? I had this issue but I realized I didn't update the tailwind.config content to find files within my src directory. Now I'm having an issue with it not applying to dropdown menu.
This looks correct.
content: ['./src/**/*.{js,ts,jsx,tsx}'],
@lightbluepoppy I'm not using the app router so I had to do some other things that solved my dropdown menu typography issue. I needed to update the _app.tsx and the _document.tsx.
In the _document.tsx I had to add this
<body className={`min-h-screen bg-background ${fontSans.variable} font-sans antialiased`}>
and in the _app.tsx I added a html wrapper and added the font-sans there as well
<main className={`${fontSans.variable} font-sans`}>
I thought it was weird it needed to be added in both places. Not sure how relevant that is for your use case but maybe that will trigger some ideas. Good luck!
So if you are following the manual install guide, @lightbluepoppy, you'll run into this issue: the CSS variable 'var(--font-sans)' is not defined anywhere when doing a manual install, so resolving it fails and the font falls back to a serif. You can either define that CSS variable in the same CSS file where you've defined the rest of the variables, or you can just remove the variable usage and use ...fontFamily.sans
Docs should be updated, this got me too - https://ui.shadcn.com/docs/installation/manual
Guys. have you been able to resolve the issue? I am getting same issue on my side.
Guys. have you been able to resolve the issue? I am getting same issue on my side.
See my proposed fix above!
Guys. have you been able to resolve the issue? I am getting same issue on my side.
See my proposed fix above!
Thanks @pmabres, I have resolved the issue :). The issue was in the tailwind.config.js file: there were two extend properties, one for colors, borderRadius, and animation (which shadcn had already written) and one for fontFamily (which I was writing).
why it works here but not for us https://github.com/shadcn-ui/taxonomy/blob/651f984e52edd65d40ccd55e299c1baeea3ff017/tailwind.config.js#L62 ?
Just for anyone who does not like to read, @pmabres' solution looks like this:
from
fontFamily: {
sans: ["var(--font-sans)", ...fontFamily.sans],
},
to
fontFamily: {
sans: [...fontFamily.sans],
},
Worked for me thanks!
the following worked for me after adding the import statement in tailwind.config.js
import { fontFamily } from "tailwindcss/defaultTheme"
module.exports = {
...,
theme:{
...
fontFamily: {
sans: ["var(--font-sans)", ...fontFamily.sans],
},
}
..
}
This issue has been automatically closed because it received no activity for a while. If you think it was closed by accident, please leave a comment. Thank you.
@shadcn : I followed the manual instruction and face the same issue.
https://ui.shadcn.com/docs/installation/manual
Try removing "var(--font-sans)", worked for me :)
It looks like the manual documentation guide suggests an undefined fontFamily. Removing fontFamily altogether solves the issue.
|
2025-04-01T04:35:29.074273
| 2016-03-19T06:22:54
|
142033405
|
{
"authors": [
"Alienero",
"BlueSplash",
"LibertyLocked",
"arthurkiller",
"haroldrandom"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10737",
"repo": "shadowsocks/shadowsocks-go",
"url": "https://github.com/shadowsocks/shadowsocks-go/issues/120"
}
|
gharchive/issue
|
How do I view the logs on the ss-go server side?
The server runs Linux; I couldn't find any log files.
I think the log information is printed to stdout. You can redirect it to a file.
How do I redirect stdout to a file?
./shadowsocks-server > ss.log 2>&1 &
Thanks.
Can I configure the log level?
@haroldrandom sure
|
2025-04-01T04:35:29.086230
| 2015-10-20T18:40:36
|
112432764
|
{
"authors": [
"justin808",
"mezod"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10738",
"repo": "shakacode/bootstrap-sass-loader",
"url": "https://github.com/shakacode/bootstrap-sass-loader/issues/33"
}
|
gharchive/issue
|
getting it to work
Hi, I am having some trouble getting it to work.
I first installed bootstrap-sass-loader
npm install --save bootstrap-sass-loader
which also installed bootstrap-sass.
Then I added
require('bootstrap-sass-loader');
to my index.js.
I want to use the full bootstrap, so I don't need to customize it. However, this doesn't seem to work when I define a
<span class="glyphicon glyphicon-plus" aria-hidden="true"></span>
I added
{
test: /bootstrap\/js\//,
loader: 'imports?jQuery=jquery',
},
{
test: /\.woff(\?v=\d+\.\d+\.\d+)?$/,
loader: 'url?limit=10000&mimetype=application/font-woff',
},
{
test: /\.woff2(\?v=\d+\.\d+\.\d+)?$/,
loader: 'url?limit=10000&mimetype=application/font-woff',
},
{
test: /\.ttf(\?v=\d+\.\d+\.\d+)?$/,
loader: 'url?limit=10000&mimetype=application/octet-stream',
},
{
test: /\.eot(\?v=\d+\.\d+\.\d+)?$/,
loader: 'file',
},
{
test: /\.svg(\?v=\d+\.\d+\.\d+)?$/,
loader: 'url?limit=10000&mimetype=image/svg+xml',
}
to my webpack config, with no luck. Am I missing something here? Thanks!
@mezod Please see how we're using this here:
https://github.com/shakacode/react-webpack-rails-tutorial/blob/master/client/webpack.client.hot.config.js
And please keep me posted on getting this solved.
Thanks, apparently I was doing it right but I had corrupted packages... rm -rf node_modules + npm install and it worked as expected :-)
|
2025-04-01T04:35:29.155106
| 2022-06-07T19:26:24
|
1263765496
|
{
"authors": [
"gomesalexandre"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10741",
"repo": "shapeshift/web",
"url": "https://github.com/shapeshift/web/pull/1974"
}
|
gharchive/pull-request
|
chore: use AddressZero const from @ethersproject/constants
Description
This replaces all occurrences of 0x0000000000000000000000000000000000000000 with AddressZero imported from ethers.js:
https://github.com/ethers-io/ethers.js/blob/a71f51825571d1ea0fa997c1352d5b4d85643416/packages/constants/src.ts/addresses.ts#L1
Notice
[x] Have you followed the guidelines in our Contributing guide?
[x] Have you checked to ensure there aren't other open Pull Requests for the same update/change?
Pull Request Type
[ ] :bug: Bug fix (Non-breaking Change: Fixes an issue)
[x] :hammer_and_wrench: Chore (Non-breaking Change: Doc updates, pkg upgrades, typos, etc..)
[ ] :nail_care: New Feature (Breaking/Non-breaking Change)
Issue (if applicable)
N/A
Risk
N/A, this is the same string
Testing
N/A
Screenshots (if applicable)
Looks good.
One thought: did we want to make our own AddressZero constant and put it somewhere in lib to be less dependent on an external package? I know we already use ethers, though.
Yeah definitely! As you mentioned, us already having ethers.js as a dependency is what allowed me to import it here without needing to think twice before adding a new dependency.
Extending that thought, we might want a "constants" package, as the current chainId and assetId constants don't quite feel right in the caip package, but it is the best place for them for now.
Agreed. I've also been giving it some thought with the chain agnosticism refactor going on, and all these constants don't really fit in caip and a whole new lib package might give them a better home. If/when (?) we create it, we can have AddressZero exported there!
|
2025-04-01T04:35:29.174393
| 2023-01-17T21:35:30
|
1537066279
|
{
"authors": [
"gomesalexandre"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10742",
"repo": "shapeshift/web",
"url": "https://github.com/shapeshift/web/pull/3640"
}
|
gharchive/pull-request
|
feat: use either savers previous address or highest balance xpub address as from
Description
This PR ensures we use either an arbitrarily chosen address (the one with the most balance in accumulated UTXOs) or the previously used deposit address as from, meaning UTXOs will be filtered to those coming from it, and it will be used as the change address, see https://github.com/shapeshift/lib/pull/1168
This also brings the notion of either one or two Txs being broadcasted in the same block:
At best, one Tx if the picked address has enough UTXO value
At worst, a pre-Tx to send max to that address, and then the originally intended Tx
Notes:
this only applies to UTXOs, since with the other blockchains we support, the account is the address, so we don't have such a problem.
Notice
[x] Have you followed the guidelines in our Contributing guide?
[x] Have you checked to ensure there aren't other open Pull Requests for the same update/change?
Pull Request Type
[ ] :bug: Bug fix (Non-breaking Change: Fixes an issue)
[ ] :hammer_and_wrench: Chore (Non-breaking Change: Doc updates, pkg upgrades, typos, etc..)
[x] :nail_care: New Feature (Breaking/Non-breaking Change)
Issue (if applicable)
N/A
Risk
None, isolated to savers
Testing
Engineering
It is possible to deposit into UTXO savers
The change is sent to the address which input(s) was/were used
Ensure you can re-deposit into the same savers opportunities and confirm the input(s) address is the same as the original Tx
The change is sent to the address which input(s) was/were used
Send max will not work - gas fee to be deducted for it in a follow-up PR, as well as making sure we send enough dust for one or a few re-deposits/withdraws in a follow-up PR
Operations
Nothing to see here
Screenshots (if applicable)
Current dependencies on/for this PR:
develop
PR #3583
PR #3587
PR #3600
PR #3598
PR #3612
PR #3614
PR #3626
PR #3613
Opening for early review
|
2025-04-01T04:35:29.186019
| 2018-05-15T03:40:20
|
323055747
|
{
"authors": [
"codecov-io",
"saaavsaaa"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10743",
"repo": "sharding-sphere/sharding-sphere",
"url": "https://github.com/sharding-sphere/sharding-sphere/pull/830"
}
|
gharchive/pull-request
|
check force index syntax error
Check for FORCE INDEX syntax errors, like FORCE INDEX () or FORCE INDEX (,).
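For illustration only, a small Python sketch of the kind of check described (the real change lives in the Java TableReferencesClauseParser; the function and regex here are assumptions):
import re

# Hypothetical validator: FORCE INDEX must list at least one index name,
# so FORCE INDEX () and FORCE INDEX (,) should be rejected.
FORCE_INDEX = re.compile(r"FORCE\s+INDEX\s*\(([^)]*)\)", re.IGNORECASE)

def check_force_index(sql):
    for match in FORCE_INDEX.finditer(sql):
        names = [part.strip() for part in match.group(1).split(",")]
        if not any(names):
            raise ValueError("syntax error in clause: " + match.group(0))

check_force_index("SELECT * FROM t FORCE INDEX (idx_a)")   # passes
# check_force_index("SELECT * FROM t FORCE INDEX (,)")     # would raise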
Codecov Report
Merging #830 into dev will decrease coverage by 0.04%.
The diff coverage is 33.33%.
@@ Coverage Diff @@
## dev #830 +/- ##
=========================================
- Coverage 63.24% 63.2% -0.05%
=========================================
Files 494 494
Lines 9720 9723 +3
Branches 1600 1601 +1
=========================================
- Hits 6147 6145 -2
- Misses 3226 3232 +6
+ Partials 347 346 -1
Impacted Files                                           Coverage Δ
...ing/parser/clause/TableReferencesClauseParser.java    78.87% <33.33%> (-2.01%) :arrow_down:
...spring/datasource/SpringMasterSlaveDataSource.java    83.33% <0%> (-16.67%) :arrow_down:
...al/state/datasource/DataSourceListenerManager.java    50% <0%> (-4.55%) :arrow_down:
...ternal/state/instance/InstanceListenerManager.java    38.46% <0%> (-3.85%) :arrow_down:
.../internal/config/ConfigurationListenerManager.java    71.42% <0%> (-3.58%) :arrow_down:
...ce/OrchestrationShardingDataSourceFactoryBean.java    77.77% <0%> (+11.11%) :arrow_up:
Continue to review full report at Codecov.
Legend - Click here to learn more
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update c905945...1145760. Read the comment docs.
|
2025-04-01T04:35:29.189228
| 2020-12-16T12:20:07
|
768802332
|
{
"authors": [
"swarley",
"xbenjii"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10744",
"repo": "shardlab/discordrb",
"url": "https://github.com/shardlab/discordrb/issues/5"
}
|
gharchive/issue
|
Support new slash commands
Describe your feature request in as much detail as you can here.
It is a feature supported by Discord's API, but missing from the library
https://discord.com/developers/docs/interactions/slash-commands
The python example is
url = "https://discord.com/api/v8/applications/<my_application_id>/commands"
json = {
"name": "blep",
"description": "Send a random adorable animal photo",
"options": [
{
"name": "animal",
"description": "The type of animal",
"type": 3,
"required": True,
"choices": [
{
"name": "Dog",
"value": "animal_dog"
},
{
"name": "Cat",
"value": "animal_dog"
},
{
"name": "Penguin",
"value": "animal_penguin"
}
]
},
{
"name": "only_smol",
"description": "Whether to show only baby animals",
"type": 5,
"required": False
}
]
}
# For authorization, you can use either your bot token
headers = {
"Authorization": "Bot 123456"
}
# or a client credentials token for your app with the applications.commands.update scope
headers = {
"Authorization": "Bearer abcdefg"
}
r = requests.post(url, headers=headers, json=json)
Anyone waiting for this feature can view our project for it to see the progress. I hope to get this finished by Jan 2021.
|
2025-04-01T04:35:29.205118
| 2017-09-24T15:16:22
|
260092214
|
{
"authors": [
"Unisay",
"sharkdp"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10745",
"repo": "sharkdp/purescript-bigints",
"url": "https://github.com/sharkdp/purescript-bigints/issues/4"
}
|
gharchive/issue
|
toBase
At the moment, it's possible to parse a string representation of a base-n encoded number using fromBase :: Int -> String -> BigInt
However, it's not currently possible to go the other way around, from a BigInt to its base-n representation (only the decimal base is supported via toString)
So, having a method toBase :: Int -> BigInt -> String would be quite useful and make things isomorphic.
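As a rough illustration of the intended semantics only (plain Python, not the PureScript API; the digit alphabet and argument order are assumptions), toBase would be the inverse of fromBase:
# Sketch of toBase/fromBase semantics using Python's arbitrary-precision ints.
DIGITS = "0123456789abcdefghijklmnopqrstuvwxyz"

def to_base(base, n):
    if not 2 <= base <= len(DIGITS):
        raise ValueError("unsupported base")
    if n == 0:
        return "0"
    sign, n = ("-", -n) if n < 0 else ("", n)
    out = []
    while n:
        n, digit = divmod(n, base)
        out.append(DIGITS[digit])
    return sign + "".join(reversed(out))

def from_base(base, s):
    return int(s, base)

# Round trip: from_base(b, to_base(b, x)) == x
x = 2 ** 100 + 12345
assert from_base(16, to_base(16, x)) == x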
Sounds good!
Looks like this is supported by the underlying JavaScript module: https://github.com/peterolson/BigInteger.js#tostringradix--10
Here comes the PR... in 5 mins
|
2025-04-01T04:35:29.207605
| 2024-10-02T07:35:54
|
2560860896
|
{
"authors": [
"PAVAN-VANAM",
"Riddhi12349",
"sharmavikas4"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10746",
"repo": "sharmavikas4/MERN_BLOG",
"url": "https://github.com/sharmavikas4/MERN_BLOG/issues/3"
}
|
gharchive/issue
|
Structure Backend
Hello @sharmavikas4
All of your backend details are under one file, index.js; it's hard to understand and looks unstructured.
I would like to structure the whole backend with routing and MVC architecture.
I want to work on this project. Can you assign it to me?
Great issue, @PAVAN-VANAM. It will be really appreciable if you are able to structure the project and apply MVC architecture. Go ahead.
hi please assign me this issue
@Riddhi12349 it is already completed by @PAVAN-VANAM
|
2025-04-01T04:35:29.255255
| 2018-11-23T10:08:47
|
383764697
|
{
"authors": [
"phamwon",
"weeryan17"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10747",
"repo": "shauleiz/vJoy",
"url": "https://github.com/shauleiz/vJoy/issues/24"
}
|
gharchive/issue
|
vJoy failed to install - Can't install driver - Windows 10 1809 B18282.1
I'm using the latest vJoy v2.1.8 Build 39
+++++++ +++++++ +++++++ +++++++ +++++++ +++++++ +++++++ +++++++ +++++++ +++++++
+++++++ +++++++ Fri Nov 23 17:01:52 2018
+++++++ +++++++ OS: 10.0 (x64)
main: DeviceHWID --> root\VID_1234&PID_BEAD&REV_0218 ; InfFile --> vJoy.inf
[I] FindInstalled: Searching for HWID root\VID_1234&PID_BEAD&REV_0218
[I] FindInstalled: Searching for HWID root\VID_1234&PID_BEAD
[I] Install: GetFullPathName --> C:\Program Files\vJoy\vJoy.inf
[I] Install: hwIdList --> root\VID_1234&PID_BEAD&REV_0218
[I] Install: SetupDiGetINFClass --> Class Name HIDClass
[I] Install: SetupDiCreateDeviceInfoList OK
[I] Install: SetupDiCreateDeviceInfo OK
[I] Install: SetupDiSetDeviceRegistryProperty OK
[I] Install: SetupDiCallClassInstaller OK
[I] Install: Starting cmdUpdate
[I] cmdUpdate: GetFullPathName --> C:\Program Files\vJoy\vJoy.inf
[I] cmdUpdate: Install: Starting cmdUpdate
[I] cmdUpdate: File newdev.dll loaded OK
[I] cmdUpdate: UPDATEDRIVERFORPLUGANDPLAYDEVICES got OK
[I] cmdUpdate: CMP_WaitNoPendingInstallEvents returned WAIT_OBJECT_0
[I] cmdUpdate: UPDATEDRIVERFORPLUGANDPLAYDEVICES(hwid=root\VID_1234&PID_BEAD&REV_0218, InfPath=C:\Program Files\vJoy\vJoy.inf) executed OK
[I] cmdUpdate returns code 0
[I] Install: Finished cmdUpdate
[I] Install: SetupDiGetDeviceInstanceId (Device Instance Path=ROOT\HIDCLASS\0000) OK
[I] Install() OK - No need to reboot
[I] GetParentDevInst: ParentDeviceNode = ROOT\HIDCLASS\0000 , CompatibleId = hid_device_system_game
[I] GetParentDevInst: Function CM_Locate_DevNode OK
[E] AssignCompatibleId: Function CM_Get_Child failed with error: 0000000D
[I] RemoveDevice: ParentDeviceNode = ROOT\HIDCLASS\0000
[I] RemoveDevice: Function CM_Locate_DevNode failed with error: 00000000
[I] RemoveDevice: Function SetupDiCreateDeviceInfoList OK
[I] RemoveDevice: Function CM_Get_Device_ID_Size OK
[I] RemoveDevice: Function CM_Get_Device_ID (Device Instance Path = ROOT\HIDCLASS\0000) OK
[I] RemoveDevice: Function SetupDiOpenDeviceInfo OK
[I] GetOEMInfFileName: Starting
[I] GetOEMInfFileName: Function SetupDiGetDeviceInstallParams OK
[I] GetOEMInfFileName: Function SetupDiSetDeviceInstallParams OK
[I] GetOEMInfFileName: Function SetupDiBuildDriverInfoList OK
[I] GetOEMInfFileName: Function SetupDiEnumDriverInfo for "vJoy Device" OK
[I] GetOEMInfFileName: Function SetupDiGetDriverInfoDetail OK. INF file is C:\WINDOWS\INF\oem2.inf
[I] GetOEMInfFileName: Function GetFullPathName OK. INF file is oem2.inf
[I] RemoveDevice: Going to remove file oem2.inf
[I] RemoveDevice: File oem2.inf removed
[I] RemoveDevice: Function SetupDiRemoveDevice OK
duplicate of #23 from the looks of it
|
2025-04-01T04:35:29.270727
| 2022-07-08T01:29:27
|
1298348541
|
{
"authors": [
"shaysingh818"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10748",
"repo": "shaysingh818/DTunes",
"url": "https://github.com/shaysingh818/DTunes/issues/10"
}
|
gharchive/issue
|
Migrate Video Loader To Rust
The urls library is responsible for grabbing YouTube URLs and downloading them in WAV format. The library also stores the URL/WAV file information in the database. Currently this process uses a Python script that takes in a URL as an argument and uses youtube-dl to download the video locally. The rest of the code is written in C with multi-threaded logic to call the script and download multiple URLs in parallel. Python is slow and the C code is being bottlenecked by it.
Create a high-level design document for how external audio files can be loaded into DTunes. This could involve grabbing audio files from YouTube, LimeWire, or any free audio source. Rust has bindings for youtube-dl and a decent multi-threading model. Migrate the URL loading library to Rust. All the processes involving loading/fetching data should be done in Rust. All the audio-processing-related algorithms can remain in C.
This is a bad idea, closing this issue
|
2025-04-01T04:35:29.279661
| 2015-09-15T22:37:57
|
106662245
|
{
"authors": [
"nathantsoi",
"sheaivey"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10749",
"repo": "sheaivey/rx5808-pro-diversity",
"url": "https://github.com/sheaivey/rx5808-pro-diversity/issues/27"
}
|
gharchive/issue
|
Add pulldown resistor to RSSI.
In the event a receiver loses power or RSSI gets disconnected, this will ensure there are no floating pins on the ATmega. This will make sure the other receiver fails over quickly.
Building one of these now; any suggestions on how strong to make the pulldown? Seems like 10k might be appropriate?
I'm using 100k. A 10k allowed too much current to flow to ground and the ATmega received a very weak RSSI voltage.
100k does not affect the overall RSSI signal very much but does give a true low when RSSI is disconnected.
Good luck with your build!
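To make the trade-off concrete, a back-of-the-envelope divider calculation (plain Python; the RSSI source impedance is an assumed figure, not a measurement of this receiver):
# The RSSI output impedance and the pulldown form a voltage divider.
R_SOURCE = 20_000   # ohms, assumed for illustration only
V_RSSI = 1.0        # volts coming out of the receiver

for r_pull in (10_000, 100_000):
    v_adc = V_RSSI * r_pull / (R_SOURCE + r_pull)
    print("{}k pulldown -> {:.2f} V at the ADC pin".format(r_pull // 1000, v_adc))

# 10k  -> ~0.33 V (signal heavily loaded)
# 100k -> ~0.83 V (close to the original), while the pin still reads a
# solid low if the receiver is disconnected.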
Awesome, 100k it is. Nice work w/ this project. I'm stoked to get mine running.
I have updated all the images to include the RSSI pulldown resistor.
|
2025-04-01T04:35:29.289854
| 2017-06-03T02:46:26
|
233339191
|
{
"authors": [
"codecov-io",
"freitagbr"
],
"license": "bsd-3-clause",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10750",
"repo": "shelljs/shelljs",
"url": "https://github.com/shelljs/shelljs/pull/730"
}
|
gharchive/pull-request
|
Add node 8 to CI
Fixes #729
Codecov Report
Merging #730 into master will not change coverage.
The diff coverage is n/a.
@@ Coverage Diff @@
## master #730 +/- ##
=======================================
Coverage 94.81% 94.81%
=======================================
Files 33 33
Lines 1254 1254
=======================================
Hits 1189 1189
Misses 65 65
Continue to review full report at Codecov.
Legend - Click here to learn more
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 15558cf...6b59e53. Read the comment docs.
|
2025-04-01T04:35:29.298865
| 2017-08-14T09:09:13
|
249973017
|
{
"authors": [
"Arman92",
"amir-abbas"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10751",
"repo": "shelomentsevd/telegramgo",
"url": "https://github.com/shelomentsevd/telegramgo/issues/3"
}
|
gharchive/issue
|
Reconnect bug
@shelomentsevd @Lord-Protector
When reconnection occurs, the example does not work.
This issue and some others have been fixed in L11R fork: https://github.com/L11R/mtproto
Simply replace the mtproto import path in your file until it is merged.
To correct my previous post: the issue persists, and I also encountered other weird ones!
After fixing some bugs, it gets a -404 error for every message sent to the server.
|
2025-04-01T04:35:29.337902
| 2024-08-03T19:48:16
|
2446565379
|
{
"authors": [
"Justinfan591",
"Raqkkkkk"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10752",
"repo": "sherwinokhowat/Group-Formation-System",
"url": "https://github.com/sherwinokhowat/Group-Formation-System/pull/120"
}
|
gharchive/pull-request
|
Add voice accessibility feature and other minor changes / features
Added voice accessibility feature
Overview
Uses the Google Cloud Text-to-Speech service to generate audio for voiceovers. Has a setting (default on) to toggle whether this feature is enabled. Currently it is implemented in most text fields and buttons (more can be added if needed).
Steps for enabling service
The key for the service is a json file (will send via discord), and the main class takes in an env variable called GOOGLE_APPLICATION_CREDENTIALS where the value is the location of the json file. The app should work even if the feature is not enabled or the env variable is not set (should show some warnings in console only).
The json file should NOT be committed or pushed to Github.
Implementation details
Service is provided through the TextToSpeechService class in the api package. Views can use IHoverVoiceSerivce (for voiceover when hovering over components) or IPlayVoiceService (for playing sounds directly) to set up voiceovers (both are under view.services). Note that for hovering, the setup for JTable is different from other components.
Added settings panel
Mainly for setting voice accessibility feature on or off, though could also be used for future settings.
Minor changes / features
Created config package for all the config files (the main reason so many files were changed)
Clean up some warnings (help addressed #118 partially)
A SafeCastCollectionServices for safely casting objects to collections of any classes (to remove warning)
Added @SuppressWarnings("FieldCanBeLocal") in many classes since I think it is more clear and organized to list all variables / components that are in the class at the top (also easier to edit them)
Minor changes to many ActionListener creation code (no change in functionality)
Minor changes to initialization of Collection variables (no change in functionality)
Make some errors (non Runtime ones) print the error message to console rather than print stack trace
Clean up unused imports
Add loading bar when starting up since it would take some time to initialize the sounds for voiceover.
Refactored panels for displaying tag to a new class TagPanel under view.components since it was used in multiple views
Added env files (.env.* and *.env) and secret folder (/secrets/) to .gitignore
Added special settings for fun features
Can be enabled or disabled in the special settings config
Currently has two settings (reverse voiceover and more challenging verification) that are implemented
I see that there is conflict that needs to be resolved
Should be resolved (pray no more conflicts plz)
|
2025-04-01T04:35:29.354528
| 2018-05-02T10:44:31
|
319502174
|
{
"authors": [
"juicycool92",
"shimat"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10753",
"repo": "shimat/opencvsharp",
"url": "https://github.com/shimat/opencvsharp/issues/489"
}
|
gharchive/issue
|
where is subspaceProject()?
Summary of your issue
Hello, I'm new to CV, so I might be taking a completely wrong approach.
Here's what I have done and what I want to do.
I tried to follow the face recognition tutorial from the official documentation.
It's for OpenCV, not OpenCvSharp, but I figured out OpenCvSharp has pretty much every method or macro from OpenCV, so I kept typing and converting for OpenCvSharp.
It was fine until I ran into subspaceProject() and subspaceReconstruct().
I tried searching every source file and Google, but no luck.
Can anyone please tell me where to find subspaceProject() and subspaceReconstruct()?
Thanks.
Environment
Vs2017 C# Wpf
What did you do when you faced the problem?
Trying to call them.
Example code:
Mat projection = subspaceProject()
Output:
not declared subspaceProject method
What did you intend to be?
trying to follow fisherface tutorial via here
I added LDA.SubspaceProject in #495. thanks.
|
2025-04-01T04:35:29.367000
| 2015-08-19T22:50:15
|
102017019
|
{
"authors": [
"msranade",
"shinnn"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10754",
"repo": "shinnn/broccoli-clean-css",
"url": "https://github.com/shinnn/broccoli-clean-css/pull/7"
}
|
gharchive/pull-request
|
Changes to incorporate latest version of broccoli-filter
This is done to take advantage of caching introduced in latest broccoli-filter.
:clap: Thanks for the great contribution, @msranade!
Which SemVer level should I update the npm package with, patch, minor or major? I haven't tracked the recent updates of broccoli-filter.
This change doesn't introduce any API changes so minor version bump should be fine.
|
2025-04-01T04:35:29.373891
| 2017-10-19T13:22:11
|
266840440
|
{
"authors": [
"jaredgalanis",
"rwwagner90"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10755",
"repo": "shipshapecode/ember-shepherd",
"url": "https://github.com/shipshapecode/ember-shepherd/pull/168"
}
|
gharchive/pull-request
|
Fix copy styles
Fixes the position of the highlightOverlay when copyStyles is true.
Looks great, thanks!
|
2025-04-01T04:35:29.379143
| 2023-09-01T07:35:36
|
1876909639
|
{
"authors": [
"adambkaplan",
"qu1queee"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10756",
"repo": "shipwright-io/community",
"url": "https://github.com/shipwright-io/community/issues/164"
}
|
gharchive/issue
|
September 4th, 2023 Community Meeting
Please add a topic in this thread and add a link to the GitHub issue associated with the topic.
Please make sure you give folks enough time to review/discuss the topic offline on GitHub before coming into the meeting
(optional) Paste the image of an animal 😸
I commented on the action for cleaning up nightly releases - keeping the past 3 months is a good start.
|
2025-04-01T04:35:29.382777
| 2024-02-16T10:03:24
|
2138259383
|
{
"authors": [
"adambkaplan",
"qu1queee"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10757",
"repo": "shipwright-io/community",
"url": "https://github.com/shipwright-io/community/issues/189"
}
|
gharchive/issue
|
February 19th, 2024 Community Meeting
Please add a topic in this thread and add a link to the GitHub issue associated with the topic.
Please make sure you give folks enough time to review/discuss the topic offline on GitHub before coming into the meeting
(optional) Paste the image of an animal 😸
Z-stream for build v0.12.0 - to fix CVE-2023-49569:
Feature request: https://github.com/shipwright-io/build/issues/1496
Backport request: https://github.com/shipwright-io/build/issues/1498
Question - should we draft a SHIP for this?
Meeting minutes:
Quick meeting today. Only the above items were discussed.
For https://github.com/shipwright-io/build/issues/1496 , we previously concluded that a SHIP is required, see https://github.com/shipwright-io/community/issues/85 .
|
2025-04-01T04:35:29.395042
| 2023-04-08T08:32:02
|
1659446362
|
{
"authors": [
"necmettin",
"shivammathur"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10758",
"repo": "shivammathur/homebrew-extensions",
"url": "https://github.com/shivammathur/homebrew-extensions/issues/2673"
}
|
gharchive/issue
|
<EMAIL_ADDRESS>does not create the so/ini files
Describe the bug
After brew install<EMAIL_ADDRESS>a /usr/local/etc/php/8.2/conf.d/20-redis.ini should be created in the system, but the post-install step did not complete successfully
PHP versions
8.2.4 from shivammathur/php
To Reproduce
brew tap shivammathur/php
brew tap shivammathur/extensions
brew install<EMAIL_ADDRESS>brew install<EMAIL_ADDRESS>
Expected behavior
php -m should list redis as an installed module
Screenshots/Logs
Additional context
Are you willing to submit a PR?
I'm not really proficient with installing a PHP module in MacOS (or Ruby for that matter - I couldn't even find the post-install step in the Ruby file).
Please run brew doctor and fix any issues it reports, and then try again.
Had already done that, forgot to mention. Just did it again, no problems there. Anything else I can do to find the issue?
@necmettin
ok.
Try running the postinstall step with debug now and provide me the logs
brew postinstall --debug<EMAIL_ADDRESS>
Hello again,
After the command you gave, although brew doctor was not reporting any errors, postinstall was reporting that it was unable to write, so I uninstalled Homebrew, removed all artifacts, and reinstalled Homebrew and<EMAIL_ADDRESS><EMAIL_ADDRESS>is now reported as installed when I run php -m. So it was a permission issue after all. Thank you very much.
|
2025-04-01T04:35:29.396822
| 2021-01-25T07:24:31
|
793113223
|
{
"authors": [
"shivammathur"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10759",
"repo": "shivammathur/homebrew-extensions",
"url": "https://github.com/shivammathur/homebrew-extensions/pull/193"
}
|
gharchive/pull-request
|
Update<EMAIL_ADDRESS>
Build<EMAIL_ADDRESS>
:beers: @BrewTestBot has triggered a merge.
|
2025-04-01T04:35:29.401384
| 2024-06-07T16:45:11
|
2340828802
|
{
"authors": [
"shivammathur"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10760",
"repo": "shivammathur/homebrew-php",
"url": "https://github.com/shivammathur/homebrew-php/pull/2839"
}
|
gharchive/pull-request
|
Fix method visibility issue for<EMAIL_ADDRESS>
Build<EMAIL_ADDRESS>
:beers: @BrewTestBot has triggered a merge.
|
2025-04-01T04:35:29.409444
| 2023-03-17T13:13:20
|
1629296720
|
{
"authors": [
"jasongill",
"localheinz",
"shivammathur"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10761",
"repo": "shivammathur/setup-php",
"url": "https://github.com/shivammathur/setup-php/issues/713"
}
|
gharchive/issue
|
Consider determining and enabling PHP extensions depending on requirements in composer.json
Describe the feature
Similar to #629, perhaps it would be useful to disable all PHP extensions, and determine and enable PHP extensions depending on requirements in composer.json.
I understand that this could be a bit problematic, because some could have dependencies on PHP extensions that are not directly or indirectly documented in composer.json, for example:
for running PHARs
required by PHARs
recommended by PHARs
Version
[x] I have checked releases, and the feature is missing in the latest patch version of v2.
Underlying issue
n/a
Describe alternatives
Alternatives include manually disabling and then enabling all required PHP extensions.
Additional context
n/a
Are you willing to submit a PR?
n/a
I'm open to a PR that adds extensions based on composer.json.
We can add an input like extensions-file that supports composer.json as an input along with any file with a list of extensions.
This can be combined with none in the extensions input to have the desired effect. But that can be dangerous as you mentioned removing all extensions might break phar tools that are used in the pipeline in case they are not in composer.json.
Handling that would be the responsibility of the user. The action does try to enable extensions that are required for the supported tools though.
Thank you for your feedback, @shivammathur - perhaps it's too much work and not worth the effort, given the complications.
We use Composer's built-in platform config option to force our local environments and development environments to match our production server version.
This makes it easy to grab the appropriate version from composer.json and pass it to setup-php:
- name: Get required PHP version
uses: sergeysova/jq-action@v2
id: php_version
with:
cmd: |
jq '.config.platform.php' composer.json -r
- name: Setup PHP runtime
uses: shivammathur/setup-php@v2
with:
php-version: "${{ steps.php_version.outputs.value }}"
Hope this helps someone else who is trying to find a way to force setup-php to match their configuration without having to keep their github actions yml up to date.
|
2025-04-01T04:35:29.411691
| 2015-04-02T00:18:17
|
65805096
|
{
"authors": [
"Bondifrench",
"shivkumarganesh"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10762",
"repo": "shivkumarganesh/InfoVizGeek",
"url": "https://github.com/shivkumarganesh/InfoVizGeek/issues/1"
}
|
gharchive/issue
|
Javascript vector graphics library
You should add this one to your list:
https://github.com/andreaferretti/paths-js
License is Apache 2.0 from what I can see
Hi Dominik,
I have included the library as mentioned by you. Let me know if you have other libraries to be added to the list.
Link to the list is as follows:-
https://github.com/shivkumarganesh/InfoVizGeek#other-useful-javascript-components
Regards,
Shiv
|
2025-04-01T04:35:29.451544
| 2018-03-04T14:58:41
|
302092225
|
{
"authors": [
"gf712",
"grg121",
"karlnapf",
"vigsterkr",
"yyanwcy521"
],
"license": "bsd-3-clause",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10763",
"repo": "shogun-toolbox/shogun",
"url": "https://github.com/shogun-toolbox/shogun/issues/4197"
}
|
gharchive/issue
|
install shogun fail.
the error:
src/interfaces/python/CMakeFiles/_interface_python.dir/build.make:73: recipe for target 'src/interfaces/python/CMakeFiles/_interface_python.dir/shogunPYTHON_wrap.cxx.o' failed
make[2]: *** [src/interfaces/python/CMakeFiles/_interface_python.dir/shogunPYTHON_wrap.cxx.o] Error 1
CMakeFiles/Makefile2:465: recipe for target 'src/interfaces/python/CMakeFiles/_interface_python.dir/all' failed
make[1]: *** [src/interfaces/python/CMakeFiles/_interface_python.dir/all] Error 2
Makefile:149: recipe for target 'all' failed
make: *** [all] Error 2
I followed these steps:
1.mkdir build
2.cd build
3.cmake -DINTERFACE_PYTHON=ON ..
Then the error above happened.
What should I do? Thank you.
I installed on Ubuntu 16.04.
We would either need a full error message (use a github gist), or you can install the binary version of shogun if you have trouble compiling it from scratch. There is both a ppa and a conda package
Thank you very much!
I reinstalled my system and only installed the PPA, then followed these steps:
cmake -DINTERFACE_PYTHON=ON ..
make
sudo make install
The progress bar reached 100%, like this:
[100%] Linking CXX executable converter/cpp-converter-independent_component_analysis_fast
[100%] Built target cpp-converter-independent_component_analysis_fast
Scanning dependencies of target cpp-binary-averaged_perceptron
[100%] Building CXX object examples/meta/cpp/CMakeFiles/cpp-binary-averaged_perceptron.dir/binary/averaged_perceptron.cpp.o
[100%] Linking CXX executable binary/cpp-binary-averaged_perceptron
[100%] Built target cpp-binary-averaged_perceptron
Scanning dependencies of target cpp-statistical_testing-quadratic_time_maximum_mean_discrepancy
[100%] Building CXX object examples/meta/cpp/CMakeFiles/cpp-statistical_testing-quadratic_time_maximum_mean_discrepancy.dir/statistical_testing/quadratic_time_maximum_mean_discrepancy.cpp.o
[100%] Linking CXX executable statistical_testing/cpp-statistical_testing-quadratic_time_maximum_mean_discrepancy
[100%] Built target cpp-statistical_testing-quadratic_time_maximum_mean_discrepancy
[100%] Compiled generated cpp examples
[100%] Built target build_cpp_meta_examples
But when I import shogun in Python,there something wrong:
the error :
import shogun
Traceback (most recent call last):
File "", line 1, in
File "/usr/local/lib/python2.7/dist-packages/shogun.py", line 20, in
_shogun = swig_import_helper()
File "/usr/local/lib/python2.7/dist-packages/shogun.py", line 19, in swig_import_helper
return importlib.import_module('_shogun')
File "/usr/lib/python2.7/importlib/init.py", line 37, in import_module
import(name)
ImportError: /usr/local/lib/python2.7/dist-packages/_shogun.so: undefined symbol: ZNK6shogun9CSGObject6equalsEPKS0
I don't know what to do. And I just want to import some modules to run some code, like:
from shogun.Features import *
from shogun.Kernel import *
from shogun.Classifier import *
from shogun.Evaluation import *
from modshogun import StringCharFeatures, RAWBYTE
from modshogun import MulticlassLabels
from shogun.Kernel import SSKStringKernel
When I install shogun successfully, does the modshogun module exist?
Are you using python 2 or python 3? In Ubuntu 17 I had that problem: when installing shogun the way you do, I could import it in python 2 but not in python 3... I had to execute cmake in this way:
cmake -DINTERFACE_PYTHON=ON -DBUILD_META_EXAMPLES=ON -DENABLE_TESTING=ON -DCMAKE_BUILD_TYPE=Debug -DBUILD_DASHBOARD_REPORTS=ON -DPYTHON_INCLUDE_DIR=/usr/include/python3.6m -DPYTHON_EXECUTABLE:FILEPATH=/usr/bin/python3.6 -DPYTHON_PACKAGES_PATH=/usr/local/lib/python3.6/dist-packages ..
to get shogun working in python 3
Some notes in my blog
:)
ps: -DBUILD_META_EXAMPLES, -DBUILD_DASHBOARD_REPORTS=ON, -DCMAKE_BUILD_TYPE=Debug and -DENABLE_TESTING=ON are not necessary.
@yyanwcy521 if you just want to use shogun from python, why don't you use either the ppa package or the conda package?
@yyanwcy521 on the other hand can you paste the content of the build/src/shogun/lib/versionstring.h file here
seems like we will never know what happened here!
|
2025-04-01T04:35:29.453590
| 2015-03-02T04:10:55
|
59436709
|
{
"authors": [
"iglesias",
"yingryic"
],
"license": "bsd-3-clause",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10764",
"repo": "shogun-toolbox/shogun",
"url": "https://github.com/shogun-toolbox/shogun/pull/2754"
}
|
gharchive/pull-request
|
SGSparseMatrix unit test
I have added two unit tests for SGSparseMatrix, for the from_dense and get_transpose member functions. I also changed access_by_index to test a non-square matrix.
The tests follow the SGSparseMatrix access pattern sparseMatrix(colIndex, rowIndex) rather than the dense matrix pattern denseMatrix(rowIndex, colIndex).
I may add more tests later; feel free to let me know if further modification is needed.
Also, I think it is easier to think about sparse matrices in terms of features and vectors instead of rows and columns.
|
2025-04-01T04:35:29.485729
| 2022-03-01T14:55:17
|
1155437403
|
{
"authors": [
"AndreasA"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10766",
"repo": "shopware/platform",
"url": "https://github.com/shopware/platform/issues/2366"
}
|
gharchive/issue
|
SystemUpdateFinish command should also support/have the flag --skip-asset-build as plugin:install / update do
PHP Version
8.1
Shopware Version
<IP_ADDRESS>
Expected behaviour
system:update:finish command should also support the flag --skip-asset-build and bypass assets compile in the PostUpdateFinish event using the corresponding state.
This would be nice in automated deployments as the assets build is very often triggered manually then anyway.
Actual behaviour
No flag --skip-asset-build.
How to reproduce
See expected behaviour
Created new PR after accidentally deleting my forked repository too early: https://github.com/shopware/platform/pull/2515
|
2025-04-01T04:35:29.494932
| 2021-01-13T12:42:16
|
785072241
|
{
"authors": [
"J-Rahe",
"acris-cp",
"philipgatzka",
"shopwareBot"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10767",
"repo": "shopware/platform",
"url": "https://github.com/shopware/platform/pull/1593"
}
|
gharchive/pull-request
|
The products data in the cart are loaded again from the database on every request which is made in the shop
1. Why is this change necessary?
The products data in the cart are loaded again from the database on every request which is made in the shop (e.g. search, open a simple shop page, open a product page, ...)
2. What does this change do, exactly?
Optimisation of the function isComplete in the file src/core/Content/Product/Cart/ProductCartProcessor.php and some other things which have to be done according to this.
3. Describe each step to reproduce the issue or behaviour.
The problem is mainly due to the function isComplete in src/core/Content/Product/Cart/ProductCartProcessor.php.
There it asks whether the description of a product line item is null. However, this is ALWAYS null, as it is never set for a product line item!
This has the consequence that Shopware reloads all products of the shopping basket in the line $products = $this->productGateway->get($ids, $context); in the method collect in the file mentioned above with EVERY request (i.e. also with the search, in the account area, every shop page, etc.).
This is exactly the opposite of what the new shopping cart was actually built for. An improvement brings a performance boost to every request made.
4. Please link to the relevant issues (if any).
https://issues.shopware.com/issues/NEXT-13250
5. Checklist
[ ] I have written tests and verified that they fail without my change
[x] I have squashed any insignificant commits
[x] I have created a changelog file with all necessary information about my changes
[ ] I have written or adjusted the documentation according to my changes
[ ] This change has comments for package types, values, functions, and non-obvious lines of code
[x] I have read the contribution requirements and fulfil them.
Hello,
thank you for creating this pull request.
I have opened an issue on our Issue Tracker for you. See the issue link: https://issues.shopware.com/issues/NEXT-13267
Please use this issue to track the state of your pull request.
Hey @acris-cp, could you please take a look at the failing test and the csfixer, and add some tests.
Hello, we're closing this pull request due to inactivity. If this change is still important to you, feel free to create a new pull request.
For more information about our contribution guidelines, see https://docs.shopware.com/en/shopware-platform-dev-en/contribution/contribution-guideline.
|
2025-04-01T04:35:29.512013
| 2020-10-27T08:57:21
|
730249648
|
{
"authors": [
"shouldibeworried",
"t-animal"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10768",
"repo": "shouldibeworried/covid19",
"url": "https://github.com/shouldibeworried/covid19/issues/2"
}
|
gharchive/issue
|
Do you have RKI Corona county data?
I'm developing https://covid-karte.de and want to implement a timelapse-feature like you have, but not just with the total infections, but all the available data from that time. Do you have the historical RKI county data by any chance?
Hi @t-animal, I haven't actually added the timelapse feature to my website yet, so if you make it available via your website, that would be awesome! I'm using RKI data from here: https://opendata.arcgis.com/datasets/dd4580c810204019a7b8eb3e0b329dd6_0.csv
It's broken down by individual reports from the health authorities to RKI, so it's super granular. I used pandas to aggregate it like so:
import numpy as np
import pandas as pd
# source_file: path to the RKI CSV downloaded from the link above
df = pd.read_csv(source_file, parse_dates=["Meldedatum"])
pv = pd.pivot_table(df, values="AnzahlFall",
                    index=["Meldedatum"],
                    columns=["IdLandkreis"], aggfunc=np.sum, fill_value=0)
Yes, I know and use this data, unfortunately you can't reliably reconstruct all old county data from it.
|
2025-04-01T04:35:29.516236
| 2021-03-05T04:47:20
|
822740937
|
{
"authors": [
"shounakmulay",
"syleishere"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10769",
"repo": "shounakmulay/Telephony",
"url": "https://github.com/shounakmulay/Telephony/issues/56"
}
|
gharchive/issue
|
Fails on latest version of flutter/dart
Because no versions of path_provider match >2.0.1 <3.0.0 and path_provider 2.0.1 depends on path_provider_platform_interface ^2.0.0, path_provider ^2.0.1 requires path_provider_platform_interface ^2.0.0.
And because path_provider_platform_interface >=2.0.0 depends on platform ^3.0.0 and telephony >=0.0.8 depends on platform ^2.2.1, path_provider ^2.0.1 is incompatible with telephony >=0.0.8.
So, because musicplayer depends on both telephony ^0.0.8 and path_provider ^2.0.1, version solving failed.
pub get failed (1; So, because musicplayer depends on both telephony ^0.0.8 and path_provider ^2.0.1, version solving failed.)
This is a dependency conflict.
path_provider needs platform 3.0.0 which is a null safe version.
telephony needs platform 2.0.1 which is not null safe.
Looks like you migrated some dependencies to null safe version which is causing the conflict.
Telephony as of now is not null safe.
@syleishere I have published a new version of telephony that is null safe.
Try using version 0.1.0. You should not have the problem anymore.
|
2025-04-01T04:35:29.521438
| 2023-03-31T13:23:22
|
1649351814
|
{
"authors": [
"jackiewangCV",
"zhangjiewu"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10770",
"repo": "showlab/Tune-A-Video",
"url": "https://github.com/showlab/Tune-A-Video/issues/47"
}
|
gharchive/issue
|
can this repo work on windows cpu machine?
I am trying to run this repo on my Windows 10 CPU machine with pre-trained weights.
Can this repo work on windows CPU machine?
might work if you just want to generate a few frames (less than 5, i guess), but it will certainly take a very long time to run without gpu. if you do not have a gpu on your local machine, you can use free gpu on colab.
Hi, Jay
Thanks for your email
On Sat, Apr 1, 2023 at 1:01 PM Jay Z. Wu @.***> wrote:
might work if you just want to generate a few frames (less than 5, i
guess), but it will certainly take a very long time to run without gpu. if
you do not have a gpu on your local machine, you can use free gpu on colab. [image:
Open In Colab]
https://colab.research.google.com/github/showlab/Tune-A-Video/blob/main/notebooks/Tune-A-Video.ipynb
|
2025-04-01T04:35:29.528894
| 2015-12-02T18:53:58
|
120011181
|
{
"authors": [
"SenorSamuel",
"shpakovski",
"zoul"
],
"license": "bsd-2-clause",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10771",
"repo": "shpakovski/MASShortcut",
"url": "https://github.com/shpakovski/MASShortcut/issues/79"
}
|
gharchive/issue
|
Why don’t we allow shortcuts with Shift and Alt modifiers?
Recently I have found that MASShortcut doesn’t record shortcuts such as Shift-Alt-U. The reason is in the validation logic in MASShortcutValidator. I can understand why we disallow Alt-something, but shouldn’t Shift-Alt-something work? @shpakovski, do you have an idea why Shift-Alt-something is not considered valid? Thanks!
@zoul I did not want to encourage these shortcuts because Shift-Option works just like Option. For example, Shift-Option-K = .
Makes sense, thank you! (I will document the behaviour so that people know the thinking behind it.) I noticed the allowAny… switch, but I wasn’t sure what the exact deal was, what were the potential drawbacks, so I didn’t touch it. I’ll leave the issue open as a reminder to update the docs.
Cool, thanks!
that's really cool
|
2025-04-01T04:35:29.529852
| 2022-10-05T07:25:18
|
1397356273
|
{
"authors": [
"dhruvdabhi101"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10772",
"repo": "shravan2524/hacktoberfest",
"url": "https://github.com/shravan2524/hacktoberfest/pull/19"
}
|
gharchive/pull-request
|
added prime number question
Solving issue #4
please check code and merge.
@shravan2524 hello brother. Please check PR and merge it.
|
2025-04-01T04:35:29.561598
| 2014-05-18T23:50:31
|
33764045
|
{
"authors": [
"buffonomics",
"dfyx",
"dregules",
"jhigman"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10773",
"repo": "shuky19/sublime_debugger",
"url": "https://github.com/shuky19/sublime_debugger/issues/23"
}
|
gharchive/issue
|
Ruby version: 2.1.0 is not supported.
As the title says. I don't see why the plugin shouldn't at least try to let me debug ruby 2.1.0. There aren't that many changes and byebug works just fine.
OKay I'm getting
Stopping...
Ruby version: 2.2.0 is not supported.
Not sure how to use the ruby discoverer file exactly.
This is my ruby version:
$ ruby -v
ruby 2.2.0p0 (2014-12-25 revision 49005) [x86_64-darwin14]
So I change the ruby_version_discoverer.rb file to:
puts 2.2.0
Now I get
Stopping...
Ruby version: is not supported.
It doesn't even acknowledge the version i entered in the report...
I had to add my Ruby version (2.1.1) to the list of supported versions in the "Ruby Debugger.sublime-settings" file, so it looked like this:
"supported_ruby_versions": ["2.1.1", "2.1.0", "2.0.0", "1.9.3"]
After that the debugger started ok.
@jhigman - Also did that for 2.2.1 and I don't seem to get it to work.
For all I get is this below, every time I Debug:
Connecting...
Connected
/Users/diegoregules/.rvm/gems/ruby-2.2.1/gems/byebug-5.0.0/lib/byebug/interface.rb:67:in `write': closed stream (IOError)
Last exception: #<IOError: closed stream>
from /Users/diegoregules/.rvm/gems/ruby-2.2.1/gems/byebug-5.0.0/lib/byebug/interface.rb:67:in `puts'
Backtrace:
from /Users/diegoregules/.rvm/gems/ruby-2.2.1/gems/byebug-5.0.0/lib/byebug/interface.rb:67:in `puts'
["/Users/diegoregules/.rvm/gems/ruby-2.2.1/gems/byebug-5.0.0/lib/byebug/interface.rb:67:in `write'", "/Users/diegoregules/.rvm/gems/ruby-2.2.1/gems/byebug-5.0.0/lib/byebug/interface.rb:67:in `puts'", "/Users/diegoregules/.rvm/gems/ruby-2.2.1/gems/byebug-5.0.0/lib/byebug/interface.rb:67:in `puts'", "/Users/diegoregules/.rvm/gems/ruby-2.2.1/gems/byebug-5.0.0/lib/byebug/commands/list.rb:115:in `display_lines'", "/Users/diegoregules/.rvm/gems/ruby-2.2.1/gems/byebug-5.0.0/lib/byebug/commands/list.rb:26:in `execute'", "/Users/diegoregules/.rvm/gems/ruby-2.2.1/gems/byebug-5.0.0/lib/byebug/processors/command_processor.rb:79:in `block in always_run'", "/Users/diegoregules/.rvm/gems/ruby-2.2.1/gems/byebug-5.0.0/lib/byebug/processors/command_processor.rb:79:in `each'", "/Users/diegoregules/.rvm/gems/ruby-2.2.1/gems/byebug-5.0.0/lib/byebug/processors/command_processor.rb:79:in `always_run'", "/Users/diegoregules/.rvm/gems/ruby-2.2.1/gems/byebug-5.0.0/lib/byebug/processors/command_processor.rb:90:in `process_commands'", "/Users/diegoregules/.rvm/gems/ruby-2.2.1/gems/byebug-5.0.0/lib/byebug/processors/command_processor.rb:52:in `at_return'", "/Users/diegoregules/.rvm/gems/ruby-2.2.1/gems/byebug-5.0.0/lib/byebug/context.rb:94:in `at_return'", "/Users/diegoregules/Library/Application Support/Sublime Text 3/Packages/Ruby Debugger/sublime_debug_require.rb:14:in `block in <top (required)>'", "/Users/diegoregules/Library/Application Support/Sublime Text 3/Packages/Ruby Debugger/sublime_debug_require.rb:37:in `call'", "/Users/diegoregules/Library/Application Support/Sublime Text 3/Packages/Ruby Debugger/sublime_debug_require.rb:37:in `<top (required)>'", "/Users/diegoregules/.rvm/rubies/ruby-2.2.1/lib/ruby/site_ruby/2.2.0/rubygems/core_ext/kernel_require.rb:54:in `require'", "/Users/diegoregules/.rvm/rubies/ruby-2.2.1/lib/ruby/site_ruby/2.2.0/rubygems/core_ext/kernel_require.rb:54:in `require'"]
from /Users/diegoregules/.rvm/gems/ruby-2.2.1/gems/byebug-5.0.0/lib/byebug/commands/list.rb:115:in `display_lines'
from /Users/diegoregules/.rvm/gems/ruby-2.2.1/gems/byebug-5.0.0/lib/byebug/commands/list.rb:26:in `execute'
from /Users/diegoregules/.rvm/gems/ruby-2.2.1/gems/byebug-5.0.0/lib/byebug/processors/command_processor.rb:79:in `block in always_run'
from /Users/diegoregules/.rvm/gems/ruby-2.2.1/gems/byebug-5.0.0/lib/byebug/processors/command_processor.rb:79:in `each'
from /Users/diegoregules/.rvm/gems/ruby-2.2.1/gems/byebug-5.0.0/lib/byebug/processors/command_processor.rb:79:in `always_run'
from /Users/diegoregules/.rvm/gems/ruby-2.2.1/gems/byebug-5.0.0/lib/byebug/processors/command_processor.rb:90:in `process_commands'
from /Users/diegoregules/.rvm/gems/ruby-2.2.1/gems/byebug-5.0.0/lib/byebug/processors/command_processor.rb:52:in `at_return'
from /Users/diegoregules/.rvm/gems/ruby-2.2.1/gems/byebug-5.0.0/lib/byebug/context.rb:94:in `at_return'
from /Users/diegoregules/Library/Application Support/Sublime Text 3/Packages/Ruby Debugger/sublime_debug_require.rb:14:in `block in <top (required)>'
from /Users/diegoregules/Library/Application Support/Sublime Text 3/Packages/Ruby Debugger/sublime_debug_require.rb:37:in `call'
from /Users/diegoregules/Library/Application Support/Sublime Text 3/Packages/Ruby Debugger/sublime_debug_require.rb:37:in `<top (required)>'
from /Users/diegoregules/.rvm/rubies/ruby-2.2.1/lib/ruby/site_ruby/2.2.0/rubygems/core_ext/kernel_require.rb:54:in `require'
from /Users/diegoregules/.rvm/rubies/ruby-2.2.1/lib/ruby/site_ruby/2.2.0/rubygems/core_ext/kernel_require.rb:54:in `require'
Debugger stopped
Does it happen to you too? @shuky19 any insight on this greatly appreciated. Thank you in advance!
|
2025-04-01T04:35:29.571500
| 2017-10-05T12:47:54
|
263112415
|
{
"authors": [
"jnorman-nyc",
"shusain93",
"vrchlabak"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10774",
"repo": "shusain93/Andromeda-iMessage",
"url": "https://github.com/shusain93/Andromeda-iMessage/issues/13"
}
|
gharchive/issue
|
Security
What’s your take on the security of passing the message off from iMessage (end to end encryption) to Andromeda (no encryption?)? Thanks and great project.
This project is 100% security through obscurity. I hope soon to upgrade to SSL sockets and server but it's a time issue and will require a rewrite. The project is small enough that unless someone specifically wants to attack YOU it's safe.
Also is this a question or a complaint? Haha.
Oh it was a total question. I’m not familiar with how the iMessage would be handed off. Thanks!
Yeah, it's on the roadmap. I'm fully aware that unencrypted iMessage isn't great but this was really a hack.
Huh. I wonder if you could use another messaging app with encryption (like Signal) to hand off the iMessage to it? (I am obviously not a programmer so thank you for humoring me).
Hi:
Have you tested the server with MacOS High Sierra? Does it still function like Sierra?
Please let me know.
Thanks!
Please create a new issue next time. Do not pick a random issue. No, I have not checked. If it's broken let me know.
|
2025-04-01T04:35:29.581168
| 2020-04-09T16:16:49
|
597396017
|
{
"authors": [
"3v1n0",
"shuveb"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10775",
"repo": "shuveb/io_uring-by-example",
"url": "https://github.com/shuveb/io_uring-by-example/issues/4"
}
|
gharchive/issue
|
Polled IO examples
Very nice work here, would be nice to have some examples also when using polled IO
That and much more! Please see a new guide I released a while back: https://unixism.net/loti/
|
2025-04-01T04:35:29.643797
| 2023-07-28T10:33:11
|
1826187832
|
{
"authors": [
"dominikgronkiewicz",
"shyamtawli"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10776",
"repo": "shyamtawli/devFind",
"url": "https://github.com/shyamtawli/devFind/pull/330"
}
|
gharchive/pull-request
|
add: Dominik Gronkiewicz
Description
[Include a brief description of the changes you've made]
Related Issues
[Include any related issues or pull requests that this PR addresses or is related to. Use GitHub's shorthand syntax to link to them, like #1234.]
Changes Proposed
[Describe the changes you've made in detail. Be specific and include any relevant code snippets.]
Checklist
[ ] I have read and followed the Contribution Guidelines.
[ ] All new and existing tests passed.
[ ] I have updated the documentation to reflect the changes I've made.
[ ] My code follows the code style of this project.
[ ] The title of my pull request is a short description of the requested changes.
Screenshots
Note to reviewers
@dominikgronkiewicz Thank you so much for your contribution! Your efforts are greatly appreciated and will go a long way in improving our project. Please feel free to share it with others and Star the repo to help us grow even further!
|
2025-04-01T04:35:29.658439
| 2022-08-30T09:01:35
|
1355421231
|
{
"authors": [
"gattia",
"zexinyang"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10777",
"repo": "siavashk/pycpd",
"url": "https://github.com/siavashk/pycpd/issues/72"
}
|
gharchive/issue
|
[JOSS Review] Minor issues
Hi @gattia and @siavashk,
This is part of the JOSS review process. I found two minor issues after testing your PyCPD package. Could you please address them?
Did you mean "!="?
if R is not None and (R.ndim is not 2 or R.shape[0] is not self.D or R.shape[1] is not self.D or not is_positive_semi_definite(R)):
/home/geo3d/Zexin/project/pycpd/pycpd/rigid_registration.py:49: SyntaxWarning: "is not" with a literal. Did you mean "!="?
if t is not None and (t.ndim is not 2 or t.shape[0] is not 1 or t.shape[1] is not self.D):
/home/geo3d/Zexin/project/pycpd/pycpd/rigid_registration.py:49: SyntaxWarning: "is not" with a literal. Did you mean "!="?
if t is not None and (t.ndim is not 2 or t.shape[0] is not 1 or t.shape[1] is not self.D):
/home/geo3d/Zexin/project/pycpd/pycpd/affine_registration.py:30: SyntaxWarning: "is not" with a literal. Did you mean "!="?
if B is not None and (B.ndim is not 2 or B.shape[0] is not self.D or B.shape[1] is not self.D or not is_positive_semi_definite(B)):
/home/geo3d/Zexin/project/pycpd/pycpd/affine_registration.py:34: SyntaxWarning: "is not" with a literal. Did you mean "!="?
if t is not None and (t.ndim is not 2 or t.shape[0] is not 1 or t.shape[1] is not self.D):
/home/geo3d/Zexin/project/pycpd/pycpd/affine_registration.py:34: SyntaxWarning: "is not" with a literal. Did you mean "!="?
if t is not None and (t.ndim is not 2 or t.shape[0] is not 1 or t.shape[1] is not self.D):
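For context on why the warning matters, a minimal illustration (plain Python, not pycpd code): is/is not compare object identity, while ==/!= compare values, so literal checks like the ones above can behave differently from what is intended:
a = 1000
b = int("1000")
print(a == b)  # True: equal values
print(a is b)  # False in CPython: two distinct int objects, identity is not equality
# The intended checks are value comparisons, e.g.:
# if t is not None and (t.ndim != 2 or t.shape[0] != 1 or t.shape[1] != self.D): ...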
@zexinyang thanks so much for the review, and for looking into this!
I have mostly completed all components of the review and should be submitting my response shortly.
For the errors here, I tried to re-create the errors and couldn't. In the end, it looks like you might have an old version of pycpd installed because the lines of code that have an error don't match the current code. Specifically, there should be an additional if/else statement in the def transform_point_cloud function that tests if you are trying to transform a low-rank version or not.
Could you try re-pulling, installing, and then running the examples/tests?
# activate your environment
cd /home/geo3d/Zexin/project/pycpd
git pull origin master
pip install .
make dev # optionally install pytest using make dev if you haven't already.
make test # run the tests.
python examples/fish_deformable_3D_register_with_subset_of_points.py
@gattia Thank you for addressing the issue. Now it works perfectly ;)
|
2025-04-01T04:35:29.665983
| 2023-08-03T09:56:26
|
1834718749
|
{
"authors": [
"shyakadavis",
"sibiraj-s"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10778",
"repo": "sibiraj-s/svelte-tiptap",
"url": "https://github.com/sibiraj-s/svelte-tiptap/issues/29"
}
|
gharchive/issue
|
Help with the isActive reactivity
Hi.
First and foremost, thank you so much for the project. It has been a great joy using it.
That said, I ran into some trouble trying to create the style toggle items.
From the examples, you create individual functions, say:
const toggleHeading = (level: 1 | 2) => {
return () => {
$editor.chain().focus().toggleHeading({ level }).run();
};
};
and then create a button to toggle:
<button
type="button"
class={cx('hover:bg-black hover:text-white w-7 h-7 rounded', {
'bg-black text-white': isActive('heading', { level: 1 })
})}
on:click={toggleHeading(1)}
>
H1
</button>
Now, given that I want to create several menu items, I don't want the trouble of repeating the above snippet for every item.
So, I created an array as such:
const menuItems = [
{
name: 'heading 1',
command: toggleHeading(1),
content: 'Heading 1',
icon: Heading1,
active: () => isActive('heading', { level: 1 })
},
{
name: 'heading 2',
command: toggleHeading(2),
content: 'Heading 2',
icon: Heading2,
active: () => isActive('heading', { level: 2 })
},
{
name: 'heading 3',
command: toggleHeading(3),
content: 'Heading 3',
icon: Heading3,
active: () => isActive('heading', { level: 3 })
},
{
name: 'heading 4',
command: toggleHeading(4),
content: 'Heading 4',
icon: Heading4,
active: () => isActive('heading', { level: 4 })
},
{
name: 'heading 5',
command: toggleHeading(5),
content: 'Heading 5',
icon: Heading5,
active: () => isActive('heading', { level: 5 })
},
{
name: 'heading 6',
command: toggleHeading(6),
content: 'Heading 6',
icon: Heading6,
active: () => isActive('heading', { level: 6 })
},
{
name: 'bold',
command: toggleBold,
content: 'Bold',
icon: Bold,
active: () => isActive('bold')
},
{
name: 'italic',
command: toggleItalic,
content: 'Italic',
icon: Italic,
active: () => isActive('italic')
},
{
name: 'code',
command: toggleCodeBlock,
content: 'Code',
icon: Code,
active: () => isActive('codeBlock')
},
{
name: 'strike',
command: toggleStrike,
content: 'Strike',
icon: Strikethrough,
active: () => isActive('strike')
},
{
name: 'bulletList',
command: toggleBulletList,
content: 'Bullet List',
icon: List,
active: () => isActive('bulletList')
},
{
name: 'orderedList',
command: toggleOrderedList,
content: 'Ordered List',
icon: ListOrdered,
active: () => isActive('orderedList')
},
{
name: 'paragraph',
command: setParagraph,
content: 'Paragraph',
icon: Pilcrow,
active: () => isActive('paragraph')
}
];
so that I can iterate in the markup like so:
{#if editor && editable}
<TooltipProvider>
<ul class="flex p-2 border-2 border-b-0 rounded-t-md">
{#each menuItems as item (item.name)}
<li>
<Tooltip>
<TooltipTrigger>
<button
class={cn(
'hover:bg-black hover:text-white w-7 h-7 rounded grid place-items-center',
{
'bg-black text-white': item.active()
}
)}
on:click={item.command}
>
<svelte:component this={item.icon} class="w-4 h-4" />
<span class="sr-only">{item.content}</span>
</button>
</TooltipTrigger>
<TooltipContent>
<p>{item.content}</p>
</TooltipContent>
</Tooltip>
</li>
{/each}
</ul>
</TooltipProvider>
{/if}
The problem I ran into, is that when doing it like that, the isActive store doesn't seem to work. What I mean is:
if, say, H1, and B are active, then the background doesn't register.
Also, the P that usually gets active whenever you jump to a new line, doesn't seem to register as well.
Any help is greatly appreciated.
Here is a link to a detailed Gist
Thanks.
I am not sure, if this is an issue with the package. Also, thanks for the detailed code, And I don't have lot of spare time ATM, It would be greatly appreciated if you could move this code to a stackblitz and provide a working example so it would help me assist faster.
You can include the part (for loop) version that you are trying to get working. No need to included the one that is commented out in gist.
Hey, @sibiraj-s. Here is the StackBlitz link.
Please let me know if something is wrong or the like.
Thanks.
The stackblitz doesn't load for some reason.
|
2025-04-01T04:35:29.667214
| 2016-02-23T05:26:24
|
135639086
|
{
"authors": [
"xhub"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10779",
"repo": "siconos/siconos",
"url": "https://github.com/siconos/siconos/issues/12"
}
|
gharchive/issue
|
[swig] NumericsMatrix structures and functions wrapped in the kernel
It looks like NumericsMatrix.h and maybe (another header) is wrapped in the kernel. This is already done in numerics and should therefore be removed.
Should be fixed now
|
2025-04-01T04:35:29.668246
| 2016-03-12T23:21:16
|
140426109
|
{
"authors": [
"xhub"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10780",
"repo": "siconos/siconos",
"url": "https://github.com/siconos/siconos/issues/21"
}
|
gharchive/issue
|
[CI] add pkg to ci_config
For some configs, a particular package must be installed, otherwise the build will fail at the configure phase. The two have to be synchronized. It would be better if both pieces of information were in the same data structure.
Thanks for writing this documentation. I think we should put that in the CI/README. I'm closing this issue because I don't think it is relevant anymore
|
2025-04-01T04:35:29.670755
| 2019-06-20T13:49:44
|
458670493
|
{
"authors": [
"bremond",
"vacary"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10781",
"repo": "siconos/siconos",
"url": "https://github.com/siconos/siconos/issues/296"
}
|
gharchive/issue
|
Other kind of matrices for MassMatrix in BouncingBallTS
If one tries to build a banded matrix in the BouncingBall:
SP::SiconosMatrix Mass(new SimpleMatrix(nDof, nDof, Siconos::BANDED));
(which would be something as a tremendous optimization...),
this fails.
The problem is in SimpleMatrixSolvers.cpp:50
void SimpleMatrix::PLUFactorizationInPlace()
where the case Siconos::BANDED is not checked, only the dense and sparse cases are checked.
Moreover, in the sparse case only the symmetric case is treated
int info = cholesky_decompose(*sparse());
// \warning: VA 24/11/2010: work only for symmetric matrices. Should be replaced by efficient implementatation (e.g. mumps )
More generally, we should improve the simpleMatrix to provide more linear solvers.
The #319 PR should help take this into account.
|
2025-04-01T04:35:29.674960
| 2021-04-28T00:20:51
|
869373424
|
{
"authors": [
"Luis-GA",
"dbluhm"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10782",
"repo": "sicpa-dlab/aries-cloudagent-python",
"url": "https://github.com/sicpa-dlab/aries-cloudagent-python/issues/78"
}
|
gharchive/issue
|
Task: Relative DID URL support
In DID Documents, it is valid for a DID URL to omit the DID when referencing another part of the same document.
For example:
{
"@context": "...",
"id": "did:example:123",
"verificationMethod": [
{
"id": "#key-1",
...
}
],
"authentication": [
"#key-1"
]
}
With the last version of pydid. replicating your example:
{
"@context": "https://w3id.org/did/v1",
"id": "did:example:123",
"verificationMethod": [
{
"id": "#key-1",
"type": "EcdsaSecp256k1VerificationKey2019",
"publicKeyBase58": "020a5"
}
],
"authentication": [
"#key-1"
]
}
It founds error due:
DID could not be parsed from URL #key-1 for dictionary value @ data['verificationMethod'][0]['id']
pydid.doc.verification_method.VerificationMethodValidationError: Failed to validate verification method: expected a dictionary (However if the id is complete as "did:example:123#key-1", it is validated properly)
The clearest example that pydid validates properly:
{
"@context": "https://w3id.org/did/v1",
"id": "did:example:123",
"verificationMethod": [
{
"id": "did:key:test#key-1",
"type": "EcdsaSecp256k1VerificationKey2019",
"publicKeyBase58": "020a5"
}
],
"authentication": [
"did:key:test#key-1"
]
}
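As a workaround on the caller side, one hedged sketch (plain dict handling, not pydid's API) is to expand relative references against the document id before validation:
def absolutize(ref, did):
    # Turn a relative DID URL such as "#key-1" into "did:example:123#key-1"
    return did + ref if ref.startswith("#") else ref

doc = {
    "@context": "https://w3id.org/did/v1",
    "id": "did:example:123",
    "verificationMethod": [
        {"id": "#key-1", "type": "EcdsaSecp256k1VerificationKey2019", "publicKeyBase58": "020a5"}
    ],
    "authentication": ["#key-1"],
}

did = doc["id"]
for vm in doc["verificationMethod"]:
    vm["id"] = absolutize(vm["id"], did)
doc["authentication"] = [absolutize(ref, did) for ref in doc["authentication"]]
# doc now uses absolute DID URLs, which the current validation accepts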
I am not clear whether this is a task for aries-cloudagent-python or for pydid.
|
2025-04-01T04:35:29.684791
| 2023-03-08T20:33:16
|
1615895453
|
{
"authors": [
"FabioPinheiro",
"dkulic",
"yvgny"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10783",
"repo": "sicpa-dlab/didcomm-python",
"url": "https://github.com/sicpa-dlab/didcomm-python/pull/93"
}
|
gharchive/pull-request
|
DidCommV2 doesn't specify 'typ'
DidCommV2 doesn't specify 'typ'
This makes it incompatible with other implementations
I didn't test anything so someone needs to figure out how to make this according to the specification.
@FabioPinheiro Thanks for your contribution!
From what I read in the specs, DIDComm may use the typ field (see https://identity.foundation/didcomm-messaging/spec/#iana-media-types). So maybe we should make this optional instead of removing completely?
yes and no
I'm basically only removing the check. The line 105 is still there: https://github.com/sicpa-dlab/didcomm-python/pull/93/files#diff-cb2b8c37b8a59e04819d6634da7909e96a364413c241a5a17a33c5e7685a4c1eR105
However, the JOSE specs recommend using typ on the outermost envelope to make JOSE structure formats self-describing.
We are adding to the content of the encrypted bytes...
On the outermost envelope, we can have "typ":"application/didcomm-encrypted+json"
{
"ciphertext":"...",
"protected":"eyJlcGsiOnsia3R5IjoiT0tQIiwiY3J2IjoiWDI1NTE5IiwieCI6IndtSU90Sm03MVUwTGI3NDJqYThudFo1NjlGWDlqRjlBVVJ0cm1WYTluU28ifSwiYXB2IjoiLWNOQ3l0eFVrSHpSRE5SckV2Vm05S0VmZzhZcUtQVnVVcVg1a0VLbU9yMCIsInR5cCI6ImFwcGxpY2F0aW9uL2RpZGNvbW0tZW5jcnlwdGVkK2pzb24iLCJlbmMiOiJBMjU2Q0JDLUhTNTEyIiwiYWxnIjoiRUNESC1FUytBMjU2S1cifQ",
"recipients":[{...}],
"tag":"...",
"iv":"..."
}
protected -> is decoded to {"epk":{"kty":"OKP","crv":"X25519","x":"wmIOtJm71U0Lb742ja8ntZ569FX9jF9AURtrmVa9nSo"},"apv":"-cNCytxUkHzRDNRrEvVm9KEfg8YqKPVuUqX5kEKmOr0","typ":"application/didcomm-encrypted+json","enc":"A256CBC-HS512","alg":"ECDH-ES+A256KW"}
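For anyone wanting to check their own messages, a small hedged sketch (standard base64url handling, not didcomm-python internals) that decodes the protected header and reads typ:
import base64
import json

protected = "eyJlcGsiOnsia3R5IjoiT0tQIiwiY3J2IjoiWDI1NTE5IiwieCI6IndtSU90Sm03MVUwTGI3NDJqYThudFo1NjlGWDlqRjlBVVJ0cm1WYTluU28ifSwiYXB2IjoiLWNOQ3l0eFVrSHpSRE5SckV2Vm05S0VmZzhZcUtQVnVVcVg1a0VLbU9yMCIsInR5cCI6ImFwcGxpY2F0aW9uL2RpZGNvbW0tZW5jcnlwdGVkK2pzb24iLCJlbmMiOiJBMjU2Q0JDLUhTNTEyIiwiYWxnIjoiRUNESC1FUytBMjU2S1cifQ"
padded = protected + "=" * (-len(protected) % 4)  # base64url is unpadded, restore padding
header = json.loads(base64.urlsafe_b64decode(padded))
print(header.get("typ"))  # application/didcomm-encrypted+json
print(header.get("alg"), header.get("enc"))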
But in this implementation "typ":"application/didcomm-plain+json" is required to be hard coded in the content (from ciphertext).
If you go to all the 'headers' https://identity.foundation/didcomm-messaging/spec/#message-headers and all the additional headers from the DID Comm extensions typ does not exist.
So I assume this implementation is unable to decrypt any of the test vectors/examples of the DID Comm v2 Sepc: https://identity.foundation/didcomm-messaging/spec/#appendix-c-test-vectors
In sicpa-dlab/didcomm-jvm you still have this field... but at least it doesn't seem to be a requirement.
https://github.com/sicpa-dlab/didcomm-jvm/blob/993265c75c12924f7ded917e3c54282c327ab315/lib/src/main/kotlin/org/didcommx/didcomm/message/Message.kt#L58
Closing, because of no activity on this PR, and it is reimplemented here: https://github.com/sicpa-dlab/didcomm-python/pull/96
I only do this in my free time and I had very little time over the last three weekends. But the important thing is that it's fixed.
But you asked for tests, and I was trying to create a test set that everyone could use. It would help us achieve interoperability at the DIDComm library level.
If you could contribute with your test set would be nice https://github.com/decentralized-identity/didcomm.org/issues/77
No problem Fabio, like you said, it is important that it's fixed now 👍
When will there be a release with the fix?
@FabioPinheiro This will be released as soon as #98 is merged
|
2025-04-01T04:35:29.700621
| 2017-02-28T02:36:50
|
210669940
|
{
"authors": [
"chenminhua",
"shivid",
"shividhar"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10784",
"repo": "siddontang/go-mysql-elasticsearch",
"url": "https://github.com/siddontang/go-mysql-elasticsearch/issues/84"
}
|
gharchive/issue
|
log looks fine, but es don't get any data
mysql and es both run in docker,
mysqldump and go-mysql-elasticsearch both in host machine (mac os)
and the log look as follow
2017/02/28 10:15:28 binlogsyncer.go:72: [info] create BinlogSyncer with config &{1 mysql <IP_ADDRESS> 3306 root admin false false}
2017/02/28 10:15:28 status.go:52: [info] run status http server <IP_ADDRESS>:12800
2017/02/28 10:15:28 dump.go:96: [info] skip dump, use last binlog replication pos (mysql-bin.000001, 219233)
2017/02/28 10:15:28 sync.go:17: [info] start sync binlog at (mysql-bin.000001, 219233)
2017/02/28 10:15:28 binlogsyncer.go:236: [info] begin to sync binlog from position (mysql-bin.000001, 219233)
2017/02/28 10:15:28 binlogsyncer.go:131: [info] register slave for master server <IP_ADDRESS>:3306
2017/02/28 10:15:28 binlogsyncer.go:563: [info] rotate to (mysql-bin.000001, 219233)
2017/02/28 10:15:28 sync.go:57: [info] rotate binlog to (mysql-bin.000001, 219233)
but, es didn't get any data from mysql
my config file as follow
my_addr = "<IP_ADDRESS>:3306"
my_user = "root"
my_pass = "admin"
# Elasticsearch address
es_addr = "<IP_ADDRESS>:9200"
# Path to store data, like master.info, and dump MySQL data
data_dir = "./var"
# Inner Http status address
stat_addr = "<IP_ADDRESS>:12800"
# pseudo server id like a slave
server_id = 1
# mysql or mariadb
flavor = "mysql"
# mysqldump execution path
# if not set or empty, ignore mysqldump.
mysqldump = "mysqldump"
# MySQL data source
[[source]]
schema = "gc"
tables = ["Customer"]
index = ["customer"]
type = ["customer"]
btw, what is stat_addr = "<IP_ADDRESS>:12800" used for?
I have figured it out by myself!
thx anyway!
@chenminhua Could you say how you fixed the issue?
|
2025-04-01T04:35:29.703181
| 2019-05-03T01:06:58
|
439847227
|
{
"authors": [
"bejelith",
"siddontang"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10785",
"repo": "siddontang/go-mysql",
"url": "https://github.com/siddontang/go-mysql/issues/385"
}
|
gharchive/issue
|
schema.go assumes primary indexes are always returned first
https://github.com/siddontang/go-mysql/blob/c6ab05a85eb86dc51a27ceed6d2f366a32874a24/schema/schema.go#L358-L361
The current implementation assumes position 0 is always used for PRIMARY KEY when running show indexes
This is not documented in the MySQL 8.0 docs (show indexes), so this assumption is potentially incorrect.
This may be a problem, but I have not met this now. :-)
We can forcibly move PRIMARY to the 0 position at first.
I'll submit a PR once I get a bit of time!
|
2025-04-01T04:35:29.765129
| 2020-04-17T20:30:03
|
602216553
|
{
"authors": [
"hcook",
"richardxia",
"rmac-sifive"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10786",
"repo": "sifive/soc-testsocket-sifive",
"url": "https://github.com/sifive/soc-testsocket-sifive/pull/18"
}
|
gharchive/pull-request
|
Change place function type to return LazyModule instead of Any
Also reverting my previous change of val attachParams's name
Adding @albertchen-sifive and @hcook as reviewers, since they know much more about Scala and these Chisel methods than I do.
Would it be better to define BlockDescriptor as BlockDescriptor[T <: LazyModule] so that we can get a concrete type value back in the cases where we do have more type information?
@richardxia updated with that to make this more flexible, this may help later if we ever use BlockDescriptor in places other than as a Seq() for BlockDescriptorKey
I don't object to any of this per se, but I think upstreaming all of this BlockAttachX code to be inside of or a peer to rocket-chip or sifive-blocks, while also making it use trait Attachable from rocket-chip is something we should do ASAP, to both make things more consistent across chisel and onboarded IPs, and to clean up the dependency chain so that soc-testsocket-sifive is no longer a dependency of the onboarded block repos, or sesame.
I'll coordinate with @rmac-sifive about a specific proposal along these lines.
|
2025-04-01T04:35:29.824088
| 2021-05-07T10:44:59
|
878760450
|
{
"authors": [
"geoffhigginbottom",
"rcastley"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10787",
"repo": "signalfx/observability-workshop",
"url": "https://github.com/signalfx/observability-workshop/issues/54"
}
|
gharchive/issue
|
Typos - Creating Team
Dashboards > Saving Charts > 3. Add a Team page
Small typos in 2nd sentence, it reads "So lets add your dashboard to the team page for easy acess later"
It should read "So let's add your dashboard to the team page for easy access later"
Added bold just to highlight changes required
Fixed in repo, please rebase!
|
2025-04-01T04:35:29.889789
| 2023-06-26T22:33:12
|
1775785925
|
{
"authors": [
"haydentherapper"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10788",
"repo": "sigstore/community",
"url": "https://github.com/sigstore/community/pull/285"
}
|
gharchive/pull-request
|
Add Sigstore claimant model
The Sigstore claimant model formally defines a set of claims made by Sigstore's two transparency logs. This was created in collaboration with the creators of the claimant model.
This also includes sequence diagrams to make it easier to understand how the various actors interact. These are a work in progress and generated by a tool in Trillian.
Summary
Release Note
Documentation
cc @bobcallaway @SantiagoTorres
@mhutchinson @bobcallaway for another review
|
2025-04-01T04:35:29.892178
| 2023-03-03T06:03:40
|
1608001072
|
{
"authors": [
"cmoulliard",
"hectorj2f",
"znewman01"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10789",
"repo": "sigstore/cosign",
"url": "https://github.com/sigstore/cosign/issues/2765"
}
|
gharchive/issue
|
cosign policy init: main.go:74: error during command execution: unknown flag: --ns
Description
The cosign policy init command does not work:
cosign policy init -h
init is used to generate a root.json policy
for keyless signing delegation. This is used to establish a policy for a registry namespace,
a signing threshold and a list of maintainers who can sign over the body section.
Usage:
cosign policy init [flags]
Examples:
# extract public key from private key to a specified out file.
cosign policy init -ns <project_namespace> --maintainers {email_addresses} --threshold <int> --expires <int>(days)
...
but when we execute the command, we got such errors
cosign policy init -ns cosign --maintainers<EMAIL_ADDRESS>
WARNING: the -ns flag is deprecated and will be removed in a future release. Please use the --ns flag instead.
Error: unknown flag: --ns
main.go:74: error during command execution: unknown flag: --ns
cosign policy init --ns cosign --maintainers<EMAIL_ADDRESS>
Error: unknown flag: --ns
main.go:74: error during command execution: unknown flag: --ns
Version
GitVersion: 2.0.0
GitCommit: d6b9001f8e6ed745fb845849d623274c897d55f2
GitTreeState: "clean"
BuildDate: 2023-02-23T19:26:35Z
GoVersion: go1.20.1
Compiler: gc
Platform: darwin/amd64
Should be --namespace, need to update the docs.
Quick fix here: https://github.com/sigstore/cosign/pull/2774
|
2025-04-01T04:35:29.895114
| 2023-04-10T20:49:56
|
1661343170
|
{
"authors": [
"bdehamer",
"haydentherapper"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10790",
"repo": "sigstore/sigstore-js",
"url": "https://github.com/sigstore/sigstore-js/issues/398"
}
|
gharchive/issue
|
Embed TUF targets to minimize downloads
Description
For the Golang library, we embed a copy of the target files, and check if the target file matches the target metadata here (a necessary check since the target file may have been updated and would need to be downloaded). Would it be possible to do the same for sigstore-js?
Yeah, love this idea!
|
2025-04-01T04:35:29.899107
| 2022-02-21T17:58:29
|
1146089329
|
{
"authors": [
"LinguList",
"gcelano"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10791",
"repo": "sigtyp/ST2022",
"url": "https://github.com/sigtyp/ST2022/issues/10"
}
|
gharchive/issue
|
How are files connected?
The logic of how the data are arranged in each folder in ST2022/data/ is not completely clear to me. Is it possible to get an example showing how the files are related?
For example, I understand that the test-x.tsv files contain the field ?, i.e., the one that needs to be predicted, and whose values are given in solutions-x.tsv. However, the files training-x.tsv seem to show a different logic, since there is no ? to predict. Furthermore, what do the result-x.tsv files contain?
Yes, the logic is that you:
train your model with the training data in whatever way (you can predict all items mutually, as our baseline does, make random samples, what you want)
predict the data in the test files
write predictions to a result-file
check against the solutions file
So if you want to train the model, how you do this with the training-data-file is left to you. Does that make the logic clearer?
@gcelano, I assume you have also checked the file DOCUMENTATION.md, right? So we'd clarify this there.
@LinguList Thanks! Now it's clearer to me 👍
Super. I will leave this issue open until I have added the information in the updated version 1.0 with the surprise data.
|
2025-04-01T04:35:29.905276
| 2021-03-31T14:05:20
|
846733271
|
{
"authors": [
"DannyBoyNg"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10792",
"repo": "siisltd/Curiosity.SPSS",
"url": "https://github.com/siisltd/Curiosity.SPSS/issues/4"
}
|
gharchive/issue
|
Problem with time
I'm trying to read an SPSS file and all goes well until I need to read a Date variable with the format "hh:mm". I didn't try the other time formats yet.
When I do a "record.GetValue(variable)" it returns a number instead of a time. Same goes for "record[variable]".
For example in my SPSS file I have a time "12:30", when I try to read it, it returns the number "45000".
Please help me figure out what I'm doing wrong.
I figured it out. Just do var timespan = new TimeSpan(0, 0, value);
And then timespan.ToString(); to get the time.
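The same conversion as a quick hedged sketch in Python (SPSS time-format values are stored as seconds counted from midnight, so 45000 seconds corresponds to 12:30):
from datetime import timedelta

raw = 45000  # value read from the SPSS file for "12:30"
t = timedelta(seconds=raw)
print(t)           # 12:30:00
print(str(t)[:5])  # "12:30", matching the hh:mm display format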
|
2025-04-01T04:35:29.913413
| 2024-05-17T02:47:17
|
2301706555
|
{
"authors": [
"mihaitodor",
"sijms"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10793",
"repo": "sijms/go-ora",
"url": "https://github.com/sijms/go-ora/issues/557"
}
|
gharchive/issue
|
ORA-01483: invalid length for DATE or NUMBER bind variable
Hey @sijms, I bumped into a weird issue in https://github.com/benthosdev/benthos/issues/2574 where, given this table:
CREATE TABLE TEST (FOO NUMBER(1), BAR BLOB)
the following INSERT statement is successful:
INSERT INTO TEST(FOO, BAR) VALUES(:1, :2)
but this one fails with ORA-01483: invalid length for DATE or NUMBER bind variable:
INSERT INTO TEST(BAR, FOO) VALUES(:1, :2)
I'm guessing the binary blob is too big, which causes some serialisation issue.
Here's a self-contained example which reproduces the issue using<EMAIL_ADDRESS>and the gvenzl/oracle-free Docker container:
> docker run --rm -it -e ORACLE_PASSWORD=testpass -p1521:1521 gvenzl/oracle-free:slim-faststart
package main
import (
"database/sql"
"encoding/hex"
"log"
_ "github.com/sijms/go-ora/v2"
)
func main() {
db, err := sql.Open("oracle", "oracle://system:testpass@localhost:1521/FREEPDB1")
if err != nil {
log.Fatalf("Failed to connect to DB: %s", err)
}
defer db.Close()
_, err = db.Exec("CREATE TABLE TEST (FOO NUMBER(1), BAR BLOB)")
if err != nil {
log.Fatalf("Failed to create table: %s", err)
}
data, err := hex.DecodeString("TODO")
if err != nil {
log.Fatalf("Failed to decode image: %s", err)
}
_, err = db.Exec("INSERT INTO TEST(FOO, BAR) VALUES(:1, :2)", 1, data)
if err != nil {
log.Fatalf("Failed foo + bar: %s", err)
}
_, err = db.Exec("INSERT INTO TEST(BAR, FOO) VALUES(:1, :2)", data, 1)
if err != nil {
log.Fatalf("Failed bar + foo: %s", err)
}
}
Since GitHub doesn't want to let me dump a long hex string as plain text, please download the following image, then run xxd -plain image.png | tr -d '\n' to encode it to hex and replace the TODO in the code above with the output.
If you run the code above after the Docker container starts up, you should see the following message: Failed bar + foo: ORA-01483: invalid length for DATE or NUMBER bind variable.
PS: Thank you again for this library!
to insert BLOB into database:
1- if your data is < 32KB, pass it as []byte, which is equivalent to the RAW type, and the database will convert RAW into BLOB
2- for larger data you should use the go_ora.Blob{} data type
the correct code for insert:
// data contain large data object
_, err = db.Exec(`INSERT INTO TTB_557 (BAR, FOO) VALUES (:1 , :2)`, go_ora.Blob{Data: data}, 1)
if err != nil {
return err
}
Thank you for looking into it @sijms, your suggestion does make the error go away. In order to keep the SQL components generic in Benthos, preventing users from inserting payloads larger than 32Kb into blobs might be the best thing to do, so thanks for clarifying this limitation!
As a future enhancement to go-ora, I think it would be better for the library to reject the query if someone tries to pass in a []byte which exceeds 32Kb instead of succeeding in some cases such as db.Exec("INSERT INTO TEST(FOO, BAR) VALUES(:1, :2)", 1, data).
thanks @mihaitodor, in the next release I will return errors when string and []byte input parameters exceed the max length
I will also test changing the data type to LongRaw and LongVarchar to increase the size from 32KB to 1GB
In the next release I added support for long input up to 1GB. Now the driver detects input parameters (string, []byte) larger than 32KB and sends them as type LONG and LONGRAW, and they will fit into LONG and LOB data types.
You can test the last commit; I also added example/long_input and the test file long_Input_test
fixed in v2.8.19
|
2025-04-01T04:35:29.924947
| 2021-12-16T16:38:23
|
1082417293
|
{
"authors": [
"edalzell",
"flythecoopcom",
"rhoraczek"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10794",
"repo": "silentzco/statamic-mailchimp",
"url": "https://github.com/silentzco/statamic-mailchimp/issues/68"
}
|
gharchive/issue
|
Lack of Marketing Permissions
Hi @edalzell,
Would it be possible to implement functionality to work with Mailchimp's marketing permissions? Although I really like the addon, I'm forced to use a custom implementation using spatie/laravel-newsletter due to the lack of this.
Thanks.
What permissions, do you have a link I could look at?
Of course, below are links to Mailchimp documentation and the implementation by Spatie.
https://mailchimp.com/help/collect-consent-with-gdpr-forms/
https://github.com/spatie/laravel-newsletter#working-with-gdrp-marketing-permissions
And what would you like to be able to do?
Closing due to lack of response.
Please re-open if still relevant.
Hi @edalzell,
also for me the possibility to manage the Mailchimp GDPR Marketing Preferences via your addon would be very interesting.
The permission to send marketing material is (unfortunately) essential in the European Union to be able to legally send newsletters. Mailchimp offers, under "Settings/GDPR fields and settings", the possibility to define checkboxes and labels which have to be confirmed in the signup form. Unfortunately, these form fields are not available via the usual merge tags but are rendered in the HTML of the standard and embedded signup forms as follows:
`
Yes, I agree to the storage and processing of my data and have taken note of the privacy policy.
`
Without this checkbox(es) I could't generate any registrations via your addon so far. Do you have a solution or tip how I can do this with my own Statamic form and your addon?
Thanks, Rudolf
Hi @rhoraczek, I don't really understand sorry.
Can you point me at the Mailchimp documentation about those fields?
Hi @edalzell,
here's the link to the documentation of the GDPR fields: https://mailchimp.com/help/collect-consent-with-gdpr-forms/#GDPR_fields
In the last few days I tried some tests on a newsletter signup via your Statamic addon and a standard "embedded signup form" from Mailchimp. Signups via the embedded form all went to Mailchimp without any problems and generated a "thank you" page and the required double opt-in emails to the recipients. Signups via the addon, on the other hand, had no further results (no opt-in emails and no "thank you" page).
Looking at the generated HTML code of my signup form, I noticed two differences.
the form tag of the addon looks like this:
.
That of the embedded form like this:
The checkbox I provided for the GDPR field in the Blueprint form looks like this in the addon:
My GDPR consent text
That of the embedded Mailchimp form, on the other hand, is like this:
My GDPR consent text
I would be happy if you could tell me what I can do, and I hope my description explains my problem well enough.
Thanks in advance, Rudolf
No images in your comment above, can you please update?
Sorry, I thought I had inserted the source.
I will take a look at this next week, thanks for your patience
ok @rhoraczek this is not trivial, will get it wrapped up next week.
Thank you @edalzell, and sorry for the headache I'm giving you with my request.
ok @flythecoopcom @rhoraczek this is in 2.9, let me know if there are issues.
|
2025-04-01T04:35:29.933493
| 2021-06-30T03:13:09
|
933264944
|
{
"authors": [
"Nok74",
"albertxuzeng"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10795",
"repo": "silicoin-network/silicoin-blockchain",
"url": "https://github.com/silicoin-network/silicoin-blockchain/issues/12"
}
|
gharchive/issue
|
[BUG]IT NOT WORK
File "chia\util\type_checking.py", line 72, in parse_item
File "chia\util\struct_stream.py", line 17, in new
ValueError: Value 1119447 of size 21 does not fit into uint16 of size 16
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "chia\util\type_checking.py", line 75, in parse_item
File "chia\util\struct_stream.py", line 34, in from_bytes
TypeError: a bytes-like object is required, not 'int'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\Users\kimti\Downloads\chia\chia-blockchain\venv\Lib\site-packages\chia\server\start_harvester.py", line 57, in
File "C:\Users\kimti\Downloads\chia\chia-blockchain\venv\Lib\site-packages\chia\server\start_harvester.py", line 52, in main
File "C:\Users\kimti\Downloads\chia\chia-blockchain\venv\Lib\site-packages\chia\server\start_harvester.py", line 26, in service_kwargs_for_harvester
File "", line 5, in init
File "chia\util\type_checking.py", line 93, in post_init
File "chia\util\type_checking.py", line 77, in parse_item
File "chia\util\struct_stream.py", line 36, in from_bytes
AssertionError
[19416] Failed to execute script start_harvester
Traceback (most recent call last):
File "C:\Users\kimti\Downloads\chia\chia-blockchain\venv\Lib\site-packages\chia\server\start_farmer.py", line 68, in
File "C:\Users\kimti\Downloads\chia\chia-blockchain\venv\Lib\site-packages\chia\server\start_farmer.py", line 63, in main
File "C:\Users\kimti\Downloads\chia\chia-blockchain\venv\Lib\site-packages\chia\server\start_farmer.py", line 38, in service_kwargs_for_farmer
File "chia\farmer\farmer.py", line 79, in init
File "chia\farmer\farmer.py", line 79, in
ValueError: non-hexadecimal number found in fromhex() arg at position 97
[6860] Failed to execute script start_farmer
Please provide the OS version, silicoin version and the following files:
~/.silicoin/config/config.yaml
~/.silicoin/log/debug.log
|
2025-04-01T04:35:29.973247
| 2016-08-23T14:47:16
|
172719333
|
{
"authors": [
"christianfrei",
"s3inlc",
"silvanheller"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10796",
"repo": "silvanheller/cineast",
"url": "https://github.com/silvanheller/cineast/issues/11"
}
|
gharchive/issue
|
Presentation: Midterms
[ ] Presentation
[ ] NN-Demo (Nuns and penguins)
[ ] Art from Data Demo
Do we plan to do the Art Demo with an already basically working UI, or do we plan to do it in IntelliJ?
I think it should be possible to do in the UI, but it mostly depends on what Christian thinks.
If there aren't big problems in the next few days, the UI should provide the basic functions. Maybe something won't look nice in CSS or won't be displayed well, but the functionality should be there.
I would prefer to have a UI prototype so we can have a discussion about features at the presentation
The prototype is working for Art from Data with whole Movies and Shots; the only issue at the moment is that shots are displayed with their IDs instead of their thumbnails.
Art from Data is completely working
Finished.
|
2025-04-01T04:35:29.978966
| 2024-02-14T21:39:12
|
2135246521
|
{
"authors": [
"paletochen",
"zefhemel"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10797",
"repo": "silverbulletmd/silverbullet",
"url": "https://github.com/silverbulletmd/silverbullet/issues/710"
}
|
gharchive/issue
|
Support multiple replacement pairs in replace() function
So that @paletochen doesn't have to nest them 10 levels deep.
Implementation of replace is here: https://github.com/silverbulletmd/silverbullet/blob/main/lib/builtin_query_functions.ts#L8
Are you considering also allowing regex in the second argument of the replacement function?
I still don't know what that would do. I don't know of any regex library that supports this. https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/String/replace
The idea would be to allow replacements like the one below, to avoid the need to chain several pair replacements:
replace(readPage(name),/(\w+)(@)/,/\[\[$1\]\]/)
You can already do that, just use " quotes instead of / in the second argument. "[[$1]]"
Oh, I didn't realize that.
I just tried it and it works, it replaced the name@ correctly with [[name]] values.
However, these [[name]] results are not clickable.
Any chance to fix that?
|
2025-04-01T04:35:29.989701
| 2020-08-24T00:07:56
|
684282700
|
{
"authors": [
"sminnee",
"unclecheese"
],
"license": "BSD-3-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10798",
"repo": "silverstripe/silverstripe-event-dispatcher",
"url": "https://github.com/silverstripe/silverstripe-event-dispatcher/issues/11"
}
|
gharchive/issue
|
Tag 0.1.0
Although there's no way that this module is ready for 1.0.0 (we should really get in the habit of holding off on a 1.0.0 tag until we're happy to support the API in question for an extended period), tagging 0.1.0 would allow for tidier inclusion of the experimental module into projects.
This has been done
|
2025-04-01T04:35:29.992259
| 2016-02-22T11:03:46
|
135385582
|
{
"authors": [
"bummzack",
"tractorcow"
],
"license": "BSD-3-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10799",
"repo": "silverstripe/silverstripe-framework",
"url": "https://github.com/silverstripe/silverstripe-framework/issues/5075"
}
|
gharchive/issue
|
i18nTextCollectorTask ignores template strings that contain a comment.
According to the documentation, translations within templates can be annotated with a comment like so:
<%t SearchResults.NoResult "There are no results matching your query." is "A message displayed to users when the search produces no results." %>
Source: https://docs.silverstripe.org/en/3.3/developer_guides/i18n/
The i18nTextCollectorTask silently ignores these translations though (e.g. the translation is completely missing from the generated yml file).
This has been observed with 3.3.0-rc3 and 3.2.1
Duplicate of https://github.com/silverstripe/silverstripe-framework/issues/4965
I think it's worse than that... I've noticed the text collector putting the translations in completely incorrect locations (creating new modules, in the worst case).
|
2025-04-01T04:35:30.003066
| 2017-01-01T04:23:41
|
198256225
|
{
"authors": [
"dhensby",
"robbieaverill",
"sminnee"
],
"license": "BSD-3-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10800",
"repo": "silverstripe/silverstripe-framework",
"url": "https://github.com/silverstripe/silverstripe-framework/issues/6447"
}
|
gharchive/issue
|
RFC: Generic (PSR-3) output interface for BuildTasks and controllers to send output to CLI
Introduction
There are numerous places throughout the framework where you find things like echo "Processed {$someVar}\n"; - examples are in DatabaseAdmin::doBuild and most instances of BuildTask.
This RFC proposes to provide a common interface to use instead of direct output.
Why?
Because stdout output is not easy to handle from anything using it externally. For example, if I'm writing a Symfony console application that triggers DatabaseAdmin::doBuild, the output from the build is going to be sent straight to stdout. I can buffer it, but then I won't see any results until the buffer is flushed (at the end).
Proposal
Provide a common method (probably from Object) which can be used instead of direct echo calls. This method would abstract any output to the CLI so that it can be handled in whatever way is required at the time.
Perhaps using a PSR-3 compatible interface might make sense since we have it in place already. In this instance, the Monolog StreamHandler could be configured to send messages to stdout. It would probably need a custom implementation to remove any additional information such as dates and times from the output message.
Quick example
I can imagine the logging.yml configuration getting a new CliLogger which is configured to return a new instance of Monolog with a configured StreamHandler pushed. In that case, the example method above might look like this:
# Class: Object
protected function output($message, $newline = true)
{
    if ($newline) {
        $message .= PHP_EOL;
    }
    Injector::inst()->get('CliLogger')->addInfo($message);
}

# File: logging.yml
CliLogger:
  type: singleton
  class: Monolog\Logger
  constructor:
    - "cli-log"
  calls:
    CliOutputHandler: [ pushHandler, [ %$CliOutputHandler ]]
CliOutputHandler:
  class: Monolog\Handler\StreamHandler
  constructor:
    - 'php://stdout'
    - 100 # Minimum log level. Would be better as a constant!
  calls:
    SetFormatter: [ setFormatter, [ %$CliOutputFormatter ]]
CliOutputFormatter:
  class: Monolog\Formatter\LineFormatter
  constructor:
    - '%message%' # Specifies the format - no dates, etc
    - null
    - true # Allow inline line breaks
Using the above, an Object instance can call $this->output('Hello world') and have it written to stdout when running from the command line.
Other considerations
Add some kind of abstraction between SilverStripe's Object and Monolog's Logger - a CliLoggerService class maybe - whose sole purpose is to map a method like output to addWarn
Add setter and getter for the logger service
This would allow me to define my own logger service if I didn't want those messages going to stdout by default.
Alternatively expose the CLI Logger instance with a getter, allowing people to configure the Logger without needing a service class in SilverStripe. The default configuration would be as provided in the example YAML above.
This seems like a good idea - we really should avoid using "echo" where we can.
I don't think Object is the right place to put the output method, perhaps it should be some dedicated global response/output interface...
I don't think Object is the right place to put the output method, perhaps it should be some dedicated global response/output interface...
Agreed. I wonder whether it is worth going so far as a PSR-7 interface, or whether a standalone class like Direct::output might be sufficient.
I think that noting PSR-3 here is incorrect, since that would add many levels that we probably don't need here.
I wonder if PSR-7 would be more suitable. That's probably overkill too though. We really just need to abstract out the output so I could do something like $this->getOutput()->write('This is the equivalent of "echo"') or ->error('This would write to stderr') with a getter and setter for the output interface.
I'd like to see the input ($request) abstracted out as well, but this would be a little different since BuildTask classes would be used to expecting it to be an HTTPRequest class.
Only changing the output would be B/C and extensible.
PSR-3/LoggerInterface added back in again to minimise any extra dependencies required for this to be achieved.
Core things:
interface Task {
    /**
     * @return array Map of available parameters and their descriptions
     */
    function getParams();

    function run($params, LoggerInterface $log);
}
Actually, this is a duplicate of #4425
@sminnee happy to close one?
I'm much more keen on the idea of a PSR3 log object being a parameter to BuildTask::run()
Also I think that as per #6524 we can unify BuildTasks and QueuedJobs
Yes, that makes sense since the objective of #6524 covers this RFC as well. We could perhaps even close this issue in favour of that.
Closing in favour of #6524
|
2025-04-01T04:35:30.006553
| 2019-08-19T22:41:14
|
482561428
|
{
"authors": [
"robbieaverill"
],
"license": "BSD-3-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10801",
"repo": "silverstripe/silverstripe-mfa",
"url": "https://github.com/silverstripe/silverstripe-mfa/issues/326"
}
|
gharchive/issue
|
User without CMS access is not allowed to log in
CWP 2.4.x-dev
Steps to reproduce:
Log in as an admin
Create a user without assigning any groups
Log out
Log in as that user
Expected: (CWP 2.3.x-dev without MFA)
Actual: (CWP 2.4.x-dev with MFA)
It looks as though this is preventing the user from logging in at all.
Hmmm, I can't reproduce this any more... closing
|
2025-04-01T04:35:30.013469
| 2017-07-25T10:59:49
|
245364183
|
{
"authors": [
"andrewandante",
"dhensby",
"halkyon"
],
"license": "bsd-3-clause",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10802",
"repo": "silverstripeltd/frisp",
"url": "https://github.com/silverstripeltd/frisp/issues/25"
}
|
gharchive/issue
|
Dependency on now-outdated APC cache
https://github.com/silverstripeltd/frisp/blob/master/Frisp/Reader.php#L21-L28
Logs a notice if APC is not installed - which is the case on PHP 7 and later. This means we can't read from the vault nicely (the notice obscures the actual return value) and deployments fail.
I've added APCU into our baseboxes to restore functionality, but this should be replaced with memcached or opcache probably.
APCu is still maintained, so should be an acceptable replacement.
OPcache seems to be purely for opcode caching of PHP files, so it can't be used as a replacement. memcached could be used, but if you're thinking about various backends, then maybe implementing a PSR-6 cache library would be better.
Not a problem - it uses APCu, it just looks like the caching library is using APC (without the u).
|
2025-04-01T04:35:30.015158
| 2016-04-05T21:14:48
|
146107623
|
{
"authors": [
"Gunny123",
"silvestreh"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10803",
"repo": "silvestreh/atom-material-ui",
"url": "https://github.com/silvestreh/atom-material-ui/issues/269"
}
|
gharchive/issue
|
Add folder animations to the theme
Would there be a way to implement the folder animations like in the Sublime Text Material UI theme?
Apologies for the number of issue submissions already.
That would involve changing the icons, and it's something I'm not going to do until I find a way to avoid blurry icons. Check #193 for some more info.
Understood. Thanks for the reply.
|
2025-04-01T04:35:30.018214
| 2016-12-21T12:52:06
|
196925687
|
{
"authors": [
"alexbjorling",
"t20100"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10804",
"repo": "silx-kit/silx",
"url": "https://github.com/silx-kit/silx/issues/478"
}
|
gharchive/issue
|
addImage with negative scaling breaks the mask tool
Hi,
In a Plot2D widget, you can use negative values for scale when calling addImage(), but then the MaskToolsWidget doesn't work (as the following code shows). If scale is assumed to be positive, then addImage() should raise an error or a warning when it is not.
import PyQt4
import silx.gui.plot
import numpy as np
app = PyQt4.QtGui.QApplication([])
im = np.arange(100).reshape((10, 10))
plot = silx.gui.plot.Plot2D()
plot.addImage(im, scale=(-1, 1))
plot.show()
app.exec_()
Thanks,
Alex
scale can be negative in the Plot, but it was not tested for the mask.
I'll look at it.
Ideally it should be supported, and if the mask does not support it, it should be disabled for such images.
|
2025-04-01T04:35:30.021568
| 2023-07-06T15:18:17
|
1791766642
|
{
"authors": [
"kif",
"t20100"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10805",
"repo": "silx-kit/silx",
"url": "https://github.com/silx-kit/silx/pull/3900"
}
|
gharchive/pull-request
|
[Draft] Opencl lz4
Changelog: LZ4 compressor on GPU
[ ] OpenCL code targeted to run on GPU
[X] Find position of interest
[X] Create segment descriptor (position in input, length of literals and matches)
[x] Concatenate several descriptors (between passes, in the case there are only literals)
[x] Write compressed stream, including tokens, offsets, overflows in literals and matches
[x] Manage the beginning of stream (4 bytes with uncompressed buffer size)
[x] Manage end of stream (cannot end with matches, has to finish with literals)
[x] Concatenate output stream to make it contiguous (1 or 2 kernel options ...)
[x] Python code to call the OpenCL code
[ ] Non regression test
[ ] Profiling and performance assessment
I've added a commit to fix the conflict introduced by #3991.
A rebase would be best in order to avoid merging main into this branch.
|
2025-04-01T04:35:30.026082
| 2023-12-23T20:39:41
|
2054898323
|
{
"authors": [
"simatec",
"winampdevil"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10806",
"repo": "simatec/ioBroker.solax",
"url": "https://github.com/simatec/ioBroker.solax/issues/186"
}
|
gharchive/issue
|
additional data for Solax X3 Hybrid G3, when using a second inverter and a second meter
Hello,
I'm using a second inverter with a second DTSU-666 meter.
The Solax adapter in cloud mode reports the power of this inverter as feedinpowerM2 (see log):
solaxRequest: {"success":true,"exception":"Query success!","result":{"inverterSN":"xXxXxXxXxXxX","sn":"xXxXxXxXxXxX","yieldtoday":0.7,"yieldtotal":5160.7,"feedinpower":-564,"feedinenergy":5579.32,"consumeenergy":1942.26,"feedinpowerM2":-8,"soc":10,"inverterType":"5","inverterStatus":"109","uploadTime":"2023-12-23 21:27:52","batStatus":"0"},"code":0}
In local-mode, this data is at position 100 (value: -8, see log). The unit is "Watt".
(The value is negative, because this is the standby-power of the inverter at night.)
{"type":"X3-Hybiyd-G3","SN":"xXxXxXxXxXxX","ver":"2.033.20","Data":[0,0,0,0,0,0,0,0,0.7,5160.7,-571,0,0,0,0,0,23,0.9,0,1385,0,10,216,174,25,25,17,16,3.1,3,165,1,0,0,0,0,0,0,0,0,0,5579.32,1942.18,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,9,1,1,1,900,100,10,20,25,0,0,0,0,443.4,1818.2,5568.3,0,0,2,0,0,0,0,0,0,0,0,0,1,1,1,-8,1185.49,17.29,1,273,190.6,0,0],"Information":[6,5,"X3-Hybiyd-G3","xXxXxXxXxXxX",1,4.71,0,4.53,1.07],"battery":{"brand":"1","masterVer":"5.06","slaveNum":"4","slaveVer":[0,0,0,0,0,0,0,0],"slaveType":[0,0,0,0,0,0,0,0]}}
Is it possible to integrate this value in a future version of the adapter?
Thank you
Frank
Fix in the next Version
|
2025-04-01T04:35:30.343329
| 2022-12-14T20:52:21
|
1497373728
|
{
"authors": [
"crisperit",
"simolus3"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10807",
"repo": "simolus3/drift",
"url": "https://github.com/simolus3/drift/issues/2206"
}
|
gharchive/issue
|
Flutter web app - clearing all rows on refresh (F5)
Hello,
I've tried to use the drift web implementation in my app by following
https://github.com/simolus3/drift/tree/develop/examples/flutter_web_worker_example
It works fine until I try to refresh the page - then all the data disappears.
You can see that in the following recording from my test page:
https://prappser-app.netlify.app/#/
Flutter version:
Flutter 3.3.4 • channel stable • https://github.com/flutter/flutter.git
Framework • revision eb6d86ee27 (2 months ago) • 2022-10-04 22:31:45 -0700
Engine • revision c08d7d5efc
Tools • Dart 2.18.2 • DevTools 2.15.0
Drift version: 2.3.0
database.dart
@DriftDatabase(tables: [ApplicationEntity])
class Database extends _$Database {
  Database(QueryExecutor queryExecutor) : super(queryExecutor);

  @override
  int get schemaVersion => 1;

  @override
  MigrationStrategy get migration {
    return MigrationStrategy(
      onCreate: (migrator) async {
        await migrator.createAll();
      },
    );
  }
}
database_web.dart
import 'dart:html';

import 'package:drift/drift.dart';
import 'package:drift/remote.dart';
import 'package:drift/web.dart';
import 'package:flutter/foundation.dart';
import 'package:flutter_riverpod/flutter_riverpod.dart';

import 'database.dart';

final databaseProvider = Provider<Database>((ref) {
  final webDatabase = LazyDatabase(() async {
    final worker = SharedWorker(
        kReleaseMode ? 'worker.dart.min.js' : 'worker.dart.js', 'worker');
    var conn = remote(worker.port!.channel());
    return conn.executor;
  });
  return Database(webDatabase);
});
web/worker.dart
void main() {
  final self = SharedWorkerGlobalScope.instance;
  self.importScripts('sql-wasm.js');

  final db = WebDatabase.withStorage(DriftWebStorage.indexedDb('worker',
      migrateFromLocalStorage: false, inWebWorker: true));
  final server = DriftServer(DatabaseConnection(db));

  self.onConnect.listen((event) {
    final msg = event as MessageEvent;
    server.serve(msg.ports.first.channel());
  });
}
Let me know if you have any ideas about what direction I should check and what more data I should provide.
Cheers :)
From what I can see - no matter how many queries I perform - they do not seem to be saved in IndexedDB; the blob size is not changing at all 🤔
I can reproduce the issue in your app, but not with the flutter_web_worker_example included in this repository. Do you have a way to share those sources with me? The worker looks very similar what I'm doing.
All sources can be seen through devtools on https://prappser-app.netlify.app/#/
@simolus3 Btw - on Chrome I do not see worker.dart.js logs - but they're visible on Mozilla Firefox. Do you know why?
Ah - nvm
chrome://inspect/#workers
@simolus3
I've found the issue after being able to inspect the worker logs.
The issue is that
.into(database.applicationEntity)
.insertReturning(...)
...is incorrectly treated as a SELECT query
After changing it into...
.into(database.applicationEntity)
.insert(...)
...the method is resolved correctly as INSERT and the data is persisted
I've deployed the fixed version in here:
https://prappser-app-fix.netlify.app/#/
@simolus3 I think that it might be called a bug IMO
Thanks for the in-depth analysis!! You're absolutely right, this is a bug. I have fixed the WebDatabase to also emit a write for RETURNING queries in 4644bce9dddc6298272cbd04fe27ffa23a240d9c.
We need to run the query as select because that's the only way we get the rows back. Of course, we then have the problem that the database doesn't know it needs to be saved (which we have to do explicitly with sql.js).
Nice!
Waiting for the release then :D
|
2025-04-01T04:35:30.346071
| 2021-05-20T22:36:40
|
897520873
|
{
"authors": [
"simolus3",
"stx"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10808",
"repo": "simolus3/moor",
"url": "https://github.com/simolus3/moor/issues/1211"
}
|
gharchive/issue
|
Serialization broken with double? type
_DefaultValueSerializer code:
if (_typeList is List<double> && json is int) {
return json.toDouble() as T;
}
This doesn't catch List<double?> and thus double? columns, like RealColumn? get myColumn => real().nullable()();
This exception will be thrown on generated code serializer.fromJson<double?>(json['my_column']) when json['my_column'] is an int:
_CastError (type 'int' is not a subtype of type 'double?' in type cast)
I believe the fix is:
if (_typeList is List<double?> && json is int) {
return json.toDouble() as T;
}
Thanks for the report, I've fixed this on develop.
|
2025-04-01T04:35:30.347928
| 2018-02-24T23:59:59
|
299983729
|
{
"authors": [
"simon-kyger"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10809",
"repo": "simon-kyger/brawlhallacrewbattle",
"url": "https://github.com/simon-kyger/brawlhallacrewbattle/issues/15"
}
|
gharchive/issue
|
The footer was removed from the playgamepage; it needs to be rendered again.
this will be a bitch... of course :(
fixed with https://github.com/simon-kyger/brawlhallacrewbattle/commit/235a9bb9614e0b207adb2b716cf393b14813c382
|
2025-04-01T04:35:30.358706
| 2021-09-30T16:05:24
|
1012360556
|
{
"authors": [
"grimdeathr",
"marcocastignoli",
"vinny-888",
"yaserOSource"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10810",
"repo": "simondevyoutube/ProceduralTerrain_Part10",
"url": "https://github.com/simondevyoutube/ProceduralTerrain_Part10/issues/1"
}
|
gharchive/issue
|
do you have a more recent version of this project?
Congrats on this series. It's amazing!
Getting a SharedArrayBuffer undefined error.
Looks like it is caused by Chrome as a security measure
https://stackoverflow.com/a/65675390
I was able to get this working locally with npm http-server by adding these lines to the npm module code manually
https://github.com/http-party/http-server/pull/759/files#diff-198646ab9cb448a53a3690ad6df53d6afa6c2402a2500b22c1f7abf2994346adR114-R115
@vinny-888
Was this all you changed?
if (options.isolate) {
  this.headers['Cross-Origin-Embedder-Policy'] = 'require-corp';
  this.headers['Cross-Origin-Opener-Policy'] = 'same-origin';
}
|