Columns: added (string date, 2025-04-01 04:05:38 to 2025-04-01 07:14:06) · created (timestamp[us] date, 2001-10-09 16:19:16 to 2025-01-01 03:51:31) · id (string, lengths 4 to 10) · metadata (dict) · source (string, 2 classes) · text (string, lengths 0 to 1.61M)
2025-04-01T06:40:22.393367
2020-12-01T02:47:31
753958095
{ "authors": [ "AbderrazzakB", "CanTheAlmighty", "nandox5", "nichita-pasecinic" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10595", "repo": "sequelize/sequelize-typescript", "url": "https://github.com/sequelize/sequelize-typescript/issues/858" }
gharchive/issue
Model.create() Values autocomplete is missing?

Am I missing something, or is there no autocomplete on `new Model({...})`, `Model.create({...})` or `Model.build({...})`? I added the columns to my Models as shown in the documentation, but when I go to create a Model and start typing the class members, they do not autocomplete. Shouldn't it work like this? This guy did the sequelize TypeScript integration manually and it seems like a lot of dirty work to make it work. If this is not something integrated with this package, would it be hard to implement so that we get autocomplete?

It's a bit awkward, since a while ago sequelize-typescript offered said completions (0.6 I think), then they were removed when Sequelize@5 added official TypeScript support (but without completions), and now Sequelize@6 has full completions (using a hidden property called `_attributes`). So currently, the maintainer of this library seems to be inactive, and you get better type support sticking to the official package (ironically).

I had the same issue, and I solved it by creating a type for my model (maybe you need two, one for `TModelAttributes` and another for `TCreationAttributes`), and then doing it like this (I forgot what it's called in TypeScript 😅). Then the completion works fine.

```ts
type User = {
  id: number;
  username: string;
  //...
}

type CUser = Omit<User, 'id'>

class User extends Model<User, CUser> {
  @Column
  //..
}
```

@AbderrazzakB That's pretty much how sequelize@6 recommends it, and it properly fills the `_attributes` underlying property that feeds the autocompletion. The only issue you'll run into is when attempting to do relations, where, for example, `@BelongsToMany(() => Class)` does not forward the `_attributes` information and causes TypeScript to throw a compilation error. For now, I think the only solution is that, plus casting to `any` to cover it up. For example:

```ts
interface UserAttributes {
  id: string
  name: string
}

type UserCreationAttributes = Omit<UserAttributes, 'id'> // id is auto-generated.

@Table({ ... })
class User extends Model<UserAttributes, UserCreationAttributes> { // <-- we add the type info for Sequelize@6.
  @Column({ type: DataTypes.UUIDV4 })
  id!: string

  @Column({ type: DataTypes.TEXT, allowNull: true })
  name!: string | null

  // Here we cast to any so seq-typescript doesn't cry
  @BelongsToMany(() => Post as any, () => UserPost as any)
  readonly posts?: Array<Post>
}
```

You can check the types work in VSCode by doing `User['_attributes']['<autocomplete here>']`.

No way that the lib that calls itself *-typescript does not provide proper types for the most basic Model methods. That's sad!
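For illustration, a minimal usage sketch, assuming the `User` model from the comment above is registered with a Sequelize instance; with the two generic parameters in place, `create` is checked against `UserCreationAttributes`, which is exactly the completion behavior the thread is after:

```ts
// Sketch only: assumes the User model defined in the comment above,
// registered via something like new Sequelize({ ..., models: [User] }).
async function demo(): Promise<void> {
  // OK: id is omitted because UserCreationAttributes excludes it.
  const user = await User.create({ name: 'Ada' });

  // A typo such as { nam: 'Ada' } would now fail to compile,
  // and the editor can autocomplete 'name' from the attribute type.
  console.log(user.id, user.name);
}
```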
2025-04-01T06:40:22.400300
2015-11-19T13:25:08
117813071
{ "authors": [ "jenkins-sequenceiq", "martonsereg" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10596", "repo": "sequenceiq/cloudbreak", "url": "https://github.com/sequenceiq/cloudbreak/pull/1134" }
gharchive/pull-request
CLOUD-48384 new sshj version fixes ecdsa fingerprint issue @schfeca75 sshj 0.15.0-rc.1 is manually built and uploaded to maven.sequenceiq.com. It will be used until the 0.15.0 sshj release is out. Jenkins build finished, all tests passed. Refer to this link for build results: http://ci.sequenceiq.com/job/cloudbreak-pull-request/1834/
2025-04-01T06:40:22.406509
2018-06-12T04:42:32
331435006
{ "authors": [ "katzchang", "seratch" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10597", "repo": "seratch/bigquery4s", "url": "https://github.com/seratch/bigquery4s/pull/9" }
gharchive/pull-request
add useStandardSql option Added: a `useLegacySql` config parameter and a wrapper method for use-standard-sql queries. It works well on my machine. I did not add an example spec, because the existing UsageExampleSpec cannot run in my environment. The README says "Click on APIS & AUTH on the left, then select Credentials. Click on 'Create new Client ID.'", but I cannot find that on https://console.developers.google.com/iam-admin/iam/project?project=<MY_PROJECT> ... On my machine, I wrote a spec with the `fromServiceAccount` builder and it runs well. Thanks, looks good to me. Regarding the unit test, I will check later. Also, I will release a new version today. As for the unit test issue, Google's admin pages have changed a lot. I've updated the README. I just released version 0.8 including your change. Thank you for your contribution 🎉
2025-04-01T06:40:22.410694
2022-05-03T10:33:50
1223927354
{ "authors": [ "dtolnay", "gthb" ], "license": "apache-2.0", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10598", "repo": "serde-rs/json", "url": "https://github.com/serde-rs/json/issues/884" }
gharchive/issue
Missing GitHub release entry for v1.0.80 The v1.0.80 release does not yet have an entry in https://github.com/serde-rs/json/releases, just a tag in https://github.com/serde-rs/json/tags. This causes Dependabot PRs to be missing the release notes for v1.0.80, as they source release notes from GitHub releases. Fixed!
2025-04-01T06:40:22.414618
2023-06-29T06:00:43
1780185737
{ "authors": [ "NikhilPatel0019", "sereneinserenade" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10599", "repo": "sereneinserenade/tiptap-comment-extension-react", "url": "https://github.com/sereneinserenade/tiptap-comment-extension-react/issues/1" }
gharchive/issue
Issue: Overlapping Comments Not Functioning Properly

The issue I am facing is as follows: when I select a portion of text and add a comment to it, such as "hello", the comment is added successfully. However, when I try to add a comment to another portion of text that includes the previously added comment, like "another hello content", the comment associated with the "hello" text is removed. It appears that the span tag enclosing the "hello" text is being removed when adding the comment to the newly selected text.

Steps to reproduce: let's say we have the paragraph below:

Bali is predominantly a Hindu country. Bali is known for its elaborate, traditional dancing. The dancing is inspired by Hindi beliefs. Most of the dancing portrays tales of good versus evil. Watching the dancing is a breathtaking experience. Lombok has some impressive points of interest – the majestic Gunung Rinjani is an active volcano. It is the second-highest peak in Indonesia. Art is a Balinese passion. Batik paintings and carved statues make popular souvenirs.

Now I select the text "traditional" and add a comment; this comment will be visible. Then I select the text "its elaborate, traditional dancing" and add a comment to it; this comment will be added too, but the comment added over the text "traditional" will be removed.

Possible solution: we need to figure out how to persist the span on already-commented text when that commented text sits inside another range where a comment needs to be added.

```js
renderHTML({ HTMLAttributes }) {
  return [
    'span',
    mergeAttributes(this.options.HTMLAttributes, HTMLAttributes),
    0,
  ]
}
```

You're looking for https://prosemirror.net/docs/ref/#model.MarkSpec.excludes
2025-04-01T06:40:22.422379
2024-03-29T15:38:43
2215589177
{ "authors": [ "newbiemate", "serengil" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10600", "repo": "serengil/deepface", "url": "https://github.com/serengil/deepface/issues/1154" }
gharchive/issue
Help with searching SQL with Euclidean distance

I was going over your article on face detection and databases here: https://sefiks.com/2021/02/06/deep-face-recognition-with-sql/ It's a great article, but I'm having trouble getting the same results. I have a group photo of Angelina Jolie, and I've created the face embeddings using RetinaFace and stored them in the SQLite DB as the article states:

```python
with conn:
    cur = conn.cursor()
    instances = []
    img_name = "jolie_group.jpg"

    # Build the embeddings
    faces = DeepFace.represent(img_path=img_name, model_name="Facenet", detector_backend="retinaface")
    for index, face in enumerate(faces):
        embeddings = face["embedding"]
        print(f"length = {len(embeddings)}")
        print(embeddings)

        insert_statement = "INSERT INTO face_meta (ID, IMG_NAME, EMBEDDING) VALUES (?, ?, ?)"
        insert_args = (index, img_name, np.array(embeddings).tobytes())
        cur.execute(insert_statement, insert_args)

        for i, embedding in enumerate(embeddings):
            insert_statement = "INSERT INTO face_embeddings (FACE_ID, DIMENSION, VALUE) VALUES (?, ?, ?)"
            insert_args = (index, i, str(embedding))
            cur.execute(insert_statement, insert_args)

    conn.commit()
```

The result is 9 rows in the face_meta table and 1152 rows in the face_embeddings table (each face embedding has 128 dimensions). So this is great! The only face it didn't detect was the one looking sideways, but that's fine. I have a single picture of Jolie that I want to match against the DB, and I would like to get the image name and the related face embedding in the group photo. Using the server-side solution:

```python
with conn:
    cur = conn.cursor()

    # compare
    target_img = "angie_single.jpg"
    target_represent = DeepFace.represent(img_path=target_img, model_name="Facenet", detector_backend="retinaface")[0]
    target_embedding = target_represent["embedding"]
    target_facial_area = target_represent["facial_area"]

    target_statement = ""
    for i, value in enumerate(target_embedding):
        target_statement += 'select %d as dimension, %s as value' % (i, str(value))  # sqlite
        if i < len(target_embedding) - 1:
            target_statement += ' union all '

    select_statement = f'''
        select *
        from (
            select img_name, sum(subtract_dims) as distance_squared
            from (
                select img_name, (source - target) * (source - target) as subtract_dims
                from (
                    select meta.img_name, emb.value as source, target.value as target
                    from face_meta meta
                    left join face_embeddings emb on meta.id = emb.face_id
                    left join ( {target_statement} ) target on emb.dimension = target.dimension
                )
            )
            group by img_name
        )
        where distance_squared < 100
        order by distance_squared asc
    '''

    results = cur.execute(select_statement)

    instances = []
    for result in results:
        print(result)
        img_name = result[0]
        distance_squared = result[1]

        instance = []
        instance.append(img_name)
        instance.append(math.sqrt(distance_squared))
        instances.append(instance)

    result_df = pd.DataFrame(instances, columns=['img_name', 'distance'])
    print(result_df)
```

However, this returns nothing:

```
Empty DataFrame
Columns: [img_name, distance]
Index: []
```

I've tried this with other single photos of Angelina Jolie, but it's unable to find any matches. I think I understand the theory going on, using Euclidean distance calculation to find the nearest matches with a cutoff threshold, but looking at the code I'm unsure where it went wrong. Could you help me with what I'm missing?

Would you please call deepface's verify function and send the single photo of Angelina as img1_path and the group photo as img2_path? This will compare the most similar face in the group photo with Angelina's single photo. Even if the verified key is false, you can use the distance value with a tuned threshold. Finally, none of the ML models is perfect; misclassification of an instance is not a problem with respect to the model.

Hi Serengil, I was able to verify the face:

```
{'verified': True, 'distance': 0.22807926891881514, 'threshold': 0.4, 'model': 'Facenet', 'detector_backend': 'retinaface', 'similarity_metric': 'cosine', 'facial_areas': {'img1': {'x': 241, 'y': 76, 'w': 195, 'h': 283, 'left_eye': (386, 192), 'right_eye': (294, 193)}, 'img2': {'x': 558, 'y': 34, 'w': 63, 'h': 83, 'left_eye': (604, 67), 'right_eye': (575, 71)}}, 'time': 10.1}
```

It looks like it does recognize Angelina Jolie here ^^ I also tried the client-side solution, and curiously enough that returned a match:

```
  img_name         embedding                                          target                                             distance
0 angie_group.jpg  [0.10850527882575989, 0.5568691492080688, 0.81...  [-0.6434235572814941, 0.5883399248123169, 0.29...  8.263514
```

Do you know if the server-side solution needs an update to the code? I think maybe the SQL there isn't working, since these other solutions are working fine.
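For reference, the quantity the server-side SQL above computes is the squared Euclidean distance between each stored embedding $s$ and the target embedding $t$, with the cutoff applied before taking the square root:

$$
d^2(s,t) = \sum_{i=1}^{128} (s_i - t_i)^2, \qquad \text{match} \iff d^2 < 100 \iff d < 10
$$

Note that the 0.4 threshold reported by verify applies to cosine distance, a different metric, so the two cutoffs are not directly comparable.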
2025-04-01T06:40:22.457122
2017-10-16T10:40:57
265724086
{ "authors": [ "carlosspohr", "offstate", "serge-rider" ], "license": "apache-2.0", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10601", "repo": "serge-rider/dbeaver", "url": "https://github.com/serge-rider/dbeaver/issues/2346" }
gharchive/issue
Error on Calculate total row count

After a query execution I press the Calculate total row count button and receive the error "Error executing row count" (yes, the query is big and complex :) log.txt

Hi @offstate, I tested it here and had no problem with it. I think your complex query may be breaking some part of DBeaver's parser. Could you post this query?

```sql
WITH rival as (
  SELECT o.product_id as id, p.rgroup_id, pr.id prov_id, prc.price, prc.ts::date
  from rival.price prc
  join rival.offer o on prc.offer_id=o.id
  join rival.provider pr on pr.id=prc.provider_id
  left join product.product_summary p on p.id=o.product_id
  where prc.provider_id in (1,4) --and p.rgroup_id in (2620)
),
our as (
  select gp.id, p.rgroup_id, 0 prov_id, pr.price, null::date as ts
  from goods.product gp
  join rival.price_mx pr on gp.id=pr.id
  left join product.product_summary p on p.id=gp.id
  --where p.rgroup_id in (2620)
),
prods as (
  select * from rival union all select * from our
),
oems AS (
  select distinct pr.id, p.rgroup_id,
    case when genuine is null then null else (array_remove(r.rel, pr.id))[1] end rel_id,
    b.id as brand_id_gen, prov_id, price, pr.ts, null as note
  FROM prods pr
  left join product.product_rel r on r.rel @> array[pr.id] and r.rate>=3 --(in_rate is null or r.rate>=in_rate)
  left JOIN product.product_summary p ON p.id != pr.id and p.id=any(r.rel) and p.rgroup_id=pr.rgroup_id
  left JOIN product.brand b ON b.id = p.brand_id AND genuine = TRUE
),
orv as (
  select o1.id, array_agg(o1.rel_id) rel_ids, max(o1.rgroup_id) rgroup_id, max(o1.brand_id_gen) brand_id_gen
  from (select id, rel_id, rgroup_id, brand_id_gen from oems where prov_id>0 and rel_id is not null) o1
  left join (select id, rel_id, rgroup_id from oems where prov_id=0) o2 on o1.rel_id=o2.rel_id
  where o2.id is null
  group by o1.id
),
oems2 as (
  select distinct our.id, our.rgroup_id, unnest(orv.rel_ids), orv.brand_id_gen, our.prov_id, our.price, our.ts, 'generated'::text
  from orv
  join product.product_rel r on r.rel @> array[orv.id] and rate>=3
  join our on our.id=any(r.rel) and our.id != orv.id and our.rgroup_id=orv.rgroup_id
),
oems3 as (
  select * from oems where rel_id is not null
  union
  select * from oems2
),
uq_oems as (
  select rel_id from (select distinct prov_id, rel_id from oems3) t
  group by rel_id having count(rel_id)=1
),
totals as (
  select rgroup_id, brand_id_gen, count(distinct o.id) as co_cross, count(distinct o.rel_id) as co_gen
  from oems3 o
  group by rgroup_id, brand_id_gen
)
select 1 some_columns
from oems3 o
JOIN product.product_summary ps on ps.id=o.rel_id
JOIN product.product_summary psc on psc.id=o.id
left join uq_oems uq on o.rel_id=uq.rel_id
join totals tot on tot.rgroup_id=o.rgroup_id and tot.brand_id_gen=o.brand_id_gen
group by ps.rgroup_name, ps.brand_name, ps.brand_id, ps.rgroup_id, prov_id, tot.co_gen, tot.co_cross
order by ps.rgroup_id, ps.brand_id
```

Wow. DBeaver uses JSQLParser for query modification. This library doesn't support some database-specific features. I'm afraid I can't fix this issue right now.

You can `select count(*) from (<origin_query>) t` without any parsing.

Yep @offstate, I forgot to post this snippet for making a simple count over any SQL. I'm not sure about all the other supported databases, but this one will work as expected.

Agreed. We'll use a nested select query if direct query transformation fails (note that some databases do not support or limit the functionality of nested selects, so we can't use this approach as the default one).

I got a strange result on a simple query. After I fetched all rows, the row count field showed 3117 rows, but after I pressed the refresh button it showed 65! I got an error for the following query:

```sql
select * from dlvr2.route where info ?| array['quick','optimal','cheap']
```

log.txt

@offstate Was this resolved in 4.2.5? In any case, you can see the actual COUNT query in the query manager view. Is it correct? In regards to query wrapping (which is now used by default by DBeaver): it doesn't work in SQL Server/Sybase if the original query has unnamed columns (constants and expressions without an explicit column name), for instance `SELECT 123`.

I can see the count query and it is correct, but DBeaver cannot execute it. First I executed the original query, next I pressed the "Calculate ..." button and got the error, and then I copied the COUNT query from the log and it ran successfully.

This is because of the question mark in the query. DBeaver thinks it is a parameter binding. You can disable anonymous parameters entirely, or redefine the mark in the SQL Editor preferences.

The option "Anonymous SQL parameters" is not checked, and when I execute the query, the parameter request window doesn't appear. Moreover, the error occurs regardless of which character I set there.

Interesting. I can't reproduce this. What is your database and client driver?

Postgres 9.5, Debian; client on Xubuntu
2025-04-01T06:40:22.458792
2016-11-16T12:23:08
189687880
{ "authors": [ "GiantCrocodile", "serge-rider" ], "license": "apache-2.0", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10602", "repo": "serge-rider/dbeaver", "url": "https://github.com/serge-rider/dbeaver/issues/977" }
gharchive/issue
Add option to show views in ER-diagram As already mentioned in #934, it is not possible to see views in the ER-diagram, because they don't have any relationships and so wouldn't make much sense in an ER-diagram. I just have a need for this now, because I'm using the ER-diagram also to get an overview of the DB (not only of relations), and it would make it easier to get an overview of information_schema. Furthermore, I need to show a complete visual model of my database to someone, which is why I also need my views included. Already implemented. Will be in 3.8
2025-04-01T06:40:22.466072
2018-11-15T23:10:45
381378843
{ "authors": [ "dxm447", "jeanlaroche", "jeffwitz", "serge-sans-paille" ], "license": "bsd-3-clause", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10603", "repo": "serge-sans-paille/pythran", "url": "https://github.com/serge-sans-paille/pythran/issues/1095" }
gharchive/issue
Pythran does not recognise fft2

```python
%%pythran
#pythran export normalized_correlation(float64[:,:],float64[:,:],float64)
#pythran export normalized_correlation(float32[:,:],float32[:,:],float32)
import numpy as np

def normalized_correlation(im1, im2):
    im1_fft = np.fft.fft2(im1)
    im2_fft = np.fft.fft2(im2)
    im_mult = np.multiply(im1_fft, np.conj(im2_fft))
    return np.fft.ifft2(im_mult)
```

Gives the error `PythranSyntaxError: Attribute 'fft2' unknown`.

Indeed, np.fft.ifft2 has not been implemented at this point... rfft and irfft are, though.

@jeanlaroche thanks. Can I use a general complex FFT for 1D arrays?

So far, pythran only has np.fft.rfft and np.fft.irfft, i.e. 1D FFTs with real inputs.

@jeanlaroche I'll happily review your patch for complex number support and/or 2D support :-) You did a great job with irfft and rfft.

Ah ah ah ah! Flattery will get you nowhere! :D

Numpy RFFT is actually good enough for now :) I am attaching two working code snippets for working with 1D FFTs in Pythran, which I have been using in my own project over the past two to three days:

```python
#pythran export pythran_fft1D(float32[])
#pythran export pythran_fft1D(float64[])
#pythran export pythran_fft1D(complex64[])
#pythran export pythran_fft1D(complex128[])
import numpy as np

def pythran_fft1D(input_array):
    number_points = np.int64(len(input_array))
    number_rfft_even = np.int64((0.5 * number_points) + 1)
    number_rfft_odd = np.int64((number_points + 1) / 2)
    if number_points % 2 == 0:
        number_rfft = number_rfft_even
    else:
        number_rfft = number_rfft_odd
    real_part = np.real(input_array)
    imag_part = np.imag(input_array)
    real_rfft = np.fft.rfft(real_part)
    imag_rfft = np.fft.rfft(imag_part)
    output_fft_real = np.zeros(number_points)
    output_fft_imag = np.zeros(number_points)
    output_fft_real[0:number_rfft] = real_rfft
    output_fft_imag[0:number_rfft] = imag_rfft
    conj_real_rfft = np.conjugate(real_rfft)
    conj_imag_rfft = np.conjugate(imag_rfft)
    for ii in range(number_rfft, number_points):
        output_fft_real[ii] = conj_real_rfft[number_points - ii]
        output_fft_imag[ii] = conj_imag_rfft[number_points - ii]
    output_fft = output_fft_real + (1j * output_fft_imag)
    return output_fft
```

and

```python
#pythran export pythran_ifft1D(float32[])
#pythran export pythran_ifft1D(float64[])
#pythran export pythran_ifft1D(complex64[])
#pythran export pythran_ifft1D(complex128[])
import numpy as np

def pythran_ifft1D(input_array):
    number_points = np.int64(len(input_array))
    conj_array = np.conjugate(input_array)
    number_rfft_even = np.int64((0.5 * number_points) + 1)
    number_rfft_odd = np.int64((number_points + 1) / 2)
    if number_points % 2 == 0:
        number_rfft = number_rfft_even
    else:
        number_rfft = number_rfft_odd
    real_part = np.real(conj_array)
    imag_part = np.imag(conj_array)
    real_rfft = np.fft.rfft(real_part)
    imag_rfft = np.fft.rfft(imag_part)
    output_fft_real = np.zeros(number_points)
    output_fft_imag = np.zeros(number_points)
    output_fft_real[0:number_rfft] = real_rfft
    output_fft_imag[0:number_rfft] = imag_rfft
    conj_real_rfft = np.conjugate(real_rfft)
    conj_imag_rfft = np.conjugate(imag_rfft)
    for ii in range(number_rfft, number_points):
        output_fft_real[ii] = conj_real_rfft[number_points - ii]
        output_fft_imag[ii] = conj_imag_rfft[number_points - ii]
    first_fft = output_fft_real + (1j * output_fft_imag)
    second_conj = np.conjugate(first_fft)
    output_ifft = second_conj / number_points
    return output_ifft
```

Both work for real and complex inputs with identical results and times to numpy.fft.fft. If you think it's a good idea I can add the 2D support too and submit it.

This is a good idea! Please submit, even in Python form, as a follow-up to this issue.

This should be fixed with #1134, at least it now compiles o/

For fft2, performing two 1D FFTs normally works well, but making it efficient on big data is why there are a lot of hacks in FFTW to deal differently with the different sizes of matrices or vectors: `FFT_image = fft(fft(Image).T).T`
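The separability identity behind that one-liner: the 2D DFT factors into 1D DFTs applied along each axis, so two passes of a 1D FFT, with a transpose in between to reuse the same row-wise routine, reproduce fft2:

$$
F(u,v) = \sum_{x=0}^{M-1} e^{-2\pi i\, ux/M} \left( \sum_{y=0}^{N-1} f(x,y)\, e^{-2\pi i\, vy/N} \right)
$$

The inner sum is a 1D FFT of each row; the outer sum is then a 1D FFT of each resulting column.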
2025-04-01T06:40:22.485623
2016-10-03T13:07:17
180635852
{ "authors": [ "petrformanek", "pveller", "sergeyshushlyapin" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10604", "repo": "sergeyshushlyapin/Sitecore.FakeDb", "url": "https://github.com/sergeyshushlyapin/Sitecore.FakeDb/issues/146" }
gharchive/issue
Cannot add item to PublishQueue

FakeDb doesn't implement working with the PublishQueue. Calling PublishManager.AddToPublishQueue does just nothing (FakeDb provides just an empty implementation). The same goes for PublishManager.GetPublishQueue. Can you implement working with the PublishQueue? I was trying to implement my own DataProvider which inherited from FakeDataProvider, but I was unable to hook it into Sitecore (the dataProviders section in App.config is ignored and FakeDataProvider is always used). Found in version 1.2.2. Thanks, Petr

I think you will hit a larger wall with testing your own Data Provider with FakeDb until we migrate to a command-less approach to faking. Some of it is explained in #138 (at the bottom of the thread). FakeDb right now intercepts at the database command level, so not many things will reach the data provider chain as you would normally expect in a real Sitecore instance. @sergeyshushlyapin has been working on a Data Provider based implementation that will likely play nicely with the chaining you would expect out of the box, but you will have to be careful not to chain yourself in front of FakeDb's one. That said, can you please provide more details about your publishing queue scenario? What exactly are you trying to test with FakeDb that requires the queue working?

We have a quite large Sitecore deployment with one, let's say, master Sitecore instance and several regional Sitecore instances. On the master instance we regularly read changed items from the PublishQueue and push these items to the regional instances. If a push fails, the item is updated in the PublishQueue to be retried in the next round. We call just PublishManager.AddToPublishQueue and PublishManager.GetPublishQueue. I was able to make it work with the following hack:

```csharp
_db = new Db();
_sourceDb = _db.Database;
var dataProvidersField = _sourceDb.GetType().GetField("_dataProviders", BindingFlags.Instance | BindingFlags.NonPublic);
var dataProvidersCollection = (DataProviderCollection)dataProvidersField.GetValue(_sourceDb);
dataProvidersCollection[0] = new OriFakeDataProvider(((FakeDataProvider)dataProvidersCollection[0]).DataStorage);

public class OriFakeDataProvider : FakeDataProvider
{
    readonly HashSet<ID> _publishingQueue = new HashSet<ID>();

    public OriFakeDataProvider()
    {
    }

    public OriFakeDataProvider(DataStorage dataStorage) : base(dataStorage)
    {
    }

    public override bool AddToPublishQueue(ID itemID, string action, DateTime date, string language, CallContext context)
    {
        _publishingQueue.Add(itemID);
        return true;
    }

    public override IDList GetPublishQueue(DateTime from, DateTime to, CallContext context)
    {
        return IDList.Build(_publishingQueue.ToArray());
    }
}
```

Oh, I see now. So you really just need the queue methods to allow add and get. That's easy :) I am sure it will soon be on the NuGet feed. Stay tuned!

Great! If it would also respect the from and to dates, that would be great.

Is a HashSet suitable for that purpose? Shouldn't it be possible to add the same itemId multiple times?

HashSet is not good for the implementation: Sitecore can store the same itemId in the PublishingQueue multiple times.

Now available as v1.3.0.

Thanks for the update. I tried v1.3.0 today and there is one small bug in the GetPublishQueue method. When the PublishQueue is empty (AddToPublishQueue was not called yet) it throws:

```
System.ArgumentNullException: Value cannot be null.
Parameter name: source
   at System.Linq.Enumerable.Where[TSource](IEnumerable`1 source, Func`2 predicate)
   at Sitecore.FakeDb.Data.DataProviders.FakeDataProvider.GetPublishQueue(DateTime from, DateTime to, CallContext context)
```

this.publishQueue.Value is not initialized yet. Please fix it. Thanks

Thanks @petrformanek. Fixed in v1.3.1.
2025-04-01T06:40:22.496464
2018-04-13T18:45:10
314212030
{ "authors": [ "christophwolff", "serhii-londar" ], "license": "CC0-1.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10605", "repo": "serhii-londar/open-source-mac-os-apps", "url": "https://github.com/serhii-londar/open-source-mac-os-apps/pull/97" }
gharchive/pull-request
Add Hyper to terminal section Project URL: https://github.com/zeit/hyper Category: Terminal Description: Add Hyper to the terminal section. Why it should be included in Awesome macOS open source applications (optional): because it's awesome and fits right in. Checklist: [x] Only one project/change is in this pull request [x] Addition in chronological order (bottom of category) [x] Appropriate language icon(s) added if applicable [x] Has a commit from less than 2 years ago [x] Has a clear README in English @christophwolff Thanks for your contribution! :heart:
2025-04-01T06:40:22.510579
2023-08-23T13:04:29
1863297365
{ "authors": [ "Valerio81", "ckadluba" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10606", "repo": "serilog-mssql/serilog-sinks-mssqlserver", "url": "https://github.com/serilog-mssql/serilog-sinks-mssqlserver/issues/483" }
gharchive/issue
Sinks.MSSqlServer bug with SQL Server port in connection string

Bug Report: when Serilog tries to write into the DB, SQL Server gives me a "login failed" error, although the login data is the same as that used by the application. I have "solved" the problem by deleting the port number (1433) from the connection string. I don't have any SQL Server instances that use a non-default port, so I can't check whether this problem occurs with other port numbers.

Names and versions of all Serilog packages used in the project: Serilog: 3.0.1, Serilog.Sinks.MSSqlServer: 6.3.0, Serilog.Enrichers.EnrichedProperties: 1.0.63, Serilog.Enrichers.Environment: 2.2.0, Serilog.Enrichers.Process: 2.0.2, Serilog.Enrichers.Thread: 3.1.0, Serilog.Exceptions: 8.4.0, Serilog.Extensions.Logging: 7.0.0, Serilog.Settings.Configuration: 7.0.0, Serilog.Sinks.Console: 4.1.0, Serilog.Sinks.EventLog: 3.1.0, Serilog.Sinks.File: 5.0.0. Target framework and operating system: .NET Core 6

appsettings.json:

```json
{
  "ConnectionStrings": {
    "SqlServer": "Server=<IP_ADDRESS>,1433;Database=MyDb;User ID=MyUser;Password=MyPass;MultipleActiveResultSets=true;Max Pool Size=500;Persist Security Info=True;Encrypt=False;TrustServerCertificate=False;"
  },
  "AppSettings": {},
  "Logging": {
    "LogLevel": {
      "Default": "Debug",
      "Microsoft": "Warning",
      "Microsoft.Hosting.Lifetime": "Information"
    }
  },
  "Serilog": {
    "MinimumLevel": "Debug",
    "Enrich": [ "FromLogContext", "WithMachineName", "WithThreadId", "WithExceptionDetails" ],
    "WriteTo": [
      {
        "Name": "MSSqlServer",
        "Args": {
          "connectionString": "Server=<IP_ADDRESS>,1433;Database=MyDb;User ID=MyUser;Password=MyPass;MultipleActiveResultSets=true;Max Pool Size=500;Persist Security Info=True;Encrypt=False;TrustServerCertificate=False;",
          "restrictedToMinimumLevel": "Information",
          "sinkOptionsSection": {
            "tableName": "Serilog",
            "schemaName": "dbo",
            "autoCreateSqlTable": false,
            "batchPostingLimit": 100
          },
          "columnOptionsSection": {
            "removeStandardColumns": [ "MessageTemplate" ],
            "additionalColumns": [
              {
                "ColumnName": "ApplicationName",
                "DataType": "nvarchar",
                "DataLength": 200,
                "AllowNull": true
              }
            ]
          }
        }
      }
    ],
    "Properties": {
      "ApplicationName": "SampleName"
    }
  },
  "AllowedHosts": "*"
}
```

Program.cs [Main]:

```csharp
Configuration = new ConfigurationBuilder()
    .SetBasePath(Directory.GetCurrentDirectory())
    .AddJsonFile("appsettings.json", optional: false, reloadOnChange: true)
    .AddEnvironmentVariables()
    .Build();

Log.Logger = new LoggerConfiguration()
    .ReadFrom.Configuration(Configuration)
    .Enrich.WithProperty("ApplicationName", AssemblyName)
    .CreateLogger();
```

Hello @Valerio81! I tested with the SQL Server on my local machine using the program in samples/WorkerServiceDemo and had a similar issue. By default the program uses the connection string (appsettings.json):

```
Server=localhost;Database=LogTest;Integrated Security=SSPI;Encrypt=False;
```

This works fine. I checked on which port my SQL Server was listening and found out it is 1434. With the changed connection string

```
Server=localhost,1434;Database=LogTest;Integrated Security=SSPI;Encrypt=False;
```

an error occurred. The following exception was thrown:

```
A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: TCP Provider, error: 0 - No connection could be made because the target machine actively refused it.)
```

Then I changed the connection string to the following but still received the same error:

```
Server=<IP_ADDRESS>;Database=LogTest;Integrated Security=SSPI;Encrypt=False;
```

Then I tried the same with the SQLCMD command-line utility, with the same result. As you can see, SQLCMD can also connect when I specify only localhost, but not if I specify localhost,1434. Therefore I think this effect is related to the configuration of my local SQL Server instance and not to the code of the MSSQL sink. Here is a SQL Server connection troubleshooting guide from Microsoft which mentions this error: https://learn.microsoft.com/en-us/troubleshoot/sql/database-engine/connect/network-related-or-instance-specific-error-occurred-while-establishing-connection#step-2-verify-that-the-sql-server-browser-service-is-running

I'm closing this issue since it is most definitely not a bug in our code.
2025-04-01T06:40:22.524446
2018-05-23T16:27:23
325782601
{ "authors": [ "SimonKoehl", "inyono" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10607", "repo": "serlo-org/athene2", "url": "https://github.com/serlo-org/athene2/issues/690" }
gharchive/issue
exclude deleted elements from sorting E.g. for this course: https://en.serlo.org/43357/welcome there are several deleted course pages on the sorting page: https://en.serlo.org/entity/link/order/43238/link IMO they don't need to be there. General information: you get to the specified page by using the settings icon in any course and going to Whole Course > Order children, where you can change the order of the children (i.e. course pages in this case). To approach this: the template for the sorting page is defined in https://github.com/serlo-org/athene2/blob/master/src/module/Ui/templates/entity/link/order.twig (i.e. in src/module/Ui/templates and then mostly following the stuff in the url ;)) Currently, we are iterating over all children: https://github.com/serlo-org/athene2/blob/master/src/module/Ui/templates/entity/link/order.twig#L20 We can either filter the children in the template directly, or (imho better) in its controller: https://github.com/serlo-org/athene2/blob/master/src/module/Entity/src/Entity/Controller/LinkController.php#L83 Closed by #752
2025-04-01T06:40:22.530963
2021-06-17T18:43:50
924229954
{ "authors": [ "hugotiburtino" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10608", "repo": "serlo/infrastructure-modules-api", "url": "https://github.com/serlo/infrastructure-modules-api/pull/7" }
gharchive/pull-request
Draft: Add cache worker as cronjob After adding the cache worker service to the API server, this resolves #3. Depends on https://github.com/serlo/api.serlo.org/issues/188
2025-04-01T06:40:22.544862
2017-03-28T06:33:21
217460899
{ "authors": [ "annaisabelleang", "bchr02", "serratus" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10609", "repo": "serratus/quaggaJS", "url": "https://github.com/serratus/quaggaJS/issues/179" }
gharchive/issue
Unable to detect back camera

Tested on a Samsung A3 (2016); it is now unable to detect the back camera. It was working just fine two weeks ago, but it's kaput now. It only manages to detect the front-facing camera. Downloaded and tested with the updated files and still the same results. Send help. SOS.

Unfortunately, I don't have a Samsung A3 at hand. Could you please try to connect your phone to the Chrome Developer Tools, open the console, and execute the following code through the console on your phone:

```js
navigator.mediaDevices.enumerateDevices()
  .then(devices => devices.map(device => device.kind))
  .then(kind => console.log(kind))
  .catch(err => console.error(err))
```

Make sure you are on an https site (e.g. https://www.google.com). My output looks like this:

```
["audioinput", "videoinput", "videoinput", "audiooutput"]
```

Which version of Chrome do you have installed?

Thank you for responding. I have no idea what sorcery this is, because it is working again. I did not do anything, so maybe it is unstable? I am using Chrome 57. On another note, it is not able to default to using the back camera despite `facingMode` being set to `environment`.

Following the advice here fixed the issue for me. Basically you need to add the WebRTC adapter, either using bower or `<script src="https://webrtc.github.io/adapter/adapter-latest.js"></script>`
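For completeness, a minimal sketch of requesting the back camera directly, assuming a browser with `mediaDevices` support (or the WebRTC adapter mentioned above shimming it); the `exact` constraint fails loudly instead of silently falling back to the front camera:

```ts
// Sketch: request the rear camera explicitly. With `exact`, the call
// rejects (OverconstrainedError) if no environment-facing camera is
// available, rather than silently handing back the front camera.
async function openBackCamera(): Promise<MediaStream> {
  return navigator.mediaDevices.getUserMedia({
    video: { facingMode: { exact: 'environment' } },
  });
}

openBackCamera()
  .then(stream => console.log('got stream', stream.id))
  .catch(err => console.error('no back camera:', err));
```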
2025-04-01T06:40:22.589513
2022-02-22T10:03:37
1146698950
{ "authors": [ "TerenzChan", "fwang" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10611", "repo": "serverless-stack/serverless-stack", "url": "https://github.com/serverless-stack/serverless-stack/issues/1399" }
gharchive/issue
LiveLambda: Large WebSocket response throws IllegalLocationConstraintException in new regions

S3 is used to process large WebSocket responses (e.g. in ws.ts), but the region option is not explicitly set (`const s3 = new S3();`). Therefore an IllegalLocationConstraintException is thrown, since an explicit region is required for those new regions. Please add an option to allow setting the region, or pass the region of the stack to the handler.

Hi @TerenzChan, I'm taking a look at this. `const s3 = new S3();` Are you referencing the stub function here: https://github.com/serverless-stack/serverless-stack/blob/master/packages/resources/assets/stub/index.js#L17? Btw, which region did you try with?

Seems like that line exists in all the files that use S3. I found it in the stub function you posted and in https://github.com/serverless-stack/serverless-stack/blob/master/packages/core/src/runtime/ws.ts#L7. I used ap-east-1. It looks like the global endpoint of S3 is used if the region option is not set, and the request is not redirected to the correct region automatically for buckets in opt-in regions, which were launched after March 20, 2019.

Hi @TerenzChan, this is fixed in v0.66.1. Give it a try and let me know if it works for you.

Hi @fwang, it works now. Thanks!
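For illustration, a minimal sketch of setting the region explicitly on the client, assuming the AWS SDK v2 `S3` client used in ws.ts; the hard-coded region here is illustrative only, and the actual fix would take it from the stack's configuration:

```ts
import { S3 } from 'aws-sdk';

// Passing the region explicitly makes the client call the regional
// endpoint directly. Opt-in regions such as ap-east-1 need this,
// because the global S3 endpoint does not redirect to them.
const s3 = new S3({ region: 'ap-east-1' });
```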
2025-04-01T06:40:22.655412
2024-07-08T12:05:17
2395451121
{ "authors": [ "Kiruel", "SandPod" ], "license": "BSD-3-Clause", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10612", "repo": "serverpod/serverpod", "url": "https://github.com/serverpod/serverpod/pull/2449" }
gharchive/pull-request
feat: postgis extension Fixes: https://github.com/serverpod/serverpod/issues/2439

- [ ] Base query set to accommodate common scenarios (method names in the implementation might differ):
  - [ ] Points ordered by distance to input (this, together with a limit, should give us the k nearest points).
  - [ ] Points strictly within a certain distance.
  - [ ] Points that intersect geometry.
- [ ] Index creation: to ensure fast querying, a smart indexing strategy, possibly involving spatial indexes, might be required.
- [ ] Migration support to add and remove columns.
- [ ] Serialization/deserialization of data types to support database integration and communication with the client.
- [ ] Compatibility with unsafe queries to support all unsupported scenarios.
- [ ] Complete documentation of the feature.
- [ ] Tests for all additions associated with the feature.

Pre-launch Checklist

- [x] I read the Contribute page and followed the process outlined there for submitting PRs.
- [x] This update contains only one single feature or bug fix and nothing else. (If you are submitting multiple fixes, please make multiple PRs.)
- [x] I read and followed the Dart Style Guide and formatted the code with dart format.
- [x] I listed at least one issue that this PR fixes in the description above.
- [ ] I updated/added relevant documentation (doc comments with ///), and made sure that the documentation follows the same style as other Serverpod documentation. I checked spelling and grammar.
- [ ] I added new tests to check the change I am making.
- [ ] All existing and new tests are passing.
- [ ] Any breaking changes are documented below.

If you need help, consider asking for advice on the discussion board.

@SandPod I just added all the code for the column, but I'm a bit stuck on the YAML support integration. When you said "implement the validation" inside this file: https://github.com/serverpod/serverpod/blob/main/tools/serverpod_cli/lib/src/analyzer/models/yaml_definitions/class_yaml_definition.dart what do I need to add? Because I saw the "value" of the field is handled here: https://github.com/serverpod/serverpod/blob/9f9f04fad5c628b148f4d02ad289ed079f3fc886/tools/serverpod_cli/lib/src/analyzer/models/validation/restrictions.dart#L1156 But then https://github.com/serverpod/serverpod/blob/9f9f04fad5c628b148f4d02ad289ed079f3fc886/tools/serverpod_cli/lib/src/analyzer/models/validation/restrictions.dart#L1178 is null, so I get an error message. The parsedModels contain GeographyPoint, not geographyPoint, so I suppose I need to add geographyPoint somewhere to pass the restriction? (I'm stuck here.) Should I create a new

```dart
ValidateNode(
  Keyword.geographyPoint,
),
```

inside the nested keyword fields?

@Kiruel Maybe it helps if I expand a bit on how our analyzer works. Basically, three methods are used in the model_analyzer.dart file. First we run a best-effort parsing of the file by calling the extractModelDefinition(...) method; this tries to create a SerializableModelDefinition out of the passed-in YAML file. After that we try to resolve any dependencies that might exist between models; for example, you have two models, where one contains a field whose type is the other model. This is done in the resolveModelDependencies(...) method. After that, we run our validation, which is done by calling the validateYamlDefinition(...) method. These methods need to be run in this specific order, which is orchestrated in the stateful_analyzer.dart file. An example of this can be seen by inspecting the validateAll(...) method in that file.

So, in order for the type validation to work, you first need to make sure that the model parser can parse the new field and that the type is recognized and differentiated from any possible user-defined class. The place that you linked for _isValidType uses a whitelist for the Dart types that we support, defined here: https://github.com/serverpod/serverpod/blob/9f9f04fad5c628b148f4d02ad289ed079f3fc886/tools/serverpod_cli/lib/src/analyzer/models/validation/restrictions.dart#L1135 I'm guessing you are getting stuck on the validation because the new type is not part of the whitelist. Also, we have fantastic unit test coverage for this part of the code. If you check the tests we have created, it should be possible to TDD your way through the validation code.

Some progress: I have passed the validation problem, thanks to the whitelist, and then I successfully created the GeographyPoint in the protocol.dart, but now I'm a bit blocked on the dartType of GeographyPoint. As you can see in the generated Dart file:

```dart
_i2.ColumnDefinition(
  name: 'point',
  columnType: _i2.ColumnType.json,
  isNullable: false,
  dartType: 'protocol:GeographyPoint',
),
```

But it should be `_i2.ColumnType.geographyPoint`, right? I don't see where I need to make the change in the code. If you can tell me where I can find the tests concerning this part, it would help. More context, the code generated:

```dart
factory Channel({
  int? id,
  required String name,
  required String channel,
  required _i2.GeographyPoint point,
}) = _ChannelImpl;

factory Channel.fromJson(Map<String, dynamic> jsonSerialization) {
  return Channel(
    id: jsonSerialization['id'] as int?,
    name: jsonSerialization['name'] as String,
    channel: jsonSerialization['channel'] as String,
    point: _i2.GeographyPoint.fromJson(
        (jsonSerialization['point'] as Map<String, dynamic>)),
  );
}
```

The errors:

```
Undefined class 'GeographyPoint'. dart (undefined_class) [29, 14]
Try changing the name to the name of an existing class, or creating a class with the name 'GeographyPoint'.
The name 'GeographyPoint' is being referenced through the prefix '_i2', but it isn't defined in any of the libraries imported using that prefix. dart (undefined_prefixed_name) [37, 18]
Try correcting the prefix or importing the library that defines 'GeographyPoint'.
```

Happy to hear the progress! There are three issues here that are connected to GeographyPoint being defined as a Serverpod model. Having it as a Serverpod model might have some benefits (especially when it comes to serialization), but there is unfortunately no way to get around all the problems. I would recommend defining GeographyPoint as a concrete class that can be imported directly by both the server and the client by default, such as Uuid or ByteData, which are exported from the Serverpod serialization package: packages/serverpod_serialization/lib/serverpod_serialization.dart. But for the sake of sharing knowledge, I will explain how to get around two of the three issues. An unrelated issue I noticed is that if you look closely at the ColumnDefinition, you can see that the columnType is json while it should be your newly defined geographyPoint. This is something that will have to be fixed.

Getting around two out of three issues: first, you need to change the definition in channel.spy.yaml so that we reference the GeographyPoint defined in the Serverpod package:

```yaml
### Represents a chat channel.
class: Channel
table: channel
fields:
  ### The name of the channel.
  name: String
  ### The id of the channel.
  channel: String
  point: serverpod:GeographyPoint
```

Next we need to export this model in the server package. We are very restrictive with what we export here, but you will need to add `export 'package:serverpod/src/generated/database/geography_point.dart';` to the packages/serverpod/lib/serverpod.dart file. This solves it on the server side. But now we run into a real problem: the generated client for the Serverpod package is actually the Serverpod service client package, and these are not connected with the import that is generated. If you only allow the point as a server-side field, all should be green now, but you won't be able to send the points to the client.
2025-04-01T06:40:22.674292
2021-09-15T09:45:21
996875829
{ "authors": [ "jbyers19", "mudit-01" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10616", "repo": "servicemeshinterface/smi-spec", "url": "https://github.com/servicemeshinterface/smi-spec/issues/235" }
gharchive/issue
No warning if incorrect yaml is applied for Traffic Split

Describe the proposal: for traffic split, if weights are distributed in an improper way, e.g. the weights don't sum to 100 or sum to more than 100, we don't get an error message that the YAML is incorrect.

Scope:
- [ ] New specification
- [ ] Traffic Access Control
- [ ] Traffic Specs
- [ ] Traffic Metrics
- [X] Traffic Split

Possible use cases: apply the YAML with irregular weights like 1000, -200, etc.; the YAML will be applied successfully without error. This should throw an error message.

Hi, @mudit-01. This was discussed on the SMI community call today, and the consensus was that this proposal is outside the scope of this spec. Validation beyond that enforced by the OpenAPI schema in the TrafficSplit CRD is left to those implementing this spec. The weights are not percentages, so there should be no issue if the weights do not add up to 100.
2025-04-01T06:40:22.820971
2018-02-21T03:45:53
298832562
{ "authors": [ "GenDeathrow", "MokahTGS", "tyler489" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10617", "repo": "setycz/ChickensMod", "url": "https://github.com/setycz/ChickensMod/issues/117" }
gharchive/issue
[1.12.2] The return of Feathergeddon - Baby Chickens spewing feathers!

Issue Description: hatching a baby chicken causes it to spew feathers at an insane rate, leading to feathers everywhere. This bug has happened before and gone away on its own, leading me to suspect a mod interaction gone wrong. Scratch head and set up void pipes... There have been many reports of this, and it would be helpful if someone knew where the old issues were, to see if there is a correlation. Would love to know if others have seen this. This issue was not there in previous versions of our mod collection (not a public pack) but just started recently. Big things that, for some reason, my gut tells me might be involved: we added Buildcraft, which is what we had in 1.10 when we last saw this issue. This is just my gut theory, so no real evidence. I will do some testing and report back.

Affected Versions (Do not use "latest"): Chickens: 6.0.3, More Chickens: 3.1.0, Hatchery: 2.1.2, Roost: 1.2.0, Minecraft: 1.12.2, Forge: 2611

Initial testing of removing Buildcraft: newly hatched baby chickens did not spew feathers when Buildcraft was not present. Putting Buildcraft back into the mix caused the newly hatched baby chickens to spew feathers. Using the Hatchery nest to hatch a chicken produces a baby chicken that does not spew feathers. Both of the above chickens are Lapis Blue Chickens with 1,1,1 stats. Bred chickens with stats, or other chickens, seem to hatch spewing feathers, however.

How fast do they spew feathers, at what rate?

Very fast... they are pouring out. We have an advanced item collector setup and a trash can, but if we have multiple babies it will overload the system. #89

Is Botania installed? Or Quark?

```java
public ItemStack createDropStack() {
    Item item = rand.nextInt(3) > 0 ? Items.EGG : Items.FEATHER;
    return new ItemStack(item, 1);
}
```

https://github.com/timrwood/roost/blob/5db487ae5c0cc54e78ef7246c45ac570f0bb6901/src/main/java/com/timwoodcreates/roost/data/DataChickenVanilla.java

Both Botania and Quark have that option turned off. Always have. Actually, Botania removed that from the 1.12.2 version a long time ago.

This might be preliminary, but Terraqueous has an option for feather drops from chickens that we turned off. Feathers seem to have stopped spewing. I'm reporting this to them to see if they have something in their code that is causing issues. Hopefully we can narrow it down with this.

@MokahTGS Thanks for doing some heavy testing with this. We have been running all night with no feather spewing. This might be it. It would be nice if someone else could confirm this with another setup.
2025-04-01T06:40:22.831450
2018-05-04T23:21:11
320456560
{ "authors": [ "doffen", "sevagh" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10618", "repo": "sevagh/pitch-detection", "url": "https://github.com/sevagh/pitch-detection/issues/32" }
gharchive/issue
Portable version of MPM Could you make a portable version of MPM that is pure C/C++ without linking to an FFT library? Maybe using KissFFT (https://sourceforge.net/projects/kissfft/files/kissfft/v1_3_0), or using time-domain-based autocorrelation instead of FFT (not sure which approach would execute the autocorrelation faster)? https://github.com/sevagh/pitch-detection/blob/master/src/mpm.cpp#L55 Time-domain autocorrelation is slower than using KissFFT, but it's the easiest solution for now. I don't have the desire to play with Yet Another FFT Library.
2025-04-01T06:40:22.837867
2023-06-25T08:29:07
1773137312
{ "authors": [ "mensa84" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10619", "repo": "seydx/homebridge-bravia-tvos", "url": "https://github.com/seydx/homebridge-bravia-tvos/issues/211" }
gharchive/issue
Error: connect EHOSTUNREACH

Hello, thanks for this plugin; it is working fine with my KD65-AF8 in the living room. But with the older KD in the bedroom, I get the log entries below every few seconds when it is powered off, although I have configured it as an "old" device correctly (I think). When it's powered on, everything is fine. Is there any way I can fix this? Both are connected via WiFi.

```
[6/25/2023, 10:25:30 AM] [BraviaTVOS] Error: Error: connect EHOSTUNREACH <IP_ADDRESS>:80
    at ServiceProtocol.describe (/usr/lib/node_modules/homebridge-bravia-tvos/node_modules/@seydx/bravia/lib/service.js:68:15)
    at processTicksAndRejections (node:internal/process/task_queues:95:5)
    at /usr/lib/node_modules/homebridge-bravia-tvos/node_modules/@seydx/bravia/lib/bravia.js:257:22
    at async Promise.all (index 0)
    at Bravia.describe (/usr/lib/node_modules/homebridge-bravia-tvos/node_modules/@seydx/bravia/lib/bravia.js:255:27)
    at Bravia.exec (/usr/lib/node_modules/homebridge-bravia-tvos/node_modules/@seydx/bravia/lib/bravia.js:289:28)
    at Handler.getTelevisionState (/usr/lib/node_modules/homebridge-bravia-tvos/src/accessories/accessory.handler.js:18:24)
    at Handler.poll (/usr/lib/node_modules/homebridge-bravia-tvos/src/accessories/accessory.handler.js:559:7)
```

Config (as posted, truncated):

```json
"name": "BraviaTVOS",
"platform": "BraviaOSPlatform",
"debug": true,
"warn": true,
"error": true,
"extendedError": true,
"tvs": [
  {
    "active": true,
    "name": "Wohnzimmer TV",
    "ip": "<IP_ADDRESS>",
    "port": 80,
    "psk": "03145154",
    "manufacturer": "Sony",
    "model": "Bravia",
    "serialNumber": "00000000",
    "refreshInputs": false,
    "wol": false,
    "oldModel": false,
    "polling": 10,
    "sheduledRefresh": 12,
    "inputs": [
      { "name": "Apple TV", "identifier": "Wohnzimmer App", "source": "cec" },
      { "name": "PlayStation 5", "identifier": "PlayStation 5", "source": "cec" }
    ],
    "displayOrder": [ "inputs", "apps", "channels", "commands", "macros" ],
    "speaker": { "active": false, "output": "speaker", "increaseBy": 1, "reduceBy": 1, "accType": "lightbulb" },
    "commands": [],
    "macros": [],
    "remote": []
  },
  {
    "active": true,
    "name": "Schlafzimmer TV",
    "ip": "<IP_ADDRESS>",
    "mac": "14:2d:27:0b:de:33",
    "port": 80,
    "psk": "03145154",
    "manufacturer": "Sony",
    "model": "Bravia",
    "serialNumber": "00000000",
    "refreshInputs": false,
    "wol": true,
    "oldModel": true,
    "polling": 10,
    "sheduledRefresh": 12,
    "displayOrder": [ "inputs", "apps", "channels", "commands", "macros" ],
    "speaker": { "active": false, "output": "speaker", "increaseBy": 1, "reduceBy": 1, "accType": "lightbulb" },
    "channels": [],
    "commands": [],
    "macros": [],
    "remote": []
  }
```

Plugin not maintained anymore???
2022-02-05T01:05:55
1124755465
{ "authors": [ "andycylam", "ideabucket", "rank111" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10620", "repo": "seydx/homebridge-philipsair-platform", "url": "https://github.com/seydx/homebridge-philipsair-platform/issues/40" }
gharchive/issue
[Multiple BUGs] - Not all speed options available [Model Phillips AC4236/10] Overview - The slider problem The device AC4236/10 has the following speed options (don't ask me why): Speed options day mode: Automatic mode Speed option 1 (~ at 15% of full speed) Speed option 2 (~ at 40% of full speed) Speed option 3 | turbo mode (full speed) Speed options night mode: Silent mode Allergic sleep mode The "problem(s)" [ ] In the Homekit accessory just two speed options (out of 3, respectively 4 without night mode) are available via the fan speed slider. Speed option 1 & 2 can be controlled via the slider. However, at 100% fan speed (Homekit) the air purifier only works at speed option 2. [ ] Secondly, automatic mode cannot be started via the manual/automatic switch either. By using this switch nothing happens to the settings of the air purifier. [ ] As mentioned in previous bug reports Pre filter's & Active carbon filter's "Filter Life Level" exceeds the maximum level of 100, I don't personally care about this, but now that I'm here... [ ] I don't know if it's intended that only pm2.5 is shown in the Homekit air quality badged, maybe this is something to look into later. Again it's not an important feature I guess, but data is data 💯 Expected behavior Since Philips air purifiers have different speed options in older and newer models this may be the cause. At 80% or at least over 51% the air purifier should work at maximum speed. The automatic mode should be working via manual/automatic switch. The solution Activating debbuger mode shows the following API output of my device with Homekit device slider at 100%: [DEBUG] Philips: {"name": "XXXX", "type": "AC4236", "modelid": "AC4236/14", "swversion": "Ms4406", "language": "EN", "DeviceVersion": "4.0.6", "range": "MarteHE", "Runtime": 7517029, "WifiVersion"<EMAIL_ADDRESS>"ProductId": "XXXXX", "DeviceId": "XXXXX", "StatusType": "control", "ConnectType": "Online", "om": "2", "pwr": "1", "cl": false, "aqil": 100, "uil": "1", "uaset": "P", "mode": "M", "pm25": 1, "iaql": 1, "aqit": 7, "tvoc": 1, "ddp": "1", "rddp": "1", "err": 0, "fltt1": "A3", "fltt2": "none", "fltsts0": 662, "fltsts1": 4800, "fltsts2": 65535, "filna": "AC3036", "filid": "XXXXX", "flttotal0": 720, "flttotal1": 4800, "flttotal2": 65535} At least for my device these are the correct strings: Turbo mode (day): {"name": "XXXX", ...., "om": "t", "pwr": "1", "cl": false, "aqil": 100, "uil": "1", "uaset": "P", "mode": "T", ...} "om": "t", "mode": "T" Automatic mode (day): {"name": "XXXX", ...., "om": "s", "pwr": "1", "cl": false, "aqil": 100, "uil": "1", "uaset": "P", "mode": "AG", ....} "om": "s", "mode": "AG" Silent mode (night) {"name": "XXXX", ...., "om": "s", "pwr": "1", "cl": false, "aqil": 0, "uil": "0", "uaset": "P", "mode": "S", ....} "om": "s", "mode": "S" Allergic sleep mode (night) {"name": "XXXX", ...., "om": "as", "pwr": "1", "cl": false, "aqil": 0, "uil": "0", "uaset": "P", "mode": "AS", ....} "om": "as", "mode": "AS" Air quality A further air quality value is "tvoc" (device has no temperature or humidity sensors). Filter Life Level As with everyone else, the problem with fltsts[0-2]life divisor arises for me. The solution has already been presented with flttotal[0-2]... My device does not have any second filter. So the assumption of @ideabucket could be correct. Environment Node.js Version: v12.14.1 iOS Version: v15.2.1 Conclusion Please don't get me wrong, your plugin is great and super responsive. Thanks for all your kick-ass work in this project. 
👍 🥇 Maybe you can implement these small improvements in your plugin, e.g. by getting "type" or "modelid". Thanks a lot!
I've been trying to figure out how to model all the possible config states of recent purifier models using HomeKit's rather limited set of characteristics off and on for a while now, so I thought I'd chime in and braindump what I have so far. As best I can break it down, there are four different characteristics that need to be modeled:
Mode: Day or Night
Auto speed: On or Off (only meaningful when Mode == Day)
Fan speed: 1, 2, Turbo (only meaningful when Mode == Day)
Allergy Sleep mode: On or Off (only meaningful when Mode == Night).
One option would be to model all five of the speed options as percentages on the RotationSpeed characteristic:
20%: Sleep
40%: Allergy Sleep
60%: Day 1
80%: Day 2
100%: Day Turbo
Then you just need a TargetAirPurifierState characteristic to differentiate between Auto and Manual. However, I think this is a bad idea; first, it's brittle (what happens if there's a new model with more than three Day speeds?), and second, there's no way of communicating the speed scale in the Home app UI, since there's no way I'm aware of to give RotationSpeed levels names.
IMO a better approach would be something similar to what's done in lukasroegner/homebridge-dyson-pure-cool, and expose extra switch accessories to control night mode, so you'd have:
Auto or Manual (TargetAirPurifierState characteristic)
Night mode (Switch service)
Allergy Sleep in Night Mode (Switch service)
Fan speed (RotationSpeed characteristic with three steps: 33%, 66%, and 100%)
This is what I plan to try to put together as a PR if I get the chance, but I'd welcome somebody else stealing the idea and doing all the work instead…
My om is set to a, and this plugin always says: This plugin generated a warning from the characteristic 'Rotation Speed': characteristic value expected valid finite number and received "NaN" (number). See https://homebridge.io/w/JtMGR for more info.
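To make the three-step RotationSpeed idea concrete, here is a minimal sketch (a hypothetical helper, not the plugin's actual code) of mapping slider steps to the om/mode command strings observed in the debug output above. The "om": "1" value for speed 1 is an assumption by analogy with the observed "om": "2":

```ts
// Hypothetical mapping from a 3-step HomeKit RotationSpeed value to the
// om/mode fields seen in the AC4236 debug output. Illustrative sketch only.
interface PhilipsSpeedCommand {
  om: string;   // fan speed field ("2" and "t" observed above)
  mode: string; // operating mode field ("M" and "T" observed above)
}

function rotationSpeedToCommand(percent: number): PhilipsSpeedCommand {
  if (percent <= 33) return { om: '1', mode: 'M' }; // speed 1 (om value assumed)
  if (percent <= 66) return { om: '2', mode: 'M' }; // speed 2 (observed)
  return { om: 't', mode: 'T' };                    // turbo (observed)
}
```

Night and automatic modes would then map to the separate switch accessories ("om": "s"/"as" with modes "S", "AS", and "AG" as listed above) rather than to slider positions.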
2025-04-01T06:40:22.874700
2020-12-27T23:07:03
775124513
{ "authors": [ "bbarker" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10621", "repo": "sfackler/fallible-streaming-iterator", "url": "https://github.com/sfackler/fallible-streaming-iterator/issues/4" }
gharchive/issue
Implement fold* methods from Iterator? Hello, just curious: would it be possible to implement these? I can give it a go if you prefer. Or maybe try_fold?
2025-04-01T06:40:22.886788
2021-10-11T01:14:01
1022153100
{ "authors": [ "sfackler" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10622", "repo": "sfackler/serde-smile", "url": "https://github.com/sfackler/serde-smile/issues/1" }
gharchive/issue
Add BigInteger and BigDecimal types These types aren't natively supported by Serde, but we can use the same approach used by crates like serde_cbor or toml and define "magic" types that can be serialized to and from the native Smile types. BigInteger is implemented in e71ae9cffdba150c4465485cfcbbffd71dc39cbc.
2025-04-01T06:40:22.900620
2015-03-30T05:13:32
65136951
{ "authors": [ "TangMonk", "mshibuya" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10623", "repo": "sferik/rails_admin", "url": "https://github.com/sferik/rails_admin/issues/2232" }
gharchive/issue
table list page css class name conflict
I have a model called ad and have 1 record in the database, but it does not show on the list page. Notice the count. If I change the tr tag class name from ad_row to whatever_row, it appears.
Some kind of an ad-blocker or user agent stylesheet might be affecting this. Try turning them off and see if it works.
@mshibuya thanks
2025-04-01T06:40:22.914689
2024-02-28T08:09:36
2158357085
{ "authors": [ "BabyChouSr", "merrymercy", "zhyncs" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10624", "repo": "sgl-project/sglang", "url": "https://github.com/sgl-project/sglang/pull/244" }
gharchive/pull-request
[WIP] Add Qwen-VL
Add Qwen-VL model to SGLang. Rough roadmap:
[ ] Write out classes and test model loading
[ ] Test out CLIPVisionModel forward and cross-attention forward
[ ] Test entire LVM output
@BabyChouSr Any progress on this? Is the model ready?
@BabyChouSr Hi, could you share the latest status? When is this PR expected to be ready?
Due to the lack of response, I'm closing this work-in-progress PR for now. It can be reopened later if needed. Thank you!
2025-04-01T06:40:22.968252
2024-02-29T23:13:57
2162252854
{ "authors": [ "scala-steward" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10625", "repo": "shadaj/slinky", "url": "https://github.com/shadaj/slinky/pull/700" }
gharchive/pull-request
Update scala-compiler, scala-library, ... to 2.13.13 About this PR 📦 Updates org.scala-lang:scala-compiler org.scala-lang:scala-library org.scala-lang:scala-reflect from 2.13.10 to 2.13.13 📜 GitHub Release Notes - Version Diff Usage ✅ Please merge! I'll automatically update this PR to resolve conflicts as long as you don't change it yourself. If you'd like to skip this version, you can just close this PR. If you have any feedback, just mention me in the comments below. Configure Scala Steward for your repository with a .scala-steward.conf file. Have a fantastic day writing Scala! ⚙ Adjust future updates Add this to your .scala-steward.conf file to ignore future updates of this dependency: updates.ignore = [ { groupId = "org.scala-lang" } ] Or, add this to slow down future updates of this dependency: dependencyOverrides = [{ pullRequests = { frequency = "30 days" }, dependency = { groupId = "org.scala-lang" } }] labels: library-update, early-semver-patch, semver-spec-patch, commit-count:1 Superseded by #710.
2025-04-01T06:40:22.974336
2023-03-25T20:24:31
1640650025
{ "authors": [ "brrock", "imaprototype", "jonathanquaade", "leonardomjq", "muke5hy" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10626", "repo": "shadcn-ui/taxonomy", "url": "https://github.com/shadcn-ui/taxonomy/issues/73" }
gharchive/issue
Error: Invalid Tailwind CSS classnames order tailwindcss/classnames-order
Failed to compile.
01:38:16.130 |
01:38:16.130 | ./components/card.tsx
01:38:16.130 | 27:14 Error: Invalid Tailwind CSS classnames order tailwindcss/classnames-order
01:38:16.130 |
01:38:16.130 | ./components/ui/command.tsx
01:38:16.131 | 32:18 Error: Invalid Tailwind CSS classnames order tailwindcss/classnames-order
01:38:16.131 |
01:38:16.131 | info - Need to disable some ESLint rules? Learn more here: https://nextjs.org/docs/basic-features/eslint#disabling-rules
01:38:16.248 | error Command failed with exit code 1.
01:38:16.249 | info Visit https://yarnpkg.com/en/docs/cli/run for documentation about this command.
01:38:16.279 | Error: Command "yarn run build" exited with 1
01:38:16.553 | BUILD_UTILS_SPAWN_1: Command "yarn run build" exited with 1
You can use the eslint-plugin-tailwindcss to automatically fix that classnames order error. @shadcn already has the plugin installed.
Just modify the lint script in package.json like this:
"scripts": { "lint": "next lint --fix", },
And then run pnpm run lint. That should fix the classname ordering.
Any solution to this?
The problem is indeed as @Manubi commented. You can also add the following line to your package.json: "lint:fix": "next lint --fix" and then execute npm run lint:fix.
/close
2025-04-01T06:40:22.975945
2023-05-25T15:03:16
1726003494
{ "authors": [ "breadadams" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10627", "repo": "shadcn-ui/ui", "url": "https://github.com/shadcn-ui/ui/pull/454" }
gharchive/pull-request
fix: format:write script
Whilst running the format:write script I noticed the following error:
ERROR run failed: error preparing engine: Could not find the following tasks in project: format
Turbo error: error preparing engine: Could not find the following tasks in project: format
ELIFECYCLE Command failed with exit code 1.
The turborepo task name wasn't matching (format !== format:write).
Already resolved in latest version
2025-04-01T06:40:22.977686
2024-03-22T00:55:51
2201472751
{ "authors": [ "csyonghe", "jkwak-work" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10628", "repo": "shader-slang/slang", "url": "https://github.com/shader-slang/slang/issues/3813" }
gharchive/issue
SLANG_COMPILE_FLAG_NO_MANGLING doesn't work as expected
When SLANG_COMPILE_FLAG_NO_MANGLING is used, variable names are expected to have no mangling, but they appear to be mangled.
For now, when the option is set, let's leave local variables mangled and only keep the original name for:
global variables
struct fields
synthesized cbuffer struct types
2025-04-01T06:40:23.022149
2016-02-09T07:20:52
132349555
{ "authors": [ "ShubertYuval", "biswars", "flamedmg", "lovubuntu", "minademian", "shaharke", "sw360cab", "yuval-shubert" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10629", "repo": "shaharke/node-ansible", "url": "https://github.com/shaharke/node-ansible/issues/28" }
gharchive/issue
error when trying to run playbook with private key
The defect is that double quotes are added to the private-key value. One quote is added by the function AbstractAnsibleCommand.addPathParam:
return this.addParamValue(commandParams, '"' + this.config[param] + '"', flag);
and another quote is added by the spawn function when executing the command. So the fix might be to eliminate the addition of quotes in the addPathParam function.
When trying to execute a playbook with a private key (with verbose 'vvvv') I get the following log:
EXEC ssh -C -vvv -o ControlMaster=auto -o ControlPersist=60s -o 'IdentityFile=""privateKey.pem""' .....
Thanks, Shubert Yuval.
What OS are you using?
Ubuntu.
Ok. On Mac OS it doesn't reproduce. I'll try to run it on Ubuntu.
Thanks, however node version is v5.5.0.
Huh... interesting. Are you using nvm?
Nope.
Same problem encountered and fixed as per the comment by @yuval-shubert. OS Details - Distributor ID: CentOS Description: CentOS release 6.7 (Final) Release: 6.7 Codename: Final
Same here, arch linux.
I'm using it on Mac and have the same issue. Also saw the PR raised for it but it was not merged. Is it on hold for some other reason?
@lovubuntu I am using node-ansible in production with the patch (I think) you are referring to. I suspect that no one is maintaining the project, because that patch works perfectly for me, but I am not skilled enough on this specific project, so there may be other reasons.
@yuval-shubert @sw360cab @biswars @flamedmg @lovubuntu I am a new maintainer on this project. I do realize it's been a while since there has been activity on this thread, but is this bug still happening for you?
@sw360cab if the fix proposed by @yuval-shubert can be considered the solution or can become a PR, this issue should be considered definitely solved.
Hi there, thanks for getting back to me. I will test the branch with the patch and update you in the pull-request.
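For clarity, a minimal sketch of the proposed fix, based only on the call quoted above (treating addPathParam as a prototype method, and addParamValue's signature as shown, are assumptions):

```ts
// Sketch of the suggested fix: stop wrapping the path value in literal double
// quotes, since spawn() passes arguments verbatim and handles quoting itself.
AbstractAnsibleCommand.prototype.addPathParam = function (commandParams, param, flag) {
  // Before (broken): this.addParamValue(commandParams, '"' + this.config[param] + '"', flag);
  return this.addParamValue(commandParams, this.config[param], flag);
};
```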
2025-04-01T06:40:23.037700
2021-12-21T15:47:43
1085970048
{ "authors": [ "avelad", "cdongieux" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10630", "repo": "shaka-project/shaka-player", "url": "https://github.com/shaka-project/shaka-player/issues/3819" }
gharchive/issue
[DASH] PERIOD_FLATTENING_FAILED on specific Manifest
Have you read the FAQ and checked for duplicate open issues? Yes
What version of Shaka Player are you using? 3.2.1
Can you reproduce the issue with our latest release version? Yes
Can you reproduce the issue with the latest code from master? Yes
Are you using the demo app or your own custom app? Own custom app
If custom app, can you reproduce the issue using our demo app? No, because the stream is DRM-protected and the license request/response has to be wrapped.
What browser and OS are you using? Chrome Beta 97.0.4692.20 Chromecast (AndroidTV version, CAST SDK CAF Version: 3.0.0102, CAST SDK MPL Version: <IP_ADDRESS>, Shaka Player 3.0.13)
For embedded devices (smart TVs, etc.), what model and firmware version are you using? See above
What are the manifest and license server URIs? It is sensitive data, I will send you the Manifest URL by email.
What configuration are you using? What is the output of player.getConfiguration()? It is sensitive data; I will send it by email as well.
What did you do? I just launched my custom app with the given DASH Manifest.
What did you expect to happen? I expect the player to play the stream fine.
What actually happened? Error code 4037 is thrown, see screenshot. After some research in the Shaka Player code, it seems the lib/util/period.js combine_() function does not play well with the audio tracks in the Manifest:
<AdaptationSet id="1014" group="2" segmentAlignment="true" startWithSAP="1" contentType="audio" lang="fra"> <Role schemeIdUri="urn:mpeg:dash:role:2011" value="description"/> <Representation id="1014" bandwidth="64000" codecs="mp4a.40.2" mimeType="audio/mp4"> <SegmentTemplate timescale="90000" initialization="https://media.stream.xxxxx.net/media/0_1_1014_init" media="https://media.stream.xxxxx.net/media/0_1_1014_$Time$"> <SegmentTimeline> <S t="147606703778160" d="288000" r="4502"/> </SegmentTimeline> </SegmentTemplate> </Representation> </AdaptationSet>
<AdaptationSet id="1015" group="2" segmentAlignment="true" startWithSAP="1" contentType="audio" lang="und"> <Role schemeIdUri="urn:mpeg:dash:role:2011" value="main"/> <Representation id="1015" bandwidth="64000" codecs="mp4a.40.2" mimeType="audio/mp4"> <SegmentTemplate timescale="90000" initialization="https://media.stream.xxxxx.net/media/0_1_1015_init" media="https://media.stream.xxxxx.net/media/0_1_1015_$Time$"> <SegmentTimeline> <S t="147606703778160" d="288000" r="4502"/> </SegmentTimeline> </SegmentTemplate> </Representation> </AdaptationSet>
<AdaptationSet id="1016" group="2" segmentAlignment="true" startWithSAP="1" contentType="audio" lang="fra"> <Role schemeIdUri="urn:mpeg:dash:role:2011" value="main"/> <Representation id="1016" bandwidth="64000" codecs="mp4a.40.2" mimeType="audio/mp4"> <SegmentTemplate timescale="90000" initialization="https://media.stream.xxxxx.net/media/0_1_1016_init" media="https://media.stream.xxxxx.net/media/0_1_1016_$Time$"> <SegmentTimeline> <S t="147606703778160" d="288000" r="4502"/> </SegmentTimeline> </SegmentTemplate> </Representation> </AdaptationSet>
But the same player plays fine with these audio tracks from another stream:
<AdaptationSet id="365" group="2" segmentAlignment="true" startWithSAP="1" contentType="audio" lang="fra"> <Role schemeIdUri="urn:mpeg:dash:role:2011" value="main"/> <Representation id="365" bandwidth="64000" codecs="mp4a.40.2" mimeType="audio/mp4"> <SegmentTemplate timescale="90000" initialization="https://media.stream.xxxx.net/media/0_1_365_init"
media="https://media.stream.xxxx.net/media/0_1_365_$Time$"> <SegmentTimeline> <S t="147606699171360" d="288000" r="4502"/> </SegmentTimeline> </SegmentTemplate> </Representation> </AdaptationSet> <AdaptationSet id="366" group="2" segmentAlignment="true" startWithSAP="1" contentType="audio" lang="und"> <Role schemeIdUri="urn:mpeg:dash:role:2011" value="main"/> <Representation id="366" bandwidth="64000" codecs="mp4a.40.2" mimeType="audio/mp4"> <SegmentTemplate timescale="90000" initialization="https://media.stream.xxxx.net/media/0_1_366_init" media="https://media.stream.xxxx.net/media/0_1_366_$Time$"> <SegmentTimeline> <S t="147606699169678" d="288000" r="4502"/> </SegmentTimeline> </SegmentTemplate> </Representation> </AdaptationSet> <AdaptationSet id="465" group="2" segmentAlignment="true" startWithSAP="1" contentType="audio" lang="fra"> <Role schemeIdUri="urn:mpeg:dash:role:2011" value="description"/> <Representation id="465" bandwidth="64000" codecs="mp4a.40.2" mimeType="audio/mp4"> <SegmentTemplate timescale="90000" initialization="https://media.stream.xxxx.net/media/0_1_465_init" media="https://media.stream.xxxx.net/media/0_1_465_$Time$"> <SegmentTimeline> <S t="147606699171360" d="288000" r="4502"/> </SegmentTimeline> </SegmentTemplate> </Representation> </AdaptationSet> Can you test with v4.3.4? Thanks!
2025-04-01T06:40:23.057138
2023-11-16T10:46:56
1996576466
{ "authors": [ "pszemus" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10631", "repo": "shaka-project/shaka-player", "url": "https://github.com/shaka-project/shaka-player/issues/5898" }
gharchive/issue
SegmentTemplate@media not updated after change in DASH manifest
Have you read the FAQ and checked for duplicate open issues? Yes
If the problem is related to FairPlay, have you read the tutorial? Not related
What version of Shaka Player are you using? v4.5.0 (uncompiled)
Can you reproduce the issue with our latest release version? Yes
Can you reproduce the issue with the latest code from main? Did not try
Are you using the demo app or your own custom app? Demo app
What browser and OS are you using? Google Chrome 119 @ Fedora Linux 38
What are the manifest and license server URIs? Sent via e-mail to <EMAIL_ADDRESS>
What configuration are you using? What is the output of player.getConfiguration()? Default demo app configuration
What did you do? Played the DASH stream
What did you expect to happen? I expect the player to amend the segment URIs after SegmentTemplate@media has been changed in the DASH manifest during playback.
What actually happened? The player still requests segments with the URI computed during the initial DASH manifest load. Any changes to SegmentTemplate@media during playback are dropped. Shaka Player v3.2.1, which is used in my production environment, updated the URI of the segment after the SegmentTemplate@media was changed during playback.
I just checked and the problem still exists in version v4.6.0.
2025-04-01T06:40:23.062196
2024-07-26T08:21:05
2431694953
{ "authors": [ "fakeDev9" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10632", "repo": "shaka-project/shaka-player", "url": "https://github.com/shaka-project/shaka-player/issues/7110" }
gharchive/issue
Bad subtitle rendering of HTML character entities: how to fix?
Have you read the Tutorials? Yes.
Have you read the FAQ and checked for duplicate open issues? Yes.
If the question is related to FairPlay, have you read the tutorial? Not related.
What version of Shaka Player are you using? 4.10.2
What browser and OS are you using? Latest Chrome. Not reproducible on Safari due to the rendering of Closed Captions being handled by the device directly.
Please ask your question
I am using subtitles in XML form embedded in MP4 and they are not rendering correctly:
document.addEventListener("DOMContentLoaded", function () {
  shaka.polyfill.installAll();
  const videoElement = document.getElementById("video");
  const videoContainer = document.getElementById("video-container");
  const player = new shaka.Player(videoElement);
  window.player = player;
  const thumbnailContainer = document.getElementById("thumbnail-container");
  const thumbnail = document.getElementById("thumbnail");
  // UI setup
  const ui = new shaka.ui.Overlay(player, videoContainer, videoElement);
  const controls = ui.getControls();
  const defaultConfig = {
    controlPanelElements: [
      "backward",
      "play_pause",
      "forward",
      "time_and_duration",
      "spacer",
      "mute",
      "volume",
      // "language",
      // "text_settings",
      "overflow_menu", // 3 dots inside quality speed and pip
      // "playback_rate",
      // "cast",
      // "lock",
      "picture_in_picture",
      // "quality",
      "fullscreen",
    ],
    overflowMenuButtons: ["quality", "captions", "language", "playback_rate"],
    seekBarColors: {
      base: "rgba(99, 99, 99, 1)",
      buffered: "rgba(255, 147, 58, 1)",
      played: "rgba(255, 147, 58, 1)",
    },
    enableTooltips: true,
    // useNativeHlsOnSafari: true,
    // textTrackVisibility: true,
    // playbackRates: [0.5, 0.75, 1, 1.25, 1.5, 1.75, 2],
    // fastForwardRates: [2, 4, 8, 1],
    // rewindRates: [-1, -2, -4, -8],
    // customContextMenu: true,
    // contextMenuElements: ["statistics"],
    // statisticsList: ["width", "height", "playTime", "bufferingTime"],
  };
  ui.configure(defaultConfig);
  ui.getControls();
  async function loadVideo() {
    try {
      const url = "myVideo.mpd";
      await player.load(url);
      await player.setTextTrackVisibility(true);
    } catch (error) {
      console.error("Error loading manifest:", error);
    }
  }
  loadVideo();
});
How can this be fixed?
2025-04-01T06:40:23.065097
2016-05-05T18:18:46
153291235
{ "authors": [ "justin808", "murtali", "thewoolleyman" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10633", "repo": "shakacode/react_on_rails", "url": "https://github.com/shakacode/react_on_rails/issues/414" }
gharchive/issue
Upgrading from 3.0 to 5.0
Wanted to know the best steps to upgrade from 3.0 to 5.0. I didn't see any guides for 3, 4, or 5 in the changelog.md (like you had for 1->2 or 2->3). I've already done the gem update but wasn't clear on the changes needed to get the latest React / React Router or any other changes in /client. Thanks!
You can find out a lot by looking at the history and latest state of the https://github.com/shakacode/react-webpack-rails-tutorial example app. E.g. if you look at the git history of when the gem/npm module was bumped, you can see what changed at the same time.
@murtali that means you didn't have to do anything. If you want to use the new features, that's up to you. There might be some deprecations.
2025-04-01T06:40:23.078527
2017-03-30T08:17:10
218115971
{ "authors": [ "justin808", "udovenko" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10634", "repo": "shakacode/react_on_rails", "url": "https://github.com/shakacode/react_on_rails/issues/780" }
gharchive/issue
Clear shared redux stores when server rendering
Per https://github.com/shakacode/react_on_rails/issues/774#issuecomment-290328842, we should clear the redux stores before populating them. The potential bug is that there are some stores with data from unrelated requests sitting around. This can chew up memory and lead to other unexpected consequences.
Steps to fix:
Add a function in the JS library to clear all shared redux stores.
Call that function from https://github.com/shakacode/react_on_rails/blob/master/app/helpers/react_on_rails_helper.rb#L346, prepending this to the result.
See #774 for more details.
@udovenko indicated interest, so I'll flag this one as taken.
@justin808 Not sure I see the way to reset the store. Each project has its own set of reducers and its own logic for bootstrapping the data to the store. So resetting the store state should be individual for each JS server bundle.
Just clear the global map. https://github.com/shakacode/react_on_rails/blob/master/node_package/src/StoreRegistry.js#L4 hydratedStores.clear() or something like that...
You mean clear store registration completely? Feels like we will lose some benefits from having initialized stores on the next request...
For the reasons you raised, we don't want any hydrated stores on the next call.
Ok, I'll see what I can do. Can't promise anything for sure, unfortunately...
@justin808 I've forked the gem and tried to reproduce the issue, but it seems that ReactOnRails creates and re-registers the Redux store before every component is rendered on the server. It always calls storeGenerator(reduxProps, railsContext) on https://github.com/shakacode/react_on_rails/blob/master/app/helpers/react_on_rails_helper.rb#L345. The store generator returns a new store created with combineReducers and createStore, so it seems store state cannot be shared across component renders on the server. To test it a bit more I added console.log('new store!') to https://github.com/shakacode/react_on_rails/blob/master/spec/dummy/client/app/stores/SharedReduxStore.jsx#L12. Here is a console replay from the server for http://<IP_ADDRESS>:5000/server_side_hello_world_shared_store:
[SERVER] new store!
[SERVER] RENDERED ReduxSharedStoreApp to dom node with id: ReduxSharedStoreApp-react-component-0 with railsContext: {"inMailer":false,"i18nLocale":"en","i18nDefaultLocale":"en","href":"http://<IP_ADDRESS>:5000/server_side_hello_world_shared_store","location":"/server_side_hello_world_shared_store","scheme":"http","host":"<IP_ADDRESS>","port":5000,"pathname":"/server_side_hello_world_shared_store","search":null,"httpAcceptLanguage":"ru-RU,ru;q=0.8,en-US;q=0.6,en;q=0.4","somethingUseful":"REALLY USEFUL","serverSide":true}
[SERVER] This is a script:"</div>"(/script> <script>alert('WTF1')(/script>
[SERVER] Script2:"</div>"(/script xx> <script>alert('WTF2')(/script xx>
[SERVER] Script3:"</div>"(/script xx> <script>alert('WTF3')(/script xx>
[SERVER] Script4"</div>"(/script <script>alert('WTF4')(/script>
[SERVER] Script5:"</div>"(/script> <script>alert('WTF5')(/script>
[SERVER] railsContext.serverSide is true https://github.com/shakacode/react_on_rails/issues/774
[SERVER] new store!
[SERVER] RENDERED ReduxSharedStoreApp to dom node with id: ReduxSharedStoreApp-react-component-1 with railsContext: {"inMailer":false,"i18nLocale":"en","i18nDefaultLocale":"en","href":"http://<IP_ADDRESS>:5000/server_side_hello_world_shared_store","location":"/server_side_hello_world_shared_store","scheme":"http","host":"<IP_ADDRESS>","port":5000,"pathname":"/server_side_hello_world_shared_store","search":null,"httpAcceptLanguage":"ru-RU,ru;q=0.8,en-US;q=0.6,en;q=0.4","somethingUseful":"REALLY USEFUL","serverSide":true}
[SERVER] This is a script:"</div>"(/script> <script>alert('WTF1')(/script>
[SERVER] Script2:"</div>"(/script xx> <script>alert('WTF2')(/script xx>
[SERVER] Script3:"</div>"(/script xx> <script>alert('WTF3')(/script xx>
[SERVER] Script4"</div>"(/script <script>alert('WTF4')(/script>
[SERVER] Script5:"</div>"(/script> <script>alert('WTF5')(/script>
[SERVER] railsContext.serverSide is true
Two components were rendered in the same js_context, but the store was recreated for each render. So it seems I can't reproduce the issue. I think the case I've mentioned in #774 - "As I can remember I've even experienced an issue when store state from previous request was affecting next request. At least I thought that was the reason for my bug that time..." - was when I was creating the store manually from the component generator function. If you insist that the issue still exists, can you please show how to reproduce it?
@udovenko For any stores being registered, they are 100% created new. I'm concerned that you could have a store that sticks around from the previous request. And the component rendering code might be accidentally using that store by a different name. So to reproduce:
Register 2 stores, A & B.
Run one request that uses store A.
Run a second request that uses store B. In the code that uses store B, print the contents of all stores.
You could put in a debug line somewhere that prints out (at least the keys) of hydratedStores from this file. If Store A's values are still there, then we have an issue. Keep in mind that if one is careful to hydrate all the used stores for a given request, then no bug would ever be seen. However, there's certainly some extra memory that could be GC'd.
Even if the store sticks around from a previous request, https://github.com/shakacode/react_on_rails/blob/master/app/helpers/react_on_rails_helper.rb#L346 will replace it with a new instance anyway. Am I missing something?
@justin808 Ok, I see what you mean. The purpose of clearing stores left from a previous request is generally GC only, because they cannot affect component rendering. Or they can affect component rendering due to developer mistake...
@justin808 PR #785
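As a footnote, the global-map reset discussed above ("just clear the global map ... hydratedStores.clear()") could look roughly like this; a minimal sketch assuming hydratedStores is the module-level Map in StoreRegistry.js, not the exact code that landed in PR #785:

```ts
// Minimal sketch, not the code from PR #785.
// Stand-in for the module-level map in StoreRegistry.js#L4:
const hydratedStores = new Map<string, unknown>();

// Called on the server before each render request hydrates its stores, so
// that no store from a previous, unrelated request sticks around.
export function clearHydratedStores(): void {
  hydratedStores.clear();
}
```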
2025-04-01T06:40:23.080727
2017-09-10T07:45:42
256492192
{ "authors": [ "chhuang", "justin808" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10635", "repo": "shakacode/react_on_rails", "url": "https://github.com/shakacode/react_on_rails/issues/931" }
gharchive/issue
CSS Modules SSR doesn't output anything I can see the class being applied, but the CSS is not there. I'm using the server config only: https://github.com/shakacode/react-webpack-rails-tutorial/blob/master/client/webpack.server.rails.build.config.js#L55-L65 If you want personalized help on this, please consider our Coaching Plan. Your question is specific to your project rather than this project, so it's best handled privately.
2025-04-01T06:40:23.086374
2017-03-10T23:16:01
213477472
{ "authors": [ "itajaja", "timwis" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10637", "repo": "shama/bel", "url": "https://github.com/shama/bel/issues/66" }
gharchive/issue
spread props
Say I have a bunch of props to spread like in JSX, how can I achieve that? The following doesn't work:
const props = `x=${x} y=${y}`
return bel`<div ${props} />`
I believe you can just do:
const props = {x: 1, y: 2}
return bel`<div ${props} />`
Thank you!
2025-04-01T06:40:23.087467
2019-12-18T15:02:39
539736619
{ "authors": [ "erickvils", "rodrigost23" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10638", "repo": "shanalikhan/code-settings-sync", "url": "https://github.com/shanalikhan/code-settings-sync/pull/1105" }
gharchive/pull-request
Update pt-br translation Short description of what this resolves: The current translation to Portuguese (Brazil) is all machine-translated nonsense. I have translated all the strings from the original English file. Nice job, Rodrigo.
2025-04-01T06:40:23.105127
2018-08-23T04:49:04
353223164
{ "authors": [ "baisijian", "terrymanu" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10639", "repo": "sharding-sphere/sharding-sphere", "url": "https://github.com/sharding-sphere/sharding-sphere/issues/1173" }
gharchive/issue
Why is execution time so much longer than with the original datasource?
Using the same simple SQL, the original DataSource executes in about 8ms per request, but after replacing the datasource with Sharding-JDBC, execution time increases to about 27ms per request. I logged the execution times and found that SS only selects from one of the ten tables, SS routing time is about 2ms, and SS execution time is about 2ms, so what causes the other 10ms?
Setup: id hash, 10 sub-tables. The test SQL is SELECT id, name FROM i_test where id = "1". sharding-jdbc-core 2.0.3. Thank you.
Could you explain why you closed the issue you created yourself so quickly, without giving any reason?
The question was fixed by myself; it is not an issue of SS. Thank you for the reply.
2025-04-01T06:40:23.128377
2023-12-30T14:30:04
2060762421
{ "authors": [ "Tamir198", "idodav" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10640", "repo": "sharkio-dev/sharkio", "url": "https://github.com/sharkio-dev/sharkio/issues/358" }
gharchive/issue
Allow copying code snippets with the cursor
Right now on the website, under Getting Started, if I want to copy a text I can do it via the copy button:
It would be nice if we could copy the text by marking it with the mouse as well, for example:
Thanks Tamir. We need to rewrite the documentation section anyway :) But it's a very good suggestion.
2025-04-01T06:40:23.235732
2016-12-14T11:55:54
195512131
{ "authors": [ "eifelgamedev", "shibdib" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10641", "repo": "shibdib/Dramiel", "url": "https://github.com/shibdib/Dramiel/issues/108" }
gharchive/issue
Critical Error in authcheck.php
In authcheck.php the bot tries to get 'corpTicker' from the JSON, but it should be 'ticker'; this caused all my tickers to be removed. Fixed it by replacing
$corpTicker = (string)$corporationDetails['corpTicker'];
on line 335 with:
$corpTicker = (string)$corporationDetails['ticker'];
Will push something out now, thanks!
2025-04-01T06:40:23.274292
2022-12-17T14:10:46
1501404944
{ "authors": [ "AI-Mozi", "yhara" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10642", "repo": "shiika-lang/shiika", "url": "https://github.com/shiika-lang/shiika/pull/442" }
gharchive/pull-request
CI: Add macOS to runs-on, add rustfmt check
#441
ld: library not found for -lm
This means clang failed to find libm. Maybe we don't need the -lm option in runner.rs on Macs?
Merged. Nice try :-)
2025-04-01T06:40:23.297636
2018-10-01T19:58:54
365618695
{ "authors": [ "XamtasticPenn", "ravazquez", "thohemp" ], "license": "apache-2.0", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10643", "repo": "shimat/opencvsharp", "url": "https://github.com/shimat/opencvsharp/issues/538" }
gharchive/issue
Getting all connected camera devices
Is there any way to get a list of all connected camera devices without using AForge? Since OpenCV is cross-platform, this kind of function is not supported, but maybe someone here has an idea. I can't use AForge due to licence issues.
I have been using DirectShowLib; however, it doesn't really show which cameras are connected, but rather which camera drivers are installed. In my case it doesn't really matter if the user picks the wrong camera, so I didn't try to see if there is a way to check if the camera is connected. Going to check if there is a way to do it though.
There is a way by using the Microsoft Expression Encoder (https://www.microsoft.com/en-us/download/details.aspx?id=27870):
var vidDevices = EncoderDevices.FindDevices(EncoderDeviceType.Video);
It's not what I am looking for, but it may help with your issue.
I created a simple device enumerator. Next step: get friendly device names. https://github.com/thohemp/OpenCVSharpCameraDeviceEnumerator
The following code gives you a list of connected devices and their supported configurations using MediaFoundation: https://github.com/vvvv/VL.OpenCV/blob/master/src/VideoInInfo.cs
Hope that helps
So I got around to updating from v.3 => v.4... now I just lost ALL my Cameras 😢
2025-04-01T06:40:23.353396
2017-05-15T00:11:23
228580536
{ "authors": [ "alandyer", "coveralls" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10644", "repo": "shinyscorpion/wobserver", "url": "https://github.com/shinyscorpion/wobserver/pull/39" }
gharchive/pull-request
make '/remote/system returns 200 test' work Hi, just jumped in and don't know my way around yet but this seems to make the failing test work. Coverage remained the same at 100.0% when pulling 1af1482ea544c44fb767ee4cb6afb674b43ce47e on alandyer:test-failing-0.1.7 into 35a080cf9264de5ccfcf5459b1592d2a92e6005b on shinyscorpion:master. Coverage remained the same at 100.0% when pulling 39b1632d90ab42f77b4c97df2752c63b7554f29a on alandyer:test-failing-0.1.7 into 35a080cf9264de5ccfcf5459b1592d2a92e6005b on shinyscorpion:master.
2025-04-01T06:40:23.355673
2024-09-06T21:00:10
2511187289
{ "authors": [ "tehtnaz" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10645", "repo": "shipgirlproject/Shoukaku", "url": "https://github.com/shipgirlproject/Shoukaku/issues/199" }
gharchive/issue
enhancement: Move player to another channel
I'm not quite sure if this is already possible, but is there a method for moving the bot / connected player to another channel? As of now, it seems leaveVoiceChannel() and joinVoiceChannel() are the only ways exposed in the library (I'm not familiar with the Lavalink protocol, so manually doing this with the internal methods is out of reach for me).
It would be nice to have an easy-to-use function like moveVoiceChannel(), or any other sort of method to change channels without interrupting the player (e.g. changing "connection.channelId" and calling "updateConnection", or simply connection.update({channelId: <id>})).
UPDATE: After a long while of toying about, I found that using connection.setStateUpdate() to update the channelId, setting the connection to State.RECONNECTING and calling connection.connect() worked! I still wish this was one function though, so I am updating the issue.
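Wrapping the workaround from the UPDATE as the wished-for helper might look something like this. This is NOT an official Shoukaku API: the setStateUpdate() payload shape is an assumption, and the sketch simply mirrors the author's description:

```ts
// Rough sketch of a moveVoiceChannel() helper built from the workaround
// described above. The setStateUpdate() payload shape is assumed, and the
// State.RECONNECTING value is passed in to avoid guessing the import path.
async function moveVoiceChannel(
  connection: any,       // the Shoukaku connection for the guild
  channelId: string,     // target voice channel id
  reconnecting: unknown, // pass State.RECONNECTING from your Shoukaku version
): Promise<void> {
  connection.setStateUpdate({ channelId }); // repoint the connection (assumed shape)
  connection.state = reconnecting;          // mark the connection as reconnecting
  await connection.connect();               // re-establish without stopping playback
}
```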
2025-04-01T06:40:23.359530
2021-09-17T20:41:07
999705650
{ "authors": [ "adambkaplan", "gabemontero" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10646", "repo": "shipwright-io/operator", "url": "https://github.com/shipwright-io/operator/issues/21" }
gharchive/issue
Use tekton automation to generate operatorhub release artifacts Follow Tekton's example to use automation to generate operatorhub release artifacts @adambkaplan @sbose78 https://github.com/tektoncd/operator/pull/307 Fixed by #27
2025-04-01T06:40:23.361095
2016-11-16T23:24:10
189882013
{ "authors": [ "ehazlett", "tombee" ], "license": "apache-2.0", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10647", "repo": "shipyard/shipyard", "url": "https://github.com/shipyard/shipyard/pull/906" }
gharchive/pull-request
[1.12] filtering, warning and formatting fixes
Updated reactable to latest version to prevent console warnings
Fixed filtering on service inspect page
Fixed node column on service inspect page
Removed unnecessary console output for auth saga
Depends on #904
LGTM
2025-04-01T06:40:23.417069
2022-09-11T08:50:47
1368877968
{ "authors": [ "Axiomatic-Mind", "coveralls" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10648", "repo": "shnewto/bnf", "url": "https://github.com/shnewto/bnf/pull/106" }
gharchive/pull-request
Expose ParseTreeNode
Exposes ParseTreeNode to allow full matching of ParseTrees. Before, the outputs of parsing couldn't be fully accessed because of the inability to construct variants of ParseTreeNode for matching. Now, variants can be constructed and used for matching on the ParseTreeNode enum, fully exposing ParseTree.
Coverage remained the same at 92.829% when pulling 8b7d754cfd695fae9a40a2171933cbaa172f00db on Axiomatic-Mind:main into 895aaa792c3d2dcb1f60d2db262eef3aab3970d1 on shnewto:main.
2025-04-01T06:40:23.451440
2022-12-21T14:53:04
1506425894
{ "authors": [ "mkucmus", "niklaswolf" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10649", "repo": "shopware/frontends", "url": "https://github.com/shopware/frontends/issues/32" }
gharchive/issue
[Bug] false resolving of home route
The resolving of the home route as it is done right now is not really safe.
https://github.com/shopware/frontends/blob/acd70e94cd252f05fb8c0f2032bac66b090d1934/packages/composables/src/useNavigationSearch.ts#L16
I see that there is a TODO comment, but it doesn't really explain what the problem is. For example, we have multiple root categories: one for the "normal" category tree, one for the footer navigation, and so on. In our case, we do not get the correct root category (as configured in the sales channel) as the home category, but the footer category instead.
Hey @niklaswolf ! thanks for studying the code 💪🏻
You are right, the problem occurs when there is more than one root category in the repository. It's been changed to something more bullet-proof: currently, for resolving the / path (which normally means the "homepage"), we use sessionContext.value?.salesChannel?.navigationCategoryId from the context, which is the wrapper category for the main navigation: https://github.com/shopware/frontends/commit/bf5549885516541e237c60ce1a2fda00080e1aff#diff-4cf41b529fdbc9ede59652256198b99eb400c04017588aaca4c912ce2bf3b7fbR23
Please leave your feedback on whether it works for you 🙏🏻
@mkucmus yes, that's exactly what was needed. With this change, the configuration from the sales channel is respected. Thanks! :) I'll therefore close this issue.
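For illustration, the safer "/" resolution described in the fix boils down to something like this (a minimal sketch, assuming a sessionContext ref shaped like the one in the composables package; seo-url resolution for other paths is omitted):

```ts
// Sketch: resolve the home route from the sales channel configuration instead
// of guessing among multiple root categories in the repository.
function resolveHomeCategoryId(sessionContext: any, path: string): string | undefined {
  if (path === '/') {
    // The sales channel knows its configured root category for main navigation.
    return sessionContext.value?.salesChannel?.navigationCategoryId;
  }
  return undefined; // non-home paths go through the regular seo-url lookup
}
```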
2025-04-01T06:40:23.458824
2020-10-26T09:20:24
729395238
{ "authors": [ "lars-feyerabend", "lernhart", "mstegmeyer" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10650", "repo": "shopware/platform", "url": "https://github.com/shopware/platform/issues/1452" }
gharchive/issue
Redirect loop when changing e-mail during guest checkout Description Steps to reproduce: Put random item into cart Start checkout Choose guest checkout on /checkout/register page On /checkout/confirm notice that your email address is wrong Click browser back button Change email in form Submit Expected behaviour: A new guest account/session is created with the newly entered email and the customer is again on /checkout/confirm Actual behaviour: A redirect loop occurs Possible Solution My proposed solution to break this cycle would be to change the first check in \Shopware\Storefront\Controller\RegisterController::register as follows: if ($context->getCustomer() && !$context->getCustomer()->getGuest()) { return $this->redirectToRoute('frontend.account.home.page'); } A more detailed look into the handling of the various steps with regards to guest account handling couldn't hurt though. Additional context The redirect is triggered in \Shopware\Storefront\Controller\RegisterController::register, where if ($context->getCustomer()) leads to a redirect to frontend.account.home.page regardless of whether it's a guest account or not. \Shopware\Storefront\Controller\AccountProfileController::index(the target action) however starts with $this->denyAccessUnlessLoggedIn(); (default parameter $allowGuest = false), which redirects to the Login page, but \Shopware\Storefront\Controller\AuthController::loginPage again only checks for if ($context->getCustomer()) and redirects back to the account page - the endless loop begins. (The redirectTo parameter of the login page is set to the account page when the \Shopware\Core\Checkout\Cart\Exception\CustomerNotLoggedInException is handled in \Shopware\Storefront\Framework\Routing\StorefrontSubscriber::customerNotLoggedInHandler) Added in Shopware issue tracker: https://issues.shopware.com/issues/NEXT-11790 Hey @lars-feyerabend, thank you for reporting this, and sorry for the late response. I'm closing this issue, as this landed in our triage and we can not reproduce this on current shopware release. This issue has since been fixed by Shopware. https://issues.shopware.com/issues/NEXT-17938
2025-04-01T06:40:23.467220
2021-03-11T09:44:43
828978304
{ "authors": [ "J-Rahe", "netzkollektiv", "shopwareBot" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10651", "repo": "shopware/platform", "url": "https://github.com/shopware/platform/pull/1699" }
gharchive/pull-request
enable custom component config options to be mixed with native fields (custom sort order)
1. Why is this change necessary?
Currently, if multiple config options are defined as mixed "input-field" and "component" elements, the "component"s always have to be at the end of each card due to the constraints of <xsd:sequence>.
2. What does this change do, exactly?
The change enables plugin developers to define components and input-fields in arbitrary order.
3. Describe each step to reproduce the issue or behaviour.
Define the following config options:
<card> <title lang="de-DE">Card Title</title> <input-field type="bool"> <name>firstOption</name> </input-field> <component name="my-component"> <name>secondOption</name> </component> <input-field type="bool"> <name>thirdOption</name> </input-field> </card>
You will get the following error:
Encountered an error while loading the configuration: Unable to parse file ".../config.xml". Message: [ERROR 1871] Element 'input-field': This element is not expected. Expected is ( component ). (in /var/www/shopware/public/ - line 93, column 0)
4. Please link to the relevant issues (if any).
5. Checklist
[ ] I have written tests and verified that they fail without my change
[ ] I have squashed any insignificant commits
[ ] I have created a changelog file with all necessary information about my changes
[ ] I have written or adjusted the documentation according to my changes
[ ] This change has comments for package types, values, functions, and non-obvious lines of code
[x] I have read the contribution requirements and fulfil them.
Hey @netzkollektiv, could you also please add a changelog?
@J-Rahe feel free to edit the pull request :-)
@netzkollektiv I may actually do that if you give me the required data. If you take a look at our contribution guidelines, we NEED your name (can be your GitHub name) and email (can be your GitHub email). Without these we can't merge your PR.
Please use netzkollektiv & corporate at netzkollektiv.com.
Hello, thank you for creating this pull request. I have opened an issue on our Issue Tracker for you. See the issue link: https://issues.shopware.com/issues/NEXT-14211 Please use this issue to track the state of your pull request.
Hey @netzkollektiv, thanks for the PR, we have just merged it : ) (And next time please provide the changelog from the beginning; you have an example and a template in the changelog folder, and there is the bin/console changelog:create command to help you a bit.)
2025-04-01T06:40:23.487908
2024-07-25T09:56:03
2429540652
{ "authors": [ "AydinHassan", "OliverSkroblin", "svenmuennich" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10652", "repo": "shopware/shopware", "url": "https://github.com/shopware/shopware/pull/3819" }
gharchive/pull-request
Only select used fields of translation tables 1. Why is this change necessary? Previously anytime a translated field was part of a query a full sub-select of the respective _translation table was joined to the main query. A "full sub-select" means a SELECT of all columns of the _translation table. This can result in a lot of overhead because depending on the storage engine of the database, e.g. mediumtext fields like product_translation.description require more page reads (i.e. disk access). In complex, high cardinality queries, especially when they involve e.g. the product table, which doubles the number of joined translation tables due to requiring entity inheritance, this overhead can negatively influence the query execution plans. In fact, the same query (filtering orders based on their line items and respective products) that took a reasonable time in Shopware 6.5 ballooned to several hundred seconds with Shopware 6.6 due to a correction in filter evaluation, which leads to more joins of sub-selects in general. 2. What does this change do, exactly? This PR reduces the selected fields of joined _translation tables to those that are actually needed for filtering or resolving TranslatedFields. It does not affect queries of _translation tables that are needed for loading translation associations. This PR renames fallback_ join aliases in the generated queries to use override_ aliases, because their values in fact override the default translations, if available. This is just cosmetics, but IMHO makes the code (and generated queries) easier to understand. This PR adds some more escaping to the queries that join _translation tables. This PR currently does not add any new tests. ~I would like to get feedback first on the general approach as well as what kind of tests (unit, integration) are expected for these changes.~ All tests added in https://github.com/shopware/shopware/pull/3839 should still pass. 3. Describe each step to reproduce the issue or behaviour. 
For simply seeing the difference in the generated SQL queries turn on your database's query log and send the following request to the admin API: POST /api/search/product { "filter": [ { "type": "contains", "field": "name", "value": "foo" } ] } Generated SQL (old) SELECT `product`.`id`, `product`.`product_number`, `product`.`auto_increment` FROM `product` LEFT JOIN `product` `product.parent` ON `product`.`parent_id` = `product.parent`.`id` AND `product`.`parent_version_id` = `product.parent`.`version_id` LEFT JOIN ( SELECT `product.translation`.product_id, `product.translation`.product_version_id, `product.translation`.meta_description AS `product.translation.metaDescription`, `product.translation`.name AS `product.translation.name`, `product.translation`.keywords AS `product.translation.keywords`, `product.translation`.description AS `product.translation.description`, `product.translation`.meta_title AS `product.translation.metaTitle`, `product.translation`.pack_unit AS `product.translation.packUnit`, `product.translation`.pack_unit_plural AS `product.translation.packUnitPlural`, `product.translation`.custom_search_keywords AS `product.translation.customSearchKeywords`, `product.translation`.slot_config AS `product.translation.slotConfig`, `product.translation`.custom_fields AS `product.translation.customFields`, `product.translation`.created_at AS `product.translation.createdAt`, `product.translation`.updated_at AS `product.translation.updatedAt`, `product.translation`.product_id AS `product.translation.productId`, `product.translation`.language_id AS `product.translation.languageId`, `product.translation`.product_version_id AS `product.translation.productVersionId`, `product.translation.fallback_1`.meta_description AS `product.translation.fallback_1.metaDescription`, `product.translation.fallback_1`.name AS `product.translation.fallback_1.name`, `product.translation.fallback_1`.keywords AS `product.translation.fallback_1.keywords`, `product.translation.fallback_1`.description AS `product.translation.fallback_1.description`, `product.translation.fallback_1`.meta_title AS `product.translation.fallback_1.metaTitle`, `product.translation.fallback_1`.pack_unit AS `product.translation.fallback_1.packUnit`, `product.translation.fallback_1`.pack_unit_plural AS `product.translation.fallback_1.packUnitPlural`, `product.translation.fallback_1`.custom_search_keywords AS `product.translation.fallback_1.customSearchKeywords`, `product.translation.fallback_1`.slot_config AS `product.translation.fallback_1.slotConfig`, `product.translation.fallback_1`.custom_fields AS `product.translation.fallback_1.customFields`, `product.translation.fallback_1`.created_at AS `product.translation.fallback_1.createdAt`, `product.translation.fallback_1`.updated_at AS `product.translation.fallback_1.updatedAt`, `product.translation.fallback_1`.product_id AS `product.translation.fallback_1.productId`, `product.translation.fallback_1`.language_id AS `product.translation.fallback_1.languageId`, `product.translation.fallback_1`.product_version_id AS `product.translation.fallback_1.productVersionId` FROM `product_translation` `product.translation` LEFT JOIN `product_translation` `product.translation.fallback_1` ON `product.translation`.`product_id` = `product.translation.fallback_1`.`product_id` AND `product.translation.fallback_1`.language_id = :languageId AND `product.translation`.product_version_id = `product.translation.fallback_1`.product_version_id WHERE `product.translation`.language_id = :languageId ) 
`product.product_translation` ON `product.product_translation`.`product_id` = `product`.`id` AND `product.product_translation`.product_version_id = `product`.version_id LEFT JOIN ( SELECT `product.parent.translation`.product_id, `product.parent.translation`.product_version_id, `product.parent.translation`.meta_description AS `product.parent.translation.metaDescription`, `product.parent.translation`.name AS `product.parent.translation.name`, `product.parent.translation`.keywords AS `product.parent.translation.keywords`, `product.parent.translation`.description AS `product.parent.translation.description`, `product.parent.translation`.meta_title AS `product.parent.translation.metaTitle`, `product.parent.translation`.pack_unit AS `product.parent.translation.packUnit`, `product.parent.translation`.pack_unit_plural AS `product.parent.translation.packUnitPlural`, `product.parent.translation`.custom_search_keywords AS `product.parent.translation.customSearchKeywords`, `product.parent.translation`.slot_config AS `product.parent.translation.slotConfig`, `product.parent.translation`.custom_fields AS `product.parent.translation.customFields`, `product.parent.translation`.created_at AS `product.parent.translation.createdAt`, `product.parent.translation`.updated_at AS `product.parent.translation.updatedAt`, `product.parent.translation`.product_id AS `product.parent.translation.productId`, `product.parent.translation`.language_id AS `product.parent.translation.languageId`, `product.parent.translation`.product_version_id AS `product.parent.translation.productVersionId`, `product.parent.translation.fallback_1`.meta_description AS `product.parent.translation.fallback_1.metaDescription`, `product.parent.translation.fallback_1`.name AS `product.parent.translation.fallback_1.name`, `product.parent.translation.fallback_1`.keywords AS `product.parent.translation.fallback_1.keywords`, `product.parent.translation.fallback_1`.description AS `product.parent.translation.fallback_1.description`, `product.parent.translation.fallback_1`.meta_title AS `product.parent.translation.fallback_1.metaTitle`, `product.parent.translation.fallback_1`.pack_unit AS `product.parent.translation.fallback_1.packUnit`, `product.parent.translation.fallback_1`.pack_unit_plural AS `product.parent.translation.fallback_1.packUnitPlural`, `product.parent.translation.fallback_1`.custom_search_keywords AS `product.parent.translation.fallback_1.customSearchKeywords`, `product.parent.translation.fallback_1`.slot_config AS `product.parent.translation.fallback_1.slotConfig`, `product.parent.translation.fallback_1`.custom_fields AS `product.parent.translation.fallback_1.customFields`, `product.parent.translation.fallback_1`.created_at AS `product.parent.translation.fallback_1.createdAt`, `product.parent.translation.fallback_1`.updated_at AS `product.parent.translation.fallback_1.updatedAt`, `product.parent.translation.fallback_1`.product_id AS `product.parent.translation.fallback_1.productId`, `product.parent.translation.fallback_1`.language_id AS `product.parent.translation.fallback_1.languageId`, `product.parent.translation.fallback_1`.product_version_id AS `product.parent.translation.fallback_1.productVersionId` FROM `product_translation` `product.parent.translation` LEFT JOIN `product_translation` `product.parent.translation.fallback_1` ON `product.parent.translation`.`product_id` = `product.parent.translation.fallback_1`.`product_id` AND `product.parent.translation.fallback_1`.language_id = :languageId AND 
`product.parent.translation`.product_version_id = `product.parent.translation.fallback_1`.product_version_id WHERE `product.parent.translation`.language_id = :languageId ) `product.product_translation.parent` ON `product.product_translation.parent`.`product_id` = `product.parent`.`id` AND `product.product_translation.parent`.product_version_id = `product.parent`.version_id WHERE (`product`.`version_id` = :versionId) AND (( COALESCE( `product.translation.fallback_1.name`, `product.parent.translation.fallback_1.name`, `product.translation.name`, `product.parent.translation.name` ) LIKE :query )) LIMIT 500 Generated SQL (new) SELECT `product`.`id`, `product`.`product_number`, `product`.`auto_increment` FROM `product` LEFT JOIN `product` `product.parent` ON `product`.`parent_id` = `product.parent`.`id` AND `product`.`parent_version_id` = `product.parent`.`version_id` LEFT JOIN ( SELECT `product.translation`.`product_id`, `product.translation`.`product_version_id`, `product.translation`.`name` AS `product.translation.name`, `product.translation.override_1`.`name` AS `product.translation.override_1.name` FROM `product_translation` `product.translation` LEFT JOIN `product_translation` `product.translation.override_1` ON `product.translation`.`product_id` = `product.translation.override_1`.`product_id` AND `product.translation.override_1`.`language_id` = :languageId AND `product.translation`.`product_version_id` = `product.translation.override_1`.`product_version_id` WHERE `product.translation`.`language_id` = :languageId ) `product.product_translation` ON `product.product_translation`.`product_id` = `product`.`id` AND `product.product_translation`.`product_version_id` = `product`.`version_id` LEFT JOIN ( SELECT `product.parent.translation`.`product_id`, `product.parent.translation`.`product_version_id`, `product.parent.translation`.`name` AS `product.parent.translation.name`, `product.parent.translation.override_1`.`name` AS `product.parent.translation.override_1.name` FROM `product_translation` `product.parent.translation` LEFT JOIN `product_translation` `product.parent.translation.override_1` ON `product.parent.translation`.`product_id` = `product.parent.translation.override_1`.`product_id` AND `product.parent.translation.override_1`.`language_id` = :languageId AND `product.parent.translation`.`product_version_id` = `product.parent.translation.override_1`.`product_version_id` WHERE `product.parent.translation`.`language_id` = :languageId ) `product.parent.product_translation` ON `product.parent.product_translation`.`product_id` = `product.parent`.`id` AND `product.parent.product_translation`.`product_version_id` = `product.parent`.`version_id` WHERE (`product`.`version_id` = :versionId) AND (( COALESCE( `product.translation.override_1.name`, `product.parent.translation.override_1.name`, `product.translation.name`, `product.parent.translation.name` ) LIKE :query )) LIMIT 500 That simple example will usually not show a difference in performance (nor query execution plan). The following request is taken from one of our applications and will drastically improve in execution time with the fix (we noticed ~60x speed up in production environments; 215s (old) vs. 
3,5s (new); 20k orders, 80k order line items, 1,5k products, MySQL v8.0.34): POST /api/search/order { "filter": [ { "type": "multi", "operator": "and", "queries": [ { "type": "equals", "field": "lineItems.type", "value": "product" }, { "type": "not", "operator": "and", "queries": [ { "type": "equals", "field": "lineItems.product.id", "value": null } ] }, { "type": "equalsAny", "field": "stateMachineState.technicalName", "value": [ "open", "in_progress" ] }, { "type": "equalsAny", "field": "transactions.stateMachineState.technicalName", "value": [ "paid", "refunded_partially" ] }, { "type": "multi", "operator": "or", "queries": [ { "type": "prefix", "field": "orderNumber", "value": "Foo" }, { "type": "contains", "field": "billingAddress.firstName", "value": "Foo" }, { "type": "contains", "field": "billingAddress.lastName", "value": "Foo" }, { "type": "contains", "field": "billingAddress.company", "value": "Foo" }, { "type": "contains", "field": "billingAddress.department", "value": "Foo" }, { "type": "contains", "field": "billingAddress.street", "value": "Foo" }, { "type": "contains", "field": "billingAddress.additionalAddressLine1", "value": "Foo" }, { "type": "contains", "field": "billingAddress.additionalAddressLine2", "value": "Foo" }, { "type": "contains", "field": "billingAddress.zipcode", "value": "Foo" }, { "type": "contains", "value": "Foo", "field": "billingAddress.city" }, { "type": "contains", "field": "billingAddress.country.name", "value": "Foo" }, { "type": "contains", "field": "customerComment", "value": "Foo" }, { "type": "contains", "field": "deliveries.shippingOrderAddress.firstName", "value": "Foo" }, { "type": "contains", "field": "deliveries.shippingOrderAddress.lastName", "value": "Foo" }, { "type": "contains", "field": "deliveries.shippingOrderAddress.company", "value": "Foo" }, { "type": "contains", "field": "deliveries.shippingOrderAddress.department", "value": "Foo" }, { "type": "contains", "field": "deliveries.shippingOrderAddress.street", "value": "Foo" }, { "type": "contains", "field": "deliveries.shippingOrderAddress.additionalAddressLine1", "value": "Foo" }, { "type": "contains", "field": "deliveries.shippingOrderAddress.additionalAddressLine2", "value": "Foo" }, { "type": "contains", "field": "deliveries.shippingOrderAddress.zipcode", "value": "Foo" }, { "type": "contains", "field": "deliveries.shippingOrderAddress.city", "value": "Foo" }, { "type": "contains", "field": "deliveries.shippingOrderAddress.country.name", "value": "Foo" }, { "field": "deliveries.shippingMethod.name", "type": "contains", "value": "Foo" }, { "type": "contains", "field": "lineItems.product.name", "value": "Foo" }, { "type": "contains", "field": "lineItems.product.productNumber", "value": "Foo" }, { "type": "contains", "field": "lineItems.product.ean", "value": "Foo" }, { "type": "contains", "field": "lineItems.product.manufacturer.name", "value": "Foo" }, { "type": "contains", "field": "lineItems.product.manufacturerNumber", "value": "Foo" } ] } ] } ] } 4. Please link to the relevant issues (if any). n/a 5. Checklist [x] I have rebased my changes to remove merge conflicts [ ] I have written tests and verified that they fail without my change [x] I have created a changelog file with all necessary information about my changes [ ] I have written or adjusted the documentation according to my changes [x] This change has comments for package types, values, functions, and non-obvious lines of code [x] I have read the contribution requirements and fulfil them. 
@svenmuennich Thanks for the PR, general approach looks fine. We are slightly worried that some breaks might occur so we would appreciate if you tackle the tests and then we can do another round of reviews, wdyt? @svenmuennich yep integration tests would be useful here. Hi, LGTM 👍 It would be good when you test the following scenarios: Filter with multiple fields in different levels of a Criteria Filter with multiple languages and multiple fields Full text search via term Full text search criteria.queries Mixed searches with filters and queries I added the integration tests in a separate PR (https://github.com/shopware/shopware/pull/3839) to ensure that they cover the current behavior. I also rebased this PR accordingly. @svenmuennich could you take a look at the phpstan failures please? Linting looks good now ... Sorry for the delay @svenmuennich - I was on holiday then battling with pipeline issues :D - Thanks for the improvements, it's merged now!
2025-04-01T06:40:23.493129
2024-10-18T12:16:58
2597373974
{ "authors": [ "CLAassistant", "OliverSkroblin" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10653", "repo": "shopware/shopware", "url": "https://github.com/shopware/shopware/pull/5169" }
gharchive/pull-request
DRAFT: Try fix partial dataloading The entity hydrator now only checks whether Criteria::$fields is filled. If that's the case, all entities are loaded as partial entities. However, entities are still being loaded and collections filled in the EntityReader. That's why there's also a one-time check here: $partial = !empty($criteria->getFields()). See: https://github.com/shopware/shopware/issues/5114#issuecomment-2422344507 Thank you for your submission! We really appreciate it. Like many open source projects, we ask that you sign our Contributor License Agreement before we can accept your contribution. Oliver Skroblin seems not to be a GitHub user. You need a GitHub account to be able to sign the CLA. If you already have a GitHub account, please add the email address used for this commit to your account. You have signed the CLA already but the status is still pending? Let us recheck it.
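A minimal sketch of the check described above, assuming Shopware's Criteria exposes the selected fields via getFields() (names taken from the PR description, not verified against the final API):

// Somewhere in the EntityReader / hydrator path: partial loading is
// active as soon as the caller selected specific fields.
$partial = !empty($criteria->getFields());

if ($partial) {
    // hydrate PartialEntity instances instead of full entity classes
}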
2025-04-01T06:40:23.498095
2017-01-18T21:19:18
201695178
{ "authors": [ "htkassner", "klarstil", "m-knabe-reply", "shopwareBot" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10654", "repo": "shopware/shopware", "url": "https://github.com/shopware/shopware/pull/913" }
gharchive/pull-request
[BUGFIX] Use encodeURIComponent to encode semicolons Description encodeURI does not encode semicolons, which are not allowed in cookie values. encodeURIComponent does. Using encodeURI, the cookie will only be stored up to the first semicolon and thus cannot be completely retrieved afterwards. Questions Answers BC breaks? no Tests pass? yes Related tickets? SW-17497 How to test? On the JavaScript console: StorageManager.getLocalStorage().setItem("Foo", "Bar; Baz"); reload the page; StorageManager.getLocalStorage().getItem("Foo"); // -> JSON decoding exception Hey @m-knabe-reply, thanks for your contribution. Your change looks good to me and makes sense. Hello, thank you for creating this pull request. I have opened an issue on our Issue Tracker for you. See the issue link: https://issues.shopware.com/issues/SW-17509. Please use this issue to track the state of your pull request. Thank you for your contribution. This issue was fixed with commit 805c11e. I will close this PR now.
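The difference is easy to demonstrate in a browser console; semicolons terminate cookie values, so they must be percent-encoded:

// encodeURI leaves ';' untouched, which truncates the stored cookie value:
encodeURI('Bar; Baz');              // -> 'Bar;%20Baz'
// encodeURIComponent encodes it, so the full value round-trips:
encodeURIComponent('Bar; Baz');     // -> 'Bar%3B%20Baz'
decodeURIComponent('Bar%3B%20Baz'); // -> 'Bar; Baz'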
2025-04-01T06:40:23.524989
2024-02-08T04:59:39
2124362851
{ "authors": [ "rukai" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10655", "repo": "shotover/shotover-proxy", "url": "https://github.com/shotover/shotover-proxy/pull/1462" }
gharchive/pull-request
Update metrics crate The shortcut macros like counter! have been removed, and the register macros like register_counter! have been renamed to the short names like counter!. The register macros can be used in place of the shortcut macros since the shortcut macros were just registering and dropping a metric internally anyway. I am no longer seeing the performance regression on CI or locally since https://github.com/shotover/shotover-proxy/pull/1463 was merged
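A sketch of the migration, assuming the renamed macros from metrics 0.22 (the metric and function names here are illustrative):

use metrics::counter;

fn record_request() {
    // Previously: register_counter!("requests_total") returned a handle,
    // and the counter!("requests_total", 1) shortcut registered,
    // incremented, and dropped a metric internally.
    // Now the short name does the registration and returns the handle:
    let requests = counter!("requests_total");
    requests.increment(1);
}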
2025-04-01T06:40:23.528498
2022-06-13T11:07:17
1269274704
{ "authors": [ "dannyhw", "intergalacticspacehighway" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10656", "repo": "showtime-xyz/showtime-frontend", "url": "https://github.com/showtime-xyz/showtime-frontend/pull/1169" }
gharchive/pull-request
Simplify storybook config Why Hey, I was checking out the repo after seeing the talk at App.js Conf and I came across some storybook config that I thought could be simplified. Congrats on launching the project, you've made something really cool here! A lot of packages are included by default in addon-react-native-web, so you don't need them listed explicitly. I've also changed the addon config to use a partial match for package names, so unless there are showtime packages you don't want to include, just @showtime is enough. How I updated main.js to remove packages from the modules-to-transpile list if they are already included by default. Test Plan I ran the storybook after making the changes and it seems to run fine. If you have Chromatic set up (which I think you do), then the visual tests will be a good way to validate this. Thank you @dannyhw ❤️
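A sketch of the simplified .storybook/main.js, assuming @storybook/addon-react-native-web's modulesToTranspile option; with the partial match, a single @showtime entry covers every @showtime/* package:

// .storybook/main.js
module.exports = {
  stories: ['../packages/**/*.stories.@(js|jsx|ts|tsx)'],
  addons: [
    {
      name: '@storybook/addon-react-native-web',
      options: {
        // react-native, react-native-web, etc. are transpiled by the
        // addon by default and no longer need to be listed here.
        modulesToTranspile: ['@showtime'],
      },
    },
  ],
};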
2025-04-01T06:40:23.532336
2023-04-28T20:26:59
1689094891
{ "authors": [ "VeryDustyBot", "enitrat" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10657", "repo": "shramee/starklings-cairo1", "url": "https://github.com/shramee/starklings-cairo1/pull/113" }
gharchive/pull-request
feat/interoperability This PR adds a new exercise about contract interoperability. I had to update the compiler version to the latest main for it to work. Time spent: 3 hours This item belongs to payment request #19A990 on OnlyDust: from shramee to enitrat, 14 items included, $500 for 1 day of work
2025-04-01T06:40:23.563120
2018-10-15T11:25:47
370117930
{ "authors": [ "Kitlabs", "ahmedmolawale", "mehmetergul" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10658", "repo": "shts/StoriesProgressView", "url": "https://github.com/shts/StoriesProgressView/issues/16" }
gharchive/issue
Flicker effect when skipping multiple stories Hi Team, First of all, great work with the library. I am having a small problem with it. When I upload multiple pictures to stories (for example, 4 pictures, of which 3 have already been seen) and then tap the story section again to view the fourth one, there is a flicker across the 3 progress bars that were already read. I don't want this skipping effect; I want playback to start directly from a particular story index. Below is the video link; you can observe the flicker effect when the progress bars are skipped: https://drive.google.com/file/d/1kpYIjovY3n6RLWrXHHrq0gE_ItDrFToU/view?usp=drivesdk @Kitlabs Hi, did you find a solution? I have the same problem, can you help please? Thanks. @Kitlabs Are you using a transparent theme? That's likely the cause.
2025-04-01T06:40:23.568258
2017-04-20T15:26:03
223105937
{ "authors": [ "LeviticusMB", "shuGH" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10659", "repo": "shuGH/vscode-table-formatter", "url": "https://github.com/shuGH/vscode-table-formatter/issues/7" }
gharchive/issue
Border-less Markdown tables I love how easy it is to reformat the current table, or all tables, in a Markdown file, but I'm not too fond of the styling. I much prefer borderless tables. Any chance you could add that? There is another table formatter, https://marketplace.visualstudio.com/items?itemName=darkriszty.markdown-table-prettify, which detects the table style automatically. Would be really nice if you could do that too! Thanks for using it, and sorry for my late reply. I understand the request. I wasn't expecting it, but it's a good feature! I will consider it, perhaps in May, as an added config. Thank you for waiting. I released v1.2.1 and added a borderless table configuration. Try tableformatter.markdown.tableEdgesType = "Borderless". Awesome! I like it! Thanks.
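For reference, the new option goes in VS Code's settings.json (v1.2.1 or later):

{
  "tableformatter.markdown.tableEdgesType": "Borderless"
}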
2025-04-01T06:40:23.579334
2022-01-31T15:52:39
1119608066
{ "authors": [ "fapth", "shuchkin" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10660", "repo": "shuchkin/simplexlsxgen", "url": "https://github.com/shuchkin/simplexlsxgen/issues/51" }
gharchive/issue
Problem with date year < 1900 and year = 0000 Hello, there is a problem with dates smaller than 1900, like '1876-01-30', and also a problem with the year "0000-00-00". The latter I fixed with if ( $year === 0 || (int) $year == 0 ) { return $excelTime; } Fixed in 1.0.23; use raw values: $cell = "\0".'1985-01-13'; It is not "German time"; it uses the OS date/time/datetime format by default. Yes, I know it's using the OS date format. But the script changes it automatically, so it is now mixing raw types and OS dates. It would be nice to be able to set the format for both. But this is maybe only an edge case. If you have mixed dates <1900 and >1900, then use raw (string) values for all your dates. "But the script changes it automatically" -- setlocale temporarily resets LC_NUMERIC to neutral for a correct cast of numbers to strings.
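A sketch of the raw-value workaround for pre-1900 dates, using SimpleXLSXGen's documented fromArray()/saveAs() API (the "\0" prefix marks the cell as a raw string, as noted above; file and column names are illustrative):

<?php
use Shuchkin\SimpleXLSXGen;

$rows = [
    ['Event', 'Date'],
    // 1876 predates Excel's serial-date epoch, so keep it as a raw string:
    ['Birthday', "\0" . '1876-01-30'],
];

SimpleXLSXGen::fromArray($rows)->saveAs('dates.xlsx');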
2025-04-01T06:40:23.628302
2023-11-30T17:46:37
2019204849
{ "authors": [ "OchotonaPrinceps", "antoinersw", "imxtx", "sebaturen", "vladgohn" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10661", "repo": "sicxu/Deep3DFaceRecon_pytorch", "url": "https://github.com/sicxu/Deep3DFaceRecon_pytorch/issues/169" }
gharchive/issue
Use this project with recent GPU ? Hello, I have a 4080 (sm_89); according to the warning log, CUDA is not compatible. Is there a way to use a more recent version of CUDA/PyTorch? I installed the project through the requirements package; it obviously results in an error: Regards Same problem, no answers. In our case, we've opted for switching to a GPU compatible with CUDA 11, specifically with version sm_75. While this approach is functional, it's not an optimal solution. Installing and configuring CUDA with a specific version can be challenging. We switched to a GTX 20xx. I would generally like to run the project somehow, even on a CPU, just to make it work even if slowly. Do I have a chance to do this? I don't think you can run it on a CPU; TensorFlow requires a GPU. For now, I managed to run the project by changing the GPU, but I'm having issues generating the landmarks. I managed to run it on an RTX 4060. It took me a couple of hours to set up, but I am happy with the results. Nice! Can you tell us how you solved this problem?
2025-04-01T06:40:23.640283
2019-12-30T04:59:12
543735143
{ "authors": [ "dingyaguang117", "siddhantgoel" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10662", "repo": "siddhantgoel/tornado-sqlalchemy", "url": "https://github.com/siddhantgoel/tornado-sqlalchemy/issues/88" }
gharchive/issue
Should we alway set engine_options.pool_timeout to zero? engine_options = { 'pool_size': 2, 'pool_timeout': 1, 'max_overflow': 0 }, Once the concurrency exceed pool_size+max_overflow, the program will deadlock. I think the reason is the new-coming request blocked at waiting for a connection, Since tornado is single-thread model, this kind blocking will cause deadlock (util timeout ). Below is what I got from gdb, the program is waiting for a connection. Traceback (most recent call first): File "/usr/lib/python3.7/threading.py", line 300, in wait gotit = waiter.acquire(True, timeout) File "/usr/lib/python3.7/site-packages/sqlalchemy/util/queue.py", line 162, in get self.not_empty.wait(remaining) File "/usr/lib/python3.7/site-packages/sqlalchemy/pool/impl.py", line 117, in _do_get return self._pool.get(wait, self._timeout) File "/usr/lib/python3.7/site-packages/sqlalchemy/pool/base.py", line 492, in checkout rec = pool._do_get() File "/usr/lib/python3.7/site-packages/sqlalchemy/pool/base.py", line 760, in _checkout fairy = _ConnectionRecord.checkout(pool) File "/usr/lib/python3.7/site-packages/sqlalchemy/pool/base.py", line 363, in connect return _ConnectionFairy._checkout(self) File "/usr/lib/python3.7/site-packages/sqlalchemy/engine/base.py", line 2276, in _wrap_pool_connect return fn() File "/usr/lib/python3.7/site-packages/sqlalchemy/engine/base.py", line 2242, in _contextual_connect self._wrap_pool_connect(self.pool.connect, None), File "/usr/lib/python3.7/site-packages/sqlalchemy/orm/session.py", line 432, in _connection_for_bind conn = bind._contextual_connect() File "/usr/lib/python3.7/site-packages/sqlalchemy/orm/session.py", line 420, in _connection_for_bind conn = self._parent._connection_for_bind(bind, execution_options) File "/usr/lib/python3.7/site-packages/sqlalchemy/orm/session.py", line 313, in connection return self._connection_for_bind(bind, execution_options) File "/usr/lib/python3.7/site-packages/sqlalchemy/orm/persistence.py", line 1604, in _connections_for_states connection = uowtransaction.transaction.connection(base_mapper) File "/usr/lib/python3.7/site-packages/sqlalchemy/orm/persistence.py", line 374, in _organize_states_for_save base_mapper, uowtransaction, states File "/usr/lib/python3.7/site-packages/sqlalchemy/orm/persistence.py", line 213, in save_obj ) in _organize_states_for_save(base_mapper, states, uowtransaction): File "/usr/lib/python3.7/site-packages/sqlalchemy/orm/unitofwork.py", line 589, in execute uow, File "/usr/lib/python3.7/site-packages/sqlalchemy/orm/unitofwork.py", line 422, in execute rec.execute(self) File "/usr/lib/python3.7/site-packages/sqlalchemy/orm/session.py", line 2577, in _flush flush_context.execute() File "/usr/lib/python3.7/site-packages/sqlalchemy/orm/session.py", line 2479, in flush self._flush(objects) File "/usr/lib/python3.7/site-packages/sqlalchemy/orm/session.py", line 482, in _prepare_impl self.session.flush() File "/usr/lib/python3.7/site-packages/sqlalchemy/orm/session.py", line 503, in commit self._prepare_impl() File "/usr/lib/python3.7/site-packages/sqlalchemy/orm/session.py", line 1036, in commit self.transaction.commit() File "examples/multi_databases.py", line 63, in get session.commit() File "/usr/lib/python3.7/site-packages/tornado/web.py", line 1699, in _execute result = await result <built-in method run of Context object at remote 0x7fc857aeb910> File "/usr/lib/python3.7/asyncio/events.py", line 88, in _run self._context.run(self._callback, *self._args) File "/usr/lib/python3.7/asyncio/base_events.py", line 
1771, in _run_once handle._run() File "/usr/lib/python3.7/asyncio/base_events.py", line 534, in run_forever self._run_once() File "/usr/lib/python3.7/site-packages/tornado/platform/asyncio.py", line 148, in start self.asyncio_loop.run_forever() File "examples/multi_databases.py", line 106, in <module> IOLoop.current().start() What happens if you skip pool_timeout altogether and just rely on SQLAlchemy to handle it? The default pool_timeout is 30; after 30s the blocking coroutine will raise a TimeoutError and respond with a 500, but the other incoming requests will continue to block while waiting for a connection. The deadlock will never be broken unless there are no incoming requests. True, but if the timeout is disabled, then at some point the database would complain about too many connections and the responses would 500 anyway. My proposal would be to not do anything of our own here, and just keep the SQLAlchemy defaults. Database connection bugs are tricky to debug, so I'd rather not interfere with those settings. :) Yeah, it can't solve the 500 problem. Users should decide how to deal with it themselves.
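A sketch of the two configurations discussed (values illustrative; the keys are standard SQLAlchemy create_engine() pool options):

# Risky on tornado's single-threaded event loop: once pool_size +
# max_overflow connections are checked out, the next checkout blocks
# the whole loop until pool_timeout expires.
engine_options = {
    'pool_size': 2,
    'pool_timeout': 1,   # seconds a checkout may block
    'max_overflow': 0,
}

# Proposal from the thread: omit the pool options and keep SQLAlchemy's
# defaults (pool_timeout=30, max_overflow=10), sizing pool_size for the
# expected concurrency instead.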
2025-04-01T06:40:23.689125
2024-01-17T08:13:13
2085645061
{ "authors": [ "HerrNaN" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10663", "repo": "sidusIO/sediment", "url": "https://github.com/sidusIO/sediment/issues/43" }
gharchive/issue
gitconfig includeIf directives do not work properly inside distrobox [includeIf "gitdir:~/work/"] path = work.gitconfig This config should load work.gitconfig when in a repo that is a subdirectory of ~/work, and it works well on the host. However, it does not work when using git inside a distrobox for some reason. The current workaround is to map the distrobox's git to the host's.
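One way to implement the workaround mentioned above is distrobox's host-exec shim, so that git inside the box is actually the host binary (paths are a sketch):

# Inside the distrobox: shadow git with the host-exec wrapper
mkdir -p ~/.local/bin
ln -s /usr/bin/distrobox-host-exec ~/.local/bin/git
# ~/.local/bin must come before /usr/bin in $PATH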
2025-04-01T06:40:23.699881
2024-07-17T10:33:38
2413248668
{ "authors": [ "chintankavathia" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10664", "repo": "siemens/ngx-datatable", "url": "https://github.com/siemens/ngx-datatable/pull/77" }
gharchive/pull-request
fix: tree grouping breaks if rows are assigned statically Fixes the issue where tree grouping breaks if rows are directly set on ngx-datatable without doing any async calls. What kind of change does this PR introduce? (check one with "x") [x] Bugfix [ ] Feature [ ] Code style update (formatting, local variables) [ ] Refactoring (no functional changes, no api changes) [ ] Build related changes [ ] CI related changes [ ] Other... Please describe: What is the current behavior? (You can also link to an open issue here) What is the new behavior? Does this PR introduce a breaking change? (check one with "x") [ ] Yes [x] No If this PR contains a breaking change, please describe the impact and migration path for existing applications: ... Other information: @timowolf Could you please review this PR? Thanks!
2025-04-01T06:40:23.709015
2018-05-28T10:12:11
326973630
{ "authors": [ "UrbanSwati", "nilsburg", "sigalor" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10665", "repo": "sigalor/whatsapp-web-reveng", "url": "https://github.com/sigalor/whatsapp-web-reveng/issues/50" }
gharchive/issue
can't connect: message type: closed Hi, I've been using this project for a while without problems but since the last 2 days I've not been able to connect. I get every time the following message on Chrome console while trying to connect to he API: got message {from: "meta", type: "closed"} Above I get the following message: WebSocket connection to 'ws://localhost:2019/' failed: Connection closed before receiving a handshake response This happens every time I click on the Click to connect to API button. Backend, API server and HTTP server seems to be running fine. Hmm that's odd... Someone else had a similar error at #5 and #48. Which operating system do you use? Might it be a firewall issue for you? Same here, it's been working well but it stopped. getting a ["Cmd",{"type":"update"}] Not getting the ref like usual when trying to generate the QR code @sigalor I am using Ubuntu 17.10 I followed the steps on #5 and everything seems fine. Accesing http://<IP_ADDRESS>:2019/ (<IP_ADDRESS> is the machine I am running the service on) shows a page with the following text: Upgrade Required @UrbanSwati Thanks for this info, now I know the cause of @nilsburg's error. Looking at the generateQRCode function of backend/whatsapp.py, you can see that the script currently wants the server to use WA Web version 0.2.9229, but apparently, the server does not support it anymore. Just exchange 9229 by 9547, because when you look at the websocket log of original WA Web using the Chrome developer console, you can see it requesting version 0.2.9547, thus this is the newest one. A question to everyone: Is it possible to query the server for the newest WA Web version directly, i.e. without possibly receiving the ["Cmd",{"type":"update"}] message beforehand? This would allow the Python script to not rely on using a hardcoded version number. @nilsburg, the Upgrade Required text is absolutely plausible, because a WebSocket server is listening on port 2019. Accessing it with a HTTP request does not really make sense. Hi, Yes! It's now working, but it seems that the JSON object with the messages has changed quite a bit so I will have to adapt my app. Thank you very much for your help! I now realise that the error I was getting on chrome was my fault because the client script was trying to connect to localhost:2019 while the service was launched on another machine. Sorry for the mistake. But updating the WA Web version solved my problem... so thanks again!
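The fix amounts to bumping the hardcoded version used by generateQRCode in backend/whatsapp.py; a sketch (the variable name is illustrative, and the numbers come from this thread, so they will change again as the server updates):

# backend/whatsapp.py
# The server rejected the old client version with ["Cmd", {"type": "update"}].
WHATSAPP_WEB_VERSION = [0, 2, 9547]   # was [0, 2, 9229]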
2025-04-01T06:40:23.927574
2022-05-10T21:36:09
1231728689
{ "authors": [ "frankTheTank72", "paulpoco", "rogerwim" ], "license": "Unlicense", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10667", "repo": "signum-network/SIPs", "url": "https://github.com/signum-network/SIPs/issues/53" }
gharchive/issue
Feature Request NFT: add tokens and SIGNA flow to owner Note: I do not know how to format for SIPs. Request NFT: add tokens and SIGNA flow to owner. During the NFT portal screen to add your graphic: a) the connected XT wallet would know what existing tokens the account has b) a drop-down to show what tokens are selectable c) a box for how many tokens to include with NFT minting Add code to the minted NFT so that when SIGNA lands in the smart contract address of the NFT, the new SIGNA gets sent to the owner of the NFT. This would be to take an existing token and include it with a unique NFT graphic that says how many tokens are included. And when the token sends out dividends, the NFT smart contract passes the received SIGNA to the NFT owner's SIGNA address. Not sure about royalties or platform fees. This should be in NFT feedback, not SIPs. I'll close the ticket; please reopen if still needed on https://github.com/signum-nft/feedback-signumart
2025-04-01T06:40:23.941793
2021-03-12T13:47:18
830135219
{ "authors": [ "ahmetb", "dlorenc" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10668", "repo": "sigstore/cosign", "url": "https://github.com/sigstore/cosign/issues/82" }
gharchive/issue
Initial release plans for cosign I reeallly want to try to sign the first release of cosign, with cosign, if we can figure out a way that makes sense. I think we'll have to do it manually, since we won't have a signed release to use in automation yet though. Thankfully we can skip some of the issues with reproducible builds. Here's a rough plan: The trick is that we can use Go to build a reproducible binary. Set up GitHub Actions to build and publish a cosign binary on each commit to main. cosign builds should be reproducible, with some care. We'll make these build logs public, showing the sha of each binary we build. One or more of us will generate keys specific to this initial release. The more the merrier! We can commit these public keys to a file here in the repo. We can pick a commit to tag our initial release at. We'll have several of us build our own versions of cosign from it, and check the SHAs against the ones from the GitHub action. We can publish those in an issue. If they match, we can tag the release. We'll then sign the first binaries and git tag (v0.1.0)! We'll publish the signatures in Rekor and in the GitHub release. The keys to verify the release will be included in the git repo at the matching commit! After this, we can change our automation to use its own public/private key-pair (also stored in this repo), and the last signed cosign binary release (v0.1.0) in CI to sign continuous builds. Signatures can get published to Rekor, the build log, and wherever we host the binaries. Actual tagged releases will also be signed by the automation, and one or more maintainers. We'll sign the git commits as well as the resulting binaries. These can get published to the GitHub release. These public keys (and their mapping to maintainers) will also be stored in the repo. We can write a verification script to help people verify. "Set up GitHub Actions to build and publish a cosign binary on each commit to main" Won't this make the repo quite difficult for new contributors trying to clone? Since we wouldn't want to rewrite the history in the future in a project like this, I think this is something to avoid? Sorry! I think that wasn't clear. How about: "publish a binary after each commit to main"? Not pushing things back into main :) I mean we should publish CI builds somewhere (GCS, GitHub artifacts) after each build. All done here! Release 2 coming soon :)
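A sketch of how the verification described above could look for a consumer, using the blob-signing flags early cosign shipped with (the key and file names here are hypothetical):

# Verify the released binary against a maintainer key committed to the repo:
cosign verify-blob -key release.pub -signature cosign-linux-amd64.sig cosign-linux-amd64

# The same signature can also be looked up in the Rekor transparency log.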
2025-04-01T06:40:23.951857
2022-04-24T17:36:00
1213717631
{ "authors": [ "codecov-commenter", "vaikas" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10669", "repo": "sigstore/cosign", "url": "https://github.com/sigstore/cosign/pull/1796" }
gharchive/pull-request
Handle context cancelled properly + tests. Signed-off-by: Ville Aikas<EMAIL_ADDRESS> Summary Handle closing of the context properly and return an error for it. Ticket Link Fixes Release Note Codecov Report Merging #1796 (a0df4f9) into main (0c4cf2e) will increase coverage by 0.01%. The diff coverage is 66.66%. @@ Coverage Diff @@ ## main #1796 +/- ## ========================================== + Coverage 32.56% 32.58% +0.01% ========================================== Files 147 147 Lines 9297 9303 +6 ========================================== + Hits 3028 3031 +3 - Misses 5915 5917 +2 - Partials 354 355 +1 Impacted Files Coverage Δ pkg/cosign/kubernetes/webhook/validator.go 75.55% <66.66%> (+0.32%) :arrow_up: pkg/cosign/tuf/client.go 61.68% <0.00%> (-0.82%) :arrow_down: Continue to review full report at Codecov. Legend - Click here to learn more Δ = absolute <relative> (impact), ø = not affected, ? = missing data Powered by Codecov. Last update 0c4cf2e...a0df4f9. Read the comment docs.
2025-04-01T06:40:23.964900
2023-01-19T14:29:49
1549276627
{ "authors": [ "kommendorkapten", "vlsi", "woodruffw", "znewman01" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10670", "repo": "sigstore/protobuf-specs", "url": "https://github.com/sigstore/protobuf-specs/issues/61" }
gharchive/issue
Do a JavaScript release See #3. This would entail: setting up an npm project for this repo automating the procedure to generate JS bindings to the protos (Makefile), generally checking in the generated code (gen/ and .github/workflows/) automating/documenting the procedure to do releases of the generated code (RELEASE.md plus any GH Actions) on a tag, preferably releases/js/v0.1.1 or whatever. Then you could start consuming it in sigstore-js. CC @bdehamer @feelepxyz @eddiezane Can we refrain from checking in the generated code? It makes contributions hard as it enforced everybody to install all the tools, and sometimes it causes checking in binary code like in https://github.com/sigstore/protobuf-specs/blob/16541696de137c6281d66d075a4924d9bbd181ff/gen/pb-go/bundle/v1/sigstore_bundle.pb.go#L308-L329 I argued for exactly that in https://github.com/sigstore/protobuf-specs/pull/12 but was outvoted. The main issue is that Go kinda requires it :( I'd prefer not to reopen that can of worms. CC @asraa @kommendorkapten @woodruffw for visibility though If it's substantially more idiomatic we can skip for JS; we already skip it for Java. That said, the only tooling really needed here seems to be Docker and make; are you having issues with those? The main issue is that Go kinda requires it Can Go store the generated source into a release branch then? Frankly speaking, it is annoying that PRs like https://github.com/sigstore/protobuf-specs/pull/64 require updating and committing generated code along with the changes themselves. I'm willing to change my opinion, but I'm still a soft 👎 on removing checked-in codegen for Python: it's relatively idiomatic to do so, and shouldn't have any significant developer burden (since as @znewman01 mentioned everything should be Dockerized). I haven't run into any significant issues with either Go or Python's codegen, despite having some issues with Fulcio's similar codegen (which is not Dockerized). I think it makes sense to keep as-is, as we get to exercise the codegen frequently to avoid any hiccups during a planned release. And as already said, it's dockerized so it should be of minimal extra burden for a contributer (albeit I understand that it's easy to forget to run the code generation and so get some possible unexpected failures). codegen for Python: it's relatively idiomatic to do so It thought Python was using pypi or something like that rather than fetching dependencies from the source code. I haven't run into any significant issues with either Go or Python's codegen I wasn't able to get make to work: https://github.com/sigstore/protobuf-specs/pull/64#issuecomment-1400201925 I might need to try harder, however, previously, docker worked for me. I see how checking in the generated code might be helpful in case the generator is complicated to set up. For instance, suppose there's a "database engine" project, and it might have a generated SQL parser. Suppose, the parser generator is not available on all the platforms, so it might help if the generated parser was committed under source control, so everybody can work on the database without spending time on installing the generator. On the other hand, sigstore/protobuf-specs does not fit that pattern. The only purpose of the repository is to collect .proto files, and there is no other business logic. I am not sure I want to install toolchains (even in Docker) for all the ecosystems. 
It is more-or-less fine when make fetches docker images for go and python only; however, I am afraid it will go wild as the number of generators increases: go, python, javascript, rust, java, clojure, haskell, c#, elm, you name it. I would refrain from invoking a build that downloads and installs all those dependencies, especially when the only thing I want is to fix a typo in the proto definition and/or add an annotation. "It reduces the likelihood of codegen errors and regressions: a regression in protoc or betterproto can't result in us silently publishing a broken package" -- I do not see how committing the generated code prevents regressions. I can easily see how running tests with the newly generated code could prevent regressions; however, committing the generated code does not make it tremendously better. "It matches 'publish what you know': someone who reports a bug in the Python bindings can send us a permalink to the exact line that's causing the error, without having to share their own copy of the generated code" -- if you want something for documentation purposes, then it might live on a documentation branch/repository.
2025-04-01T06:40:23.969097
2024-01-09T06:48:07
2071733519
{ "authors": [ "loosebazooka", "vlsi" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10671", "repo": "sigstore/sigstore-java", "url": "https://github.com/sigstore/sigstore-java/pull/603" }
gharchive/pull-request
chore: remove "Signing publications" from the logging output Summary The message was not adding much, and it was emitted during configuration phase, so it was not even aligned with the actual signing work. Release Note NONE Documentation NONE gotta dco it Frankly, DCO is useless since GitHub's terms of service include already require that any content someone posts on GitHub must comply with the target's license: https://docs.github.com/en/site-policy/github-terms/github-terms-of-service#6-contributions-under-repository-license See https://ben.balter.com/2018/01/02/why-you-probably-shouldnt-add-a-cla-to-your-open-source-project/#if-a-license-isnt-good-enough-for-maintainers-you-shouldnt-subject-users-to-it Can we somehow lift DCO restriction? I'm running into this DCO validation every time, and it does take time to re-commit and re-push Yeah, it's just applied across all sigstore repos. So I dunno :shrug:
2025-04-01T06:40:23.986728
2020-08-20T03:56:29
682391897
{ "authors": [ "12101111", "leshow", "sile" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10672", "repo": "sile/patricia_tree", "url": "https://github.com/sile/patricia_tree/issues/9" }
gharchive/issue
Is it safe to share a read-only PatriciaMap across multiple threads? In other words, is it safe to add unsafe impl<V> Sync for Node<V> {} ? I'm not 100% sure, but it seems okay to implement Sync for Node<V>. You need to implement it like unsafe impl<V: Sync> Sync for Node<V> {} unsafe impl<V: Send> Send for Node<V> {} which is something I got wrong initially @leshow You're right. Thank you for your advice!
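A sketch of the usage this enables, assuming the corrected bounds above are in place (the map API follows the crate's README):

use std::sync::Arc;
use std::thread;
use patricia_tree::PatriciaMap;

fn main() {
    let mut map = PatriciaMap::new();
    map.insert("foo", 1);

    // Read-only from here on; Arc requires PatriciaMap: Send + Sync.
    let shared = Arc::new(map);

    let handles: Vec<_> = (0..4)
        .map(|_| {
            let m = Arc::clone(&shared);
            thread::spawn(move || assert_eq!(m.get("foo"), Some(&1)))
        })
        .collect();

    for h in handles {
        h.join().unwrap();
    }
}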
2025-04-01T06:40:23.993925
2017-02-08T10:41:04
206158602
{ "authors": [ "fabpot", "hkdobrev", "jdreesen", "jlHertel", "skalpa" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10673", "repo": "silexphp/Silex", "url": "https://github.com/silexphp/Silex/pull/1484" }
gharchive/pull-request
Added option validator.mapping.use_annotation Added option validator.mapping.use_annotation to change the default usage of StaticMethodLoader to AnnotationLoader. Added option validator.mapping.cache to allow passing a CacheInterface for usage in the mapping loader. This should simplify the creation of a validator that reads annotations instead of static methods. @SpacePossum, Ok, I will write some tests and a documentation update. I'm not very sure how to use github to do that in this same pull request. Any idea? Or should I make a new pull request? I'm not very sure how to use github to do that in this same pull request. Any idea? Or should I make a new pull request? Just push to your branch and GitHub will update the according pull request automatically ;) Any thoughts on this? Why just add a flag and not introduce a new validator.mapping.loader service that users could override? This would let developers use any of the available loaders This would also let them configure the AnnotationReader they want to use Something like this would be better IMHO: $app['validator.mapping.cache'] = null; $app['validator.mapping.loader'] = function ($app) { return new StaticMethodLoader(); } $app['validator.mapping.class_metadata_factory'] = function ($app) { return new LazyLoadingMetadataFactory($app['validator.mapping.loader'], $app['validator.mapping.cache']); }; If you wish, you could then add an example in the documentation to show how to use the AnnotationLoader by overriding the new service: $app->register(new ValidatorServiceProvider(), array( 'validator.mapping.loader' => function ($app) { return new AnnotationLoader($app['annotations']); }, )); @jlHertel Do you think you'd be able to update this soon and resolve the conflicts? Thanks! I guess this should be closed as well as per https://github.com/silexphp/Silex/issues/1588#issuecomment-368390861 @hkdobrev That's correct. Let's close.
2025-04-01T06:40:23.997683
2016-09-20T05:35:31
177968796
{ "authors": [ "jainanshul" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10674", "repo": "silklabs/node-nnpack", "url": "https://github.com/silklabs/node-nnpack/pull/1" }
gharchive/pull-request
Androidify node-nnpack Tested to make sure that node-nnpack compiles fine for both OSX and Android with this change. I haven't tested running this on an Android device yet. Would merge once I am done testing. I am able to require node-nnpack on the device without any missing dependency errors. Merging this change now so I can import this module as subtree in silk-core.
2025-04-01T06:40:24.003817
2023-03-31T19:12:38
1649885198
{ "authors": [ "chrisvire", "saidbrandonsaid" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10675", "repo": "sillsdev/appbuilder-pwa", "url": "https://github.com/sillsdev/appbuilder-pwa/pull/167" }
gharchive/pull-request
Feature/text selection toolbar Starter copy, bookmark, highlight, and note functionality We merged this code in #170
2025-04-01T06:40:24.023562
2023-02-17T00:54:38
1588594515
{ "authors": [ "Danny6151", "edwardsnj" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10676", "repo": "silver-sat/Payload_Board", "url": "https://github.com/silver-sat/Payload_Board/issues/2" }
gharchive/issue
Pin changes There are a couple of problems that need to be resolved with the pins. Pins 19 and 23 on the Raspberry Pi Zero are switched. Pins on the H1 and H2 connectors will have to be updated to match the EPS. These issues are with respect to Release 1.0. Systems have chosen the replacement pins: H2-12 -> H2-22, H2-14 -> H2-24. Update: Pins 19 and 23 from the Raspberry Pi Zero to pins 1 and 3 on the MAX3100 (U5), called BCM_11_SCLK and BCM_10_MOSI, are switched and need to be fixed.
2025-04-01T06:40:24.034156
2017-01-13T03:00:09
200536583
{ "authors": [ "Rudigern", "dhensby", "tractorcow" ], "license": "BSD-3-Clause", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10677", "repo": "silverstripe/silverstripe-framework", "url": "https://github.com/silverstripe/silverstripe-framework/issues/6497" }
gharchive/issue
AttributesHTML() not following exclude parameters If you use something like CreditCardField, its template uses $AttributesHTML with exclude parameters. As far as I can tell, the parameters don't make it to the function in FormField.php, so all the attributes are returned, which in the case of CreditCardField breaks the form. Creating a duplicate function and calling it from the template sends only the first attribute. Not too sure what's going on with it, but as far as I can tell it's a bug. Tested it on two different SilverStripe installations, currently 3.4.1. Yeah, there's a bug in the template. In order to pass parameters to a getter, we need to use the FULL method name, i.e. $getAttributesHTML, not $AttributesHTML. You can substitute it in your code as such if you want to test it locally. :) This has been discussed repeatedly - "getters" should not accept parameters. This "bug" can't be fixed in a patch release because the API would need to change in a breaking way (remove getHTMLAttributes), and we're not going to accept sending params to getters via the template. Thanks @tractorcow, that worked perfectly. @dhensby While I understand what you're saying about the getters, currently CreditCardField doesn't work as intended and you have to have your own template to get it to work (it produces invalid HTML currently). Just changing $AttributesHTML to $getAttributesHTML as @tractorcow explains fixes the issue and only changes the template rather than breaking the API. I know the issue I raised was related to the "getters" issue; however, ultimately I just want CreditCardField to work and was trying to be helpful with my debugging findings. I can also do the change if it helps. Sure - if we can fix a core issue here by just amending a core template, we should do it. Fixed with https://github.com/silverstripe/silverstripe-framework/pull/6504
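A sketch of the template-side workaround described above (SS3 template syntax; the excluded attribute names are illustrative):

<%-- $AttributesHTML silently drops the arguments; the full getter
     name is required for parameters to be passed through: --%>
<input $getAttributesHTML("type", "value") type="text" />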
2025-04-01T06:40:24.036662
2017-10-12T21:00:51
265084188
{ "authors": [ "flamerohr", "tractorcow" ], "license": "BSD-3-Clause", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10678", "repo": "silverstripe/silverstripe-framework", "url": "https://github.com/silverstripe/silverstripe-framework/pull/7477" }
gharchive/pull-request
ENHANCEMENT Allow extensions to intercept incorrect deletes on unpublish The issue with the existing behaviour is that some queried tables are NOT a part of the hierarchy, and the subsequent delete could unintentionally delete objects outside of this object. For instance, fluent uses a localisation table as an inner join. This existing behaviour deletes these locales, but using the ID of the record, not the ID of the localisation (which is a separate ID itself). With the given extension points extensions are able to intercept these queries and rewrite the delete safely without violating database integrity. For instance, this is a page that has three failover locales. When deleted it performs these deletes: "SiteTree_Live, ID = 3" "SiteTree_Localised_Live, ID = 3" "SiteTree_Localised_Live, ID = 3" "SiteTree_Localised_Live, ID = 3" Those following three deletes should be RecordID = 3 not ID = 3. Seems reasonable, and it doesn't change anything if there were no extensions :)
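In concrete terms, the fix lets an extension rewrite the unsafe deletes from the example above (a sketch; table and ID values are taken from the PR description):

-- Before: localisation rows matched by the wrong column,
-- deleting unrelated records that happen to share the ID.
DELETE FROM "SiteTree_Localised_Live" WHERE "ID" = 3;

-- After interception by the extension:
DELETE FROM "SiteTree_Localised_Live" WHERE "RecordID" = 3;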
2025-04-01T06:40:24.038836
2019-01-30T14:30:11
404797099
{ "authors": [ "UndefinedOffset" ], "license": "BSD-3-Clause", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10679", "repo": "silverstripe/silverstripe-framework", "url": "https://github.com/silverstripe/silverstripe-framework/pull/8772" }
gharchive/pull-request
FIX: Block Manifest of the compatibility class Object and lean on the compatibility auto loader On PHP 7.2 initial dev/build of a SilverStripe 3.7.1 site can throw a Fatal error: Cannot use 'Object' as class name as it is reserved, I experienced this on SilverStripe Platform causing the deployment to fail and rollback. This pull request simply adds a manifest exclude file in core/compat which causes the framework to lean on the auto loader defined in model/fieldtypes/compat/autoload.php instead. Will this make it into the upcoming 3.7.3 release? or will it need to wait until 3.7.4? Trying to plan ahead with one of our projects that's moving to SilverStripe Platform on a stack that's currently configured for PHP 7.2, but without this pull being merged would need to be downgraded to PHP 7.1.
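For context, SilverStripe 3's class manifest skips any directory containing an empty _manifest_exclude marker file, which is what this PR adds (a sketch of the layout, assuming that convention):

# framework/core/compat/_manifest_exclude  <- empty marker file
# With it in place, model/fieldtypes/compat/autoload.php registers the
# legacy Object alias lazily instead of the manifest parsing it eagerly.
touch framework/core/compat/_manifest_exclude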
2025-04-01T06:40:24.058293
2016-06-28T10:52:00
162657123
{ "authors": [ "caseyjhol", "nov23" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10680", "repo": "silviomoreto/bootstrap-select", "url": "https://github.com/silviomoreto/bootstrap-select/issues/1432" }
gharchive/issue
style for Placeholder(title) I need to style the placeholder on my site with CSS (in another color), but I can't change its style, because when I select an option it has the same color as the placeholder. See the guidelines for contributing bug reports. Style via the bs-placeholder class.
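The bs-placeholder class lets you target only the unselected state; a minimal sketch:

/* Only the button showing the placeholder title gets this class,
   so selected options keep their normal color. */
.bootstrap-select .bs-placeholder,
.bootstrap-select .bs-placeholder:hover {
  color: #999;
}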
2025-04-01T06:40:24.061991
2017-03-07T09:38:51
212377860
{ "authors": [ "PiRK", "alemirone", "t20100" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10681", "repo": "silx-kit/silx", "url": "https://github.com/silx-kit/silx/pull/651" }
gharchive/pull-request
replace=False preserve mask in StackView when updating frame number useful for exploring 3D stacks This will close issue #650 Looks good to me.
2025-04-01T06:40:24.089414
2023-02-05T22:42:53
1571645798
{ "authors": [ "mateoKutnjak", "simolus3" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10682", "repo": "simolus3/drift", "url": "https://github.com/simolus3/drift/issues/2302" }
gharchive/issue
Easy way to determine if two rows have the same columns? I have this method to help me determine whether the old row and the excluded row have the same fields. I am using the insert method with the DoUpdate.withExcluded parameter, which has a where clause with (old, excluded) => isDifferent(old, excluded). Is there any way to automate the process? I am comparing almost all same-name columns of old with excluded. As this approach is prone to logical errors that will be hard to detect, can this be done by looping through same-name columns? Maybe comparing them with something like a reduce method while excluding some fields that we don't want compared? More explanation is available in issue https://github.com/simolus3/drift/issues/2299#issue-1571171981.

@override
Expression<bool> isDifferent(Users old, Users excluded) {
  return (old.dateJoined.isExp(excluded.dateJoined) &
          old.email.isExp(excluded.email) &
          old.username.isExp(excluded.username) &
          old.syncState.isExp(excluded.syncState) &
          old.isGuest.isExp(excluded.isGuest) &
          old.isVerified.isExp(excluded.isVerified) &
          old.isDeletedLocally.isExp(excluded.isDeletedLocally))
      .not();
}

You can always cast from a DSL table instance (Users) to the internal table object (TableInfo). This interface gives you full access to the columns defined in the table, which allows doing reflection over tables:

Expression<bool> isDifferent(Users old, Users excluded) {
  final oldTbl = old as TableInfo;
  final excludedTbl = excluded as TableInfo;

  return oldTbl.$columns
      .where((c) => true) // Skip columns you don't want to compare
      .map((column) {
        final excludedColumn = excludedTbl.columnsByName[column.name]!;
        return column.equalsExp(excludedColumn);
      })
      .reduce((a, b) => a & b)
      .not();
}
2025-04-01T06:40:24.095856
2024-07-05T15:25:33
2392802970
{ "authors": [ "Guang-B", "simolus3" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10683", "repo": "simolus3/drift", "url": "https://github.com/simolus3/drift/issues/3078" }
gharchive/issue
Migration Error when altered same table twice. I have a table called Box. The scenario: I want to add a refferenceId column and add it as a unique key in version 2. Then, in version 3, I need to add another column called refferenceNumber and also add it as a unique key. This is my migration strategy:

onUpgrade: (Migrator m, int from, int to) async {
  await transaction(() async {
    if (from < 2) {
      await m.alterTable(
        TableMigration(
          box,
          newColumns: [box.refferenceId],
        ),
      );
    }
    if (from < 3) {
      await m.alterTable(
        TableMigration(
          box,
          newColumns: [box.refferenceNumber],
        ),
      );
    }
  });
}

The case is: when my app upgrades from version 1 to 2 and then upgrades again from version 2 to 3, the migration works fine. But when my app upgrades directly from version 1 to 3, I get this error: SqliteException(1): while executing, no such column: refference_number, SQL logic error (code 1): Causing statement: INSERT INTO tmp_for_copy_box ("id", "box_number", "type", "amount", "refference_number") SELECT "id", "box_number", "type", "amount","refference_number" FROM "box";, parameters: So I assume that, because the app upgrades directly from version 1 to 3, the box table is altered twice during onUpgrade. Please tell me, is this a bug or is there a mistake in my code? Thank you Indeed, the problem is that alterTable would get called twice when upgrading from 1 to 3 directly. The complicated solution is to account for this in your onUpgrade handler by always upgrading to the latest version in every branch, e.g.

if (from < 2) {
  await m.alterTable( // 1 -> current
    TableMigration(
      box,
      newColumns: [box.refferenceId, box.refferenceNumber],
    ),
  );
} else if (from < 3) {
  await m.alterTable( // 2 -> current
    TableMigration(
      box,
      newColumns: [box.refferenceNumber],
    ),
  );
}

Of course, writing migrations that just do a single step and then chaining them is much easier and scales better. However, it requires you to actually migrate towards an intermediate, outdated schema (2) in your migration logic. Since box is generated for the current table though, that's not directly possible. To do this, drift provides tools that generate step-by-step migrations based on exported database schemas. To set this up, you'd have to export your old schemas once (you can probably checkout the revision where you've updated the schema and run dart run drift_dev schema dump on that tree). Then, drift can generate all intermediate versions of your schema into a compressed format which provides enough information to write migrations like this:

onUpgrade: stepByStep(
  from1To2: (m, schema) async {
    await m.alterTable(
      TableMigration(schema.box, newColumns: [schema.box.refferenceId]),
    );
  },
  from2To3: (m, schema) async {
    await m.alterTable(
      TableMigration(schema.box, newColumns: [schema.box.refferenceNumber]),
    );
  },
),
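The export/generate workflow mentioned above, as shell commands (output paths are illustrative; see drift's migration docs for the exact layout):

# Once per released schema version, on the matching git revision:
dart run drift_dev schema dump lib/database.dart drift_schemas/

# Generate the stepByStep helper from all exported schema files:
dart run drift_dev schema steps drift_schemas/ lib/schema_versions.dart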
2025-04-01T06:40:24.101159
2020-01-29T07:33:03
556689066
{ "authors": [ "2math", "RakaAlrian", "simolus3" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10684", "repo": "simolus3/moor", "url": "https://github.com/simolus3/moor/issues/364" }
gharchive/issue
moor_generator >=2.0.1 is incompatible with json_serializable ^3.2.5 Please help, I can't start using moor because I got this error when I added moor_generator to pubspec.yaml. thanks Running "flutter pub get" in do_not... Because analyzer_plugin >=0.2.1 depends on analyzer >=0.35.3 <0.39.0 and analyzer_plugin >=0.1.0 <0.2.1 depends on analyzer >=0.35.3 <0.38.0, analyzer_plugin >=0.1.0 requires analyzer >=0.35.3 <0.39.0. And because moor_generator >=2.0.1 depends on analyzer_plugin >=0.1.0 <0.3.0, moor_generator >=2.0.1 requires analyzer >=0.35.3 <0.39.0. And because json_serializable 3.2.5 depends on analyzer ^0.39.0 and no versions of json_serializable match >3.2.5 <4.0.0, moor_generator >=2.0.1 is incompatible with json_serializable ^3.2.5. So, because do_not depends on both json_serializable ^3.2.5 and moor_generator ^2.3.1, version solving failed. pub get failed (1; So, because do_not depends on both json_serializable ^3.2.5 and moor_generator ^2.3.1, version solving failed.) You can use json_serializable: ^3.2.3 until next version if is not important to use 3.2.5 I just released moor and moor_generator version 2.4.0, which support the latest analyzer. You can remove the dependency_overrides section and just depend on ^2.4.0. Let me know if you run into any problems, thanks!
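The two resolutions discussed above, as pubspec.yaml sketches:

# Interim workaround: pin json_serializable below the conflicting release
dev_dependencies:
  moor_generator: ^2.3.1
  json_serializable: ^3.2.3

# After the fix: both support the latest analyzer
dev_dependencies:
  moor_generator: ^2.4.0
  json_serializable: ^3.2.5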
2025-04-01T06:40:24.110968
2023-05-12T19:15:08
1708088496
{ "authors": [ "pchx", "simolus3" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10685", "repo": "simolus3/sqlite3.dart", "url": "https://github.com/simolus3/sqlite3.dart/issues/161" }
gharchive/issue
Support binding user-defined values to statements

StatementImplementation.execute() requires that all parameters are provided, but only understands a fixed set of types. It would be nice to add a Bindable interface that makes this extensible, e.g.:

+abstract class Bindable {
+  // Binds this value to a statement as parameter `i`.
+  void Bind(CommonPreparedStatement stmt, int i);
+}

... and in StatementImplementation._bindParam():

+  } else if (param is Bindable) {
+    param.Bind(this, i);

This requires that Bind() downcast to PreparedStatement, so maybe Bind() would instead take a Pointer, but you get the rough idea.

I'm not entirely opposed to this, but I'm not super convinced this feature should be part of the sqlite3 package either. In my mind, that package provides a direct wrapper around sqlite3 that looks like Dart. So we have an OOP API based on sound types, but we don't have stuff that sqlite3 doesn't support either (like creating CRUD statements automatically or a direct transaction API). Adding Bindable isn't much, but it also feels a lot like something that a higher-level package should do. Do you have an example where it's considerably more convenient to use Bindable instead of just doing the translation manually? The way I see it, Bindable.Bind would just recursively call stmt.bind to set the translated value at the same index? Are you using a sqlite3 library for another language that has that feature?

Ah yeah, that's fair. I wasn't aware of that functionality in sqlite3, but I think we can add basic support for it. (Basic in the sense that it's going to be really low-level; I think even exposing the type pointer as a Dart string would be wrong given how much the document talks about how types must not be dynamic values.) Do you also need an API to read pointer values as arguments from user-defined Dart functions?

My use case is just to use an extension written in C from Dart, but I guess for completeness you would want to be able to consume/return such values from Dart extensions. In my case there is a C API that knows how to generate such values, so all I need is to get a sqlite3_stmt* to pass into that API.

> My use case is just to use an extension written in C from Dart, but I guess for completeness you would want to be able to consume/return such values from Dart extensions.

I agree that a complete solution should have that. But given how rarely the feature is used and how poorly it translates to Dart with dynamic strings, I think it might be better to not have support for pointer-passing interfaces at all.

> From what I can tell, there is no way to bind a user-defined value to a statement at all: you must pass all parameters

Yes, the intention here is to avoid bugs caused by parameters that haven't been bound. To support your use case, I've followed a different approach in d3382c5310b63dd70ed7882301607581104279d5. It adds a StatementParameters class which explicitly controls the way parameters are bound. This validation can be completely bypassed by using StatementParameters.bindCustom:

stmt.executeWith(StatementParameters.bindCustom((stmt) {
  stmt as PreparedStatement;
  callMyNativeFunction(stmt.handle); // stmt.handle is a sqlite3_stmt*
}));

Thanks! I expect many users of this feature would commonly pass a mix of standard and custom types, but this certainly works and is easiest to implement. Appreciate the quick turnaround. :) Just noticed this was committed to the v2 branch and not main; is v2 considered suitable for production use?

Right, I've pushed the change on the v2 branch, which will become the next stable release but has some features that are still in development. I might have to make breaking changes before the final release. But apart from that, the FFI implementation on the v2 branch is working and I consider it suitable for production if you depend on a fixed version (e.g. sqlite3: 2.0.0-dev.0). If breaking changes happen you'd have to migrate eventually, but so do users who are currently on 1.x of this package. The WASM implementation on the v2 branch is not stable yet.

Just as a single point of feedback: I've tried working with this a little bit and think the original proposal may be slightly more usable -- otherwise, to bind both custom and standard types in the same statement, you need to obtain a RawSqliteStatement in order to call sqlite3_bind_*.

I've added a CustomStatementParameter class in 5ddbf8c7bc7cc0fb21b2c580c9fceb5d8da7127a. It can be used to mix the two modes. So you can do something like execute([123, MyCustomValue()]) where MyCustomValue implements CustomStatementParameter and calls the C function with the statement handle. But there's no need to manually call sqlite3_bind for 123 anymore.
2025-04-01T06:40:24.127558
2019-08-13T14:29:44
480190147
{ "authors": [ "O-heu" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10688", "repo": "simondel/sonar-buildbreaker-vsts", "url": "https://github.com/simondel/sonar-buildbreaker-vsts/pull/9" }
gharchive/pull-request
8 Report task file not found: changed folder for search. Fixed with 'Build break task failing #10'.
2025-04-01T06:40:24.157592
2018-12-15T10:16:04
391364605
{ "authors": [ "JMing-Li", "simonvh" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10689", "repo": "simonvh/fluff", "url": "https://github.com/simonvh/fluff/issues/80" }
gharchive/issue
replicable clusters based on k-means

Thanks for providing such a user-friendly package! The feature-cluster results are not reproducible when I use k-means through fluff heatmap, although the results are almost the same each time I run the same command. I guess it's because of the k-means method. Is there a way to initialize the underlying random number generator?

There is not at the moment, but it's a good point. It is essential to have a replicable analysis. I'll add it.

Once version 3.0.3 is available, you can use the -S option to set the random seed for K-means clustering.
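To illustrate why a seed is needed, here is a minimal sketch using scikit-learn's KMeans (an assumption for illustration; fluff's internal clustering code may differ). Fixing the random state makes repeated runs return identical cluster assignments:

import numpy as np
from sklearn.cluster import KMeans

# Stand-in for the heatmap feature matrix (hypothetical data).
X = np.random.RandomState(0).rand(100, 5)

# Without a fixed seed, centroid initialization varies between runs;
# with random_state set, every run produces the same clustering.
labels_a = KMeans(n_clusters=3, random_state=42, n_init=10).fit_predict(X)
labels_b = KMeans(n_clusters=3, random_state=42, n_init=10).fit_predict(X)
assert (labels_a == labels_b).all()  # identical assignments on every run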
2025-04-01T06:40:24.158941
2024-02-09T01:04:19
2126307770
{ "authors": [ "simonw" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10690", "repo": "simonw/datasette", "url": "https://github.com/simonw/datasette/issues/2264" }
gharchive/issue
After 1.0a9, datasette-edit-schema to use alter-table permission

See: https://github.com/simonw/datasette-edit-schema/issues/56

This is done.
2025-04-01T06:40:24.261583
2024-01-22T20:54:05
2094732666
{ "authors": [ "sinclairems" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10691", "repo": "sinclairems/JavaScript-Quiz", "url": "https://github.com/sinclairems/JavaScript-Quiz/issues/4" }
gharchive/issue
03 - Create Initial JS

This is where most of the work will be focused. Throughout this step, I will be updating the HTML and CSS to match the JS. This is still just an initial version. The goal is for most of the file to be working; however, it's fine if everything does not work at this stage.

Not everything is working yet. This initial push was much more basic than I had planned. The next JS issue has more code.
2025-04-01T06:40:24.284868
2019-04-03T06:23:42
428586414
{ "authors": [ "DivineDominion", "sindresorhus" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10692", "repo": "sindresorhus/Preferences", "url": "https://github.com/sindresorhus/Preferences/issues/15" }
gharchive/issue
Change window title to Preference Pane title

Found this advice in the HIG today: https://developer.apple.com/design/human-interface-guidelines/macos/app-architecture/preferences/

So maybe after #6 is merged:

- [ ] rename PreferencePane.toolbarTitle to simply title or preferencePaneTitle
- [ ] use the title for the window title

This also makes #12 obsolete.

I have seen it, but I have intentionally ignored it. I personally don't think it makes much sense to repeat the tab title in the window title. It's already clear which tab it's on. I was kinda hoping no one would notice 😝 Also, a lot of Apple's apps and even macOS don't follow the HIG entirely. But I guess we should do this to follow the HIG...

> Update the window's title to reflect the currently visible preference pane. For example, if your preferences window has a General preference pane, the window's title should be General when that pane is active.

This makes the window title unusably generic in the window list. It will just show "General", which does not make it clear that the window is a preference window...

We also need to take into account this:

> If your window doesn't have multiple preference panes, then its title should be App Name Preferences.

Some observations:

- Finder.app shows "Finder Preferences" as the window title for all the panes. Which is a violation of: "Update the window's title to reflect the currently visible preference pane."
- Messages.app has the minimize button enabled. Which is a violation of: "Disable the Minimize and Zoom buttons."
- Mail.app has the minimize button enabled. Which is a violation of: "Disable the Minimize and Zoom buttons."
- iTunes.app doesn't apply the changes immediately. Which is a violation of: "Apply preference changes immediately."
- Dictionary.app has the zoom button enabled. Which is a violation of: "Disable the Minimize and Zoom buttons."
- Books.app uses the tab title and the word "Preferences" as the window title, for example, "General Preferences". Which is a violation of: "For example, if your preferences window has a General preference pane, the window's title should be General when that pane is active."
- TextEdit.app uses the window title "Preferences". Which is a violation of: "If your window doesn't have multiple preference panes, then its title should be App Name Preferences."
- FaceTime.app uses the window title "Preferences". Which is a violation of: "If your window doesn't have multiple preference panes, then its title should be App Name Preferences."
- Keychain Access.app uses the window title "Preferences". Which is a violation of: "If your window doesn't have multiple preference panes, then its title should be App Name Preferences."
- Script Editor.app has the minimize button enabled. Which is a violation of: "Disable the Minimize and Zoom buttons."
- Audio MIDI Setup.app has the minimize and zoom buttons enabled. Which is a violation of: "Disable the Minimize and Zoom buttons."
- Xcode.app shows the tab as the window title, but then modifies the "Window" menu to show "Preferences - General", admitting that the tab title is not clear enough of a window title.

Apple, as usual, has such a double standard. They expect third-party apps to follow their HIG, but they couldn't care less to follow it themselves. I have filed a Radar: https://openradar.appspot.com/radar?id=5029274855669760

Wow, good detective work :) I wonder what their response is going to be. I actually like how Xcode prepends "Preferences - " to the window title list best, just to add my 2¢.
> I actually like how Xcode prepends "Preferences - " to the window title list best, just to add my 2¢.

Me too. Should we rename toolbarItemTitle to preferencePaneTitle now that we'll use it for both the toolbar item and the window title? Kinda makes sense, since it's not really a "toolbar item" when using the segmented controls.

I didn't find a way to implement the nice addition of "Preferences - " to the window list title. I am afraid this is a menu customization. We could offer a drop-in controller/service object that people tie to their window menu and that (1) removes the preference window from the regular window list, and (2) adds it as a custom entry above the regular list. After all, Xcode's window list does look different from e.g. Finder's: that's a strong indicator for me that they replaced the system default functionality.

Or we set the window title to "Preferences - (preferencePaneTitle)" and replace the titlebar's label with a custom one that drops the prefix.

That actually makes more sense, as you would want the improved title in the window list and Mission Control too. But I think we should think about the problem a little bit more before implementing this, and it should obviously be opt-in.

Inspecting Xcode windows, it actually does not use the "Preferences - " prefix in the actual window title. Folks in a Slack channel proposed applying the changes during menu validation.
2025-04-01T06:40:24.315221
2020-06-15T08:04:17
638633786
{ "authors": [ "ashwinvis", "maehr", "mourarthur", "sindresorhus" ], "license": "CC0-1.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10693", "repo": "sindresorhus/awesome", "url": "https://github.com/sindresorhus/awesome/pull/1798" }
gharchive/pull-request
Add Scientific Writing

By @writing-resources, @ashwinvis and @maehr. Discussion can be found here: https://github.com/sindresorhus/awesome/pull/1700

https://github.com/writing-resources/awesome-scientific-writing

Open source tools for painless academic writing. This should be included, because academic writing is a hassle.

PRs:
- https://github.com/sindresorhus/awesome/pull/1735
- https://github.com/sindresorhus/awesome/pull/1635
- https://github.com/sindresorhus/awesome/pull/1644

By submitting this pull request I confirm I've read and complied with the below requirements 🖖

Please read it multiple times. I spent a lot of time on these guidelines and most people miss a lot.

Requirements for your pull request

- Don't waste my time. Do a good job, adhere to all the guidelines, and be responsive.
- You have to review at least 2 other open pull requests. Try to prioritize unreviewed PRs, but you can also add more comments to reviewed PRs. Go through the below list when reviewing. This requirement is meant to help make the Awesome project self-sustaining. Comment here which PRs you reviewed. You're expected to put a good effort into this and to be thorough. Look at previous PR reviews for inspiration.
- You have read and understood the instructions for creating a list.
- This pull request has a title in the format Add Name of List.
  - ✅ Add Swift
  - ✅ Add Software Architecture
  - ❌ Update readme.md
  - ❌ Add Awesome Swift
  - ❌ Add swift
  - ❌ Adding Swift
  - ❌ Added Swift
- Your entry here should include a short description about the project/theme of the list. It should not describe the list itself. The first character should be uppercase and the description should end in a dot. It should be an objective description and not a tagline or marketing blurb.
  - ✅ - [iOS](…) - Mobile operating system for Apple phones and tablets.
  - ✅ - [Framer](…) - Prototyping interactive UI designs.
  - ❌ - [iOS](…) - Resources and tools for iOS development.
  - ❌ - [Framer](…)
  - ❌ - [Framer](…) - prototyping interactive UI designs
- Your entry should be added at the bottom of the appropriate category.
- The suggested Awesome list complies with the below requirements.

Requirements for your Awesome list

- Has been around for at least 30 days. That means 30 days from either the first real commit or when it was open-sourced. Whatever is most recent.
- Don't open a Draft / WIP pull request while you work on the guidelines. A pull request should be 100% ready and should adhere to all the guidelines when you open it.
- Run awesome-lint on your list and fix the reported issues. If there are false-positives or things that cannot/shouldn't be fixed, please report it.
- The default branch should be named main, not master.
- Includes a succinct description of the project/theme at the top of the readme. (Example)
  - ✅ Mobile operating system for Apple phones and tablets.
  - ✅ Prototyping interactive UI designs.
  - ❌ Resources and tools for iOS development.
  - ❌ Awesome Framer packages and tools.
- It's the result of hard work and the best I could possibly produce. If you have not put in considerable effort into your list, your pull request will be immediately closed.
- The repo name of your list should be in lowercase slug format: awesome-name-of-list.
  - ✅ awesome-swift
  - ✅ awesome-web-typography
  - ❌ awesome-Swift
  - ❌ AwesomeWebTypography
- The heading title of your list should be in title case format: # Awesome Name of List.
  - ✅ # Awesome Swift
  - ✅ # Awesome Web Typography
  - ❌ # awesome-swift
  - ❌ # AwesomeSwift
- Non-generated Markdown file in a GitHub repo.
- The repo should have awesome-list & awesome as GitHub topics. I encourage you to add more relevant topics.
- Not a duplicate. Please search for existing submissions.
- Only has awesome items. Awesome lists are curations of the best, not everything.
- Does not contain items that are unmaintained, have an archived repo, are deprecated, or are missing docs. If you really need to include such items, they should be in a separate Markdown file.
- Includes a project logo/illustration whenever possible. Either centered, fullwidth, or placed at the top-right of the readme. (Example) The image should link to the project website or any relevant website. The image should be high-DPI. Set it to maximum half the width of the original image.
- Entries have a description, unless the title is descriptive enough by itself. It rarely is though.
- Includes the Awesome badge. Should be placed on the right side of the readme heading. Can be placed centered if the list has a centered graphics header. Should link back to this list.
- Has a Table of Contents section. Should be named Contents, not Table of Contents. Should be the first section in the list. Should only have one level of nested lists, preferably none.
- Has an appropriate license. We strongly recommend the CC0 license, but any Creative Commons license will work. Tip: You can quickly add it to your repo by going to this URL: https://github.com/<user>/<repo>/community/license/new?branch=master&template=cc0-1.0 (replace <user> and <repo> accordingly). A code license like MIT, BSD, Apache, GPL, etc, is not acceptable. Neither are WTFPL and Unlicense. Place a file named license or LICENSE in the repo root with the license text. Do not add the license name or text to the readme. GitHub already shows the license name at the top of the repo. To verify that you've read all the guidelines, please comment on your pull request with just the word unicorn.
- Has contribution guidelines. The file should be named contributing.md. Casing is up to you.
- Has consistent formatting and proper spelling/grammar. The link and description are separated by a dash. Example: - [AVA](…) - JavaScript test runner. The description starts with an uppercase character and ends with a period. Consistent and correct naming. For example, Node.js, not NodeJS or node.js.
- Doesn't include a Travis badge. You can still use Travis for list linting, but the badge has no value in the readme.
- Doesn't include an Inspired by awesome-foo or Inspired by the Awesome project kinda link at the top of the readme. The Awesome badge is enough.

Go to the top and read it again.

> Do not add the license name or text to the readme. GitHub already shows the license name at the top of the repo.

> Do not add the license name or text to the readme. GitHub already shows the license name at the top of the repo.

Thank you @sindresorhus, we fixed it accordingly.

> The default branch should be named main, not master.

This is a very recent addition to the guidelines, so I don't know how strict @sindresorhus wants to be about it when it comes to previously opened PRs. Doesn't hurt to change though 😃

@mourarthur Done

Tweet: https://twitter.com/awesome__re/status/1274373790714048513
2025-04-01T06:40:24.318131
2020-07-17T19:03:42
659539372
{ "authors": [ "morenoh149" ], "license": "CC0-1.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10694", "repo": "sindresorhus/awesome", "url": "https://github.com/sindresorhus/awesome/pull/1826" }
gharchive/pull-request
Add Outsourcing

https://github.com/morenoh149/awesome-outsourcing

The business practice of hiring a party outside a company to perform services and create goods. It should be added because it provides valuable information if you are thinking about outsourcing software development: where do you start, how does this process work, etc.

I reviewed #1817, #1816 and #1813. I confirm I've read and complied with the awesome requirements. (I made a new PR because I accidentally deleted the first fork; see https://github.com/sindresorhus/awesome/pull/1825.)

unicorn
2025-04-01T06:40:24.330902
2018-03-21T18:42:28
307371940
{ "authors": [ "IssuehuntBot", "ttamj" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10695", "repo": "sindresorhus/global-dirs", "url": "https://github.com/sindresorhus/global-dirs/issues/3" }
gharchive/issue
New Yarn global directory on Windows

With the release of Yarn 1.5.1, Yarn's global installation directory on Windows has changed. The global module storage path now includes "Data", such as AppData/Local/Yarn/Data/global/node_modules. Reference: https://github.com/yarnpkg/yarn/pull/5336

This results in global-dirs reporting the incorrect global directory with Yarn 1.5.1 on Windows. Subsequently, is-installed-globally fails to report global installations with Yarn 1.5.1 on Windows.

@issuehunt has funded $30.00 to this issue. Submit pull request via IssueHunt to receive this reward. Want to contribute? Chip in to this issue via IssueHunt. Checkout the IssueHunt Issue Explorer to see more funded issues. Need help from developers? Add your repository on IssueHunt to raise funds.

@sindresorhus has rewarded $27.00 to @tiagodanin. See it on IssueHunt

:moneybag: Total deposit: $30.00
:tada: Repository reward (0%): $0.00
:wrench: Service fee (10%): $3.00
2025-04-01T06:40:24.344636
2016-03-14T19:22:32
140773428
{ "authors": [ "Zertz", "sindresorhus" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10696", "repo": "sindresorhus/node-module-boilerplate", "url": "https://github.com/sindresorhus/node-module-boilerplate/pull/10" }
gharchive/pull-request
use travis-ci container-based infrastructure

Allows for faster builds. See docs for details.

No longer needed: "For repos we recognize on or after 2015-01-01, linux builds are sent to our container-based infrastructure."
2025-04-01T06:40:24.355514
2021-07-02T17:21:16
935945146
{ "authors": [ "fregante", "kidonng", "kiprasmel", "yakov116" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10697", "repo": "sindresorhus/refined-github", "url": "https://github.com/sindresorhus/refined-github/issues/4536" }
gharchive/issue
fix contribution graph's time interval selection

Visiting a profile on GitHub and selecting a contribution year for the contribution graph sets the interval in the URL query to the current month, or the last month of the given year. E.g. for https://github.com/kiprasmel:

- 2020: selects https://github.com/kiprasmel?tab=overview&from=2020-12-01&to=2020-12-31 - last month of 2020
- 2021 (current year): selects https://github.com/kiprasmel?tab=overview&from=2021-07-01&to=2021-07-02 - current month

It seems the expected UX would be to select the whole year? One would need to change the year selection's <a href attribute and replace the given start date's month with the first month, 01. I can try creating a PR if we agree on this fix.
2025-04-01T06:40:24.367941
2022-07-13T11:45:06
1303317931
{ "authors": [ "efeyakinci" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10698", "repo": "single-spa/single-spa-react", "url": "https://github.com/single-spa/single-spa-react/issues/147" }
gharchive/issue
UI Frameworks Not Working Due to useExternalStore hook being undefined

Hi, I am attempting to use ChakraUI with Single-SPA, but I am unable to load ChakraUI, as the useExternalStore hook seems to be undefined.

The issue was with the Single-SPA playground using an older version of React and overriding the import.
2025-04-01T06:40:24.374784
2024-07-10T17:08:49
2401279395
{ "authors": [ "HarshadaGirase", "singodiyashubham87" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10699", "repo": "singodiyashubham87/Draw-it-out", "url": "https://github.com/singodiyashubham87/Draw-it-out/issues/465" }
gharchive/issue
[Feature Request]: Add Hover Effect on Chatbot Icon

Is there an existing issue for this?

- [X] I have searched the existing issues

Feature Description

Requesting the addition of a hover effect on the chatbot icon to enhance user experience and provide visual feedback when the icon is interacted with.

Use Case

Adding a hover effect will make the chatbot icon more interactive and engaging, providing users with a clear indication that the icon is clickable and interactive.

Benefits

- Improved user engagement and interaction with the chatbot.
- Enhanced visual feedback indicating the icon's interactivity.
- A more polished and professional look for the chatbot feature.

Add ScreenShots

Priority

High

Record

- [X] I have read the Contributing Guidelines
- [X] I'm a GSSOC'24 contributor
- [X] I'm a SSOC'24 contributor
- [X] I want to work on this issue

This can be done in #464 only, no need for a separate issue to add a hover effect.
2025-04-01T06:40:24.387338
2023-04-05T14:33:14
1655748036
{ "authors": [ "mohammadx0098", "sarahmish" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10700", "repo": "sintel-dev/Orion", "url": "https://github.com/sintel-dev/Orion/issues/405" }
gharchive/issue
TadGan multivariate => ValueError: Axis 2 is outside the dimensions of X (1)

Orion version: 0.4.1
Python version: 3.7.16
Operating System: Windows

Description

I am writing to seek your assistance with an issue I have been encountering while using the orion-ml package. I am trying to use the package to perform multi-variable anomaly detection on my dataset, which contains data sampled at a frequency of 20 Hz for a period of 1 month, resulting in 72000 data points per hour. The dataset consists of 30 variables, and I would like to perform anomaly detection on 1-hour windows of the data.

Data format:

timestamp,0,1,2,3
1220000000,0.001714,-0.000164,0.002266,0.000921
1220000003,0.001731,-0.00018,0.002233,0.000516
1220000006,0.001731,-0.000213,0.002266,0.001423
1220000009,0.001714,-0.000213,0.00225,0.000872
...

What I Did

To do this, I am using the following code:

from orion import Orion

hyperparameters = {
    "mlprimitives.custom.timeseries_preprocessing.rolling_window_sequences#1": {
        'window_size': 72000,
    },
    'orion.primitives.tadgan.TadGAN#1': {
        'epochs': 5,
        'verbose': True,
        'input_shape': [72000, 30],
    }
}

orion = Orion(
    pipeline='tadgan',
    hyperparameters=hyperparameters
)

orion.fit(data)

However, when I run this code, I receive the following error message: "ValueError: Axis 2 is outside the dimensions of X (1)." I have tried to troubleshoot this error on my own, but I have been unable to identify the cause of the issue. I was hoping that you could help me understand what is causing this error and how I can resolve it. Additionally, I have a few questions regarding the functionality of the orion-ml package that I hope you could help me with.

Hi @mohammadx0098! Thank you for using Orion! The issue that you are seeing is because we need to adjust the interval hyperparameter. I noticed that the time between one timestamp and another is 3 seconds, so I would set interval=3.

hyperparameters = {
    "mlprimitives.custom.timeseries_preprocessing.time_segments_aggregate#1": {
        'interval': 3
    },
    "mlprimitives.custom.timeseries_preprocessing.rolling_window_sequences#1": {
        'window_size': 72000,
    },
    'orion.primitives.tadgan.TadGAN#1': {
        'epochs': 5,
        'verbose': True,
        'input_shape': [72000, 30],
    }
}

Let me know if this solves your issue.

@sarahmish thanks, my problem is solved. I wonder whether your framework can support multivariate signals, like this dataset with 30 variables? You have mentioned before that your framework can only handle one target column output. Does it mean that I should run my code for each target column?

Great! Yes, precisely. Orion currently only supports detecting anomalies in one target column, therefore, if you would like to detect anomalies in multiple columns, you need to run a pipeline for each target column.

> Great! Yes, precisely. Orion currently only supports detecting anomalies in one target column, therefore, if you would like to detect anomalies in multiple columns, you need to run a pipeline for each target column.

So I should add these lines for that? target_column and target_shape:

"mlprimitives.custom.timeseries_preprocessing.rolling_window_sequences#1": {
    'target_column': 0
},
'orion.primitives.tadgan.TadGAN#1': {
    'epochs': 5,
    'verbose': True,
    'input_shape': [100, 25],
    'target_shape': [100, 1],
}

Almost. You will need to go through the columns you have.
So in the first run:

"mlprimitives.custom.timeseries_preprocessing.rolling_window_sequences#1": {
    'target_column': 0
}

In the second run, you will change it to the second column:

"mlprimitives.custom.timeseries_preprocessing.rolling_window_sequences#1": {
    'target_column': 1
}

and so forth.

Thanks :)
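Putting the thread's advice together, a minimal sketch of the per-column loop (assuming the same data DataFrame and tadgan pipeline as above; the detect call and the column count are assumptions based on Orion's fit/detect workflow, not code from this thread):

from orion import Orion

all_anomalies = {}
for col in range(30):  # assumed: one run per target column in the 30-variable dataset
    hyperparameters = {
        "mlprimitives.custom.timeseries_preprocessing.time_segments_aggregate#1": {
            'interval': 3,
        },
        "mlprimitives.custom.timeseries_preprocessing.rolling_window_sequences#1": {
            'window_size': 72000,
            'target_column': col,
        },
        'orion.primitives.tadgan.TadGAN#1': {
            'epochs': 5,
            'input_shape': [72000, 30],
            'target_shape': [72000, 1],
        },
    }
    orion = Orion(pipeline='tadgan', hyperparameters=hyperparameters)
    orion.fit(data)
    all_anomalies[col] = orion.detect(data)  # anomalies found for this target column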
2025-04-01T06:40:24.408937
2016-11-11T16:18:38
188795224
{ "authors": [ "AlekSi", "sirupsen" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10701", "repo": "sirupsen/logrus", "url": "https://github.com/sirupsen/logrus/issues/443" }
gharchive/issue
Please update changelog for v0.11 https://github.com/Sirupsen/logrus/blob/master/CHANGELOG.md wasn't updated. Done!
2025-04-01T06:40:24.412782
2019-02-10T22:02:42
408585954
{ "authors": [ "divideby2", "maxiaoba" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10702", "repo": "sisl/AdaptiveStressTestingToolbox", "url": "https://github.com/sisl/AdaptiveStressTestingToolbox/issues/9" }
gharchive/issue
missing training file while running TestCases scripts

File "CartPole_MCTSRS.py", line 61, in <module>
    data = joblib.load("Data/Train/itr_50.pkl")
File "/usr/local/anaconda3/lib/python3.6/site-packages/joblib/numpy_pickle.py", line 590, in load
    with open(filename, 'rb') as f:
FileNotFoundError: [Errno 2] No such file or directory: 'Data/Train/itr_50.pkl'

Fixed.