Dataset columns:

- added — string (date), min 2025-04-01 04:05:38, max 2025-04-01 07:14:06
- created — timestamp[us] (date), min 2001-10-09 16:19:16, max 2025-01-01 03:51:31
- id — string, lengths 4 to 10
- metadata — dict
- source — string, 2 classes
- text — string, lengths 0 to 1.61M
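A minimal sketch of streaming records with this schema out of one of the gzipped JSONL shards named in the `provenance` fields below (e.g. gharchive-dolma-0000.json.gz). The field layout is inferred from this page and the local path is hypothetical; this is not a documented loader for the dataset.

```rust
use std::fs::File;
use std::io::{BufRead, BufReader};

use flate2::read::GzDecoder; // flate2 = "1"
use serde::Deserialize; // serde = { version = "1", features = ["derive"] }

// Field names mirror the schema above; `metadata` stays raw JSON since its
// keys (authors, license, provenance, repo, url) can vary per record.
#[derive(Debug, Deserialize)]
struct Record {
    added: String,
    created: String,
    id: String,
    metadata: serde_json::Value,
    source: String, // "gharchive/issue" or "gharchive/pull-request"
    text: String,
}

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Hypothetical local path to one shard of the dataset.
    let file = File::open("gharchive-dolma-0000.json.gz")?;
    let reader = BufReader::new(GzDecoder::new(file));

    for line in reader.lines() {
        let record: Record = serde_json::from_str(&line?)?;
        println!("{} [{}] {} chars", record.id, record.source, record.text.len());
    }
    Ok(())
}
```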
2025-04-01T06:39:55.943526
2022-09-14T09:02:48
1372621538
{ "authors": [ "cavasinf", "oyejorge" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:9400", "repo": "orchidjs/tom-select", "url": "https://github.com/orchidjs/tom-select/issues/469" }
gharchive/issue
Plugins clear button and remove button don't work well together

Describe the bug
When using the clear button and remove button plugins together with a multi-line multiple select, the clear button is not well placed and cannot be differentiated from the other remove buttons.

To Reproduce
- Have the clear button enabled
- Have the remove button enabled
- Select multiple items
- Have a select spanning multiple lines
- Hover over the select

Expected behavior
The clear button should be more recognizable than the remove buttons. It should be at the same height as the dropdown caret. Maybe the clear button needs its own place that items cannot occupy? Green lines mark the safe zone for the clear button: select items will never cross that line, and the dropdown caret should not cross it either.

Additional context
OS: Windows; Browser: Firefox; Version: 2.0.3; Bootstrap: 5; Theme/Template: www.Tabler.io

**To Reproduce** Create an example on JSFiddle, CodePen or a similar service and outline the steps for reproducing the bug.

Here is a demo: https://jsfiddle.net/florian_allsoftware/f0wkpyd9/8/

Using: Symfony (only for class naming), Tabler.io (template), TomSelect. It works like this: Symfony adds the form-select class to the select; Tabler.io styles form-select by adding the caret at the end; TomSelect doesn't use the caret because the single class is absent (it's multi here), but Tabler.io adds it anyway.
2025-04-01T06:39:55.947603
2021-02-15T22:13:31
808848340
{ "authors": [ "orcmid" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:9401", "repo": "orcmid/docEng", "url": "https://github.com/orcmid/docEng/issues/3" }
gharchive/issue
Scavenge About/Construction and Demonstration of docs/ Publishing

The Miser Project has an effort to spiral the construction of its docs/ and the understanding of the default template and other aspects of the mapping between docs/ and the published orcmid.github.io/miser pages. Bring those components here and then set them up.

The use of spiraling may need to be explained. Add that to the README.md page for the project.

Check out GitHub Community Guidelines.

I need to find the screen captures that go with documenting the plain case. I also need to find the Markdown cheat sheet that I was using to work through the cases.
2025-04-01T06:39:55.948985
2024-03-18T11:32:51
2191955828
{ "authors": [ "casey", "nowherekai" ], "license": "CC0-1.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:9402", "repo": "ordinals/ord", "url": "https://github.com/ordinals/ord/pull/3307" }
gharchive/pull-request
ignore invalid instruction of rune commit tx

Fixes https://github.com/ordinals/ord/issues/3306

We get transactions from Bitcoin Core, so I don't think it's possible to encounter a transaction with an invalid instruction. Have you found a case where it is possible?
2025-04-01T06:39:55.978759
2015-04-08T13:51:22
67135243
{ "authors": [ "luigidellaquila", "tglman" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:9404", "repo": "orientechnologies/orientdb", "url": "https://github.com/orientechnologies/orientdb/issues/3892" }
gharchive/issue
Traverse in some cases returns more values with an additional condition than without

As the title says.

@tglman do you still have the test case...?

Fixed in commit 35ed13f8a68fd571bda2a7b19df0de2b0cdc34a7 on the develop branch. The issue was related to field deserialization.
2025-04-01T06:39:55.984484
2015-08-07T07:33:06
99596161
{ "authors": [ "luigidellaquila", "lvca", "matanshukry", "nagarajasr", "randikaf" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:9405", "repo": "orientechnologies/orientdb", "url": "https://github.com/orientechnologies/orientdb/issues/4751" }
gharchive/issue
Functions with duplicate names

OrientDB can have functions with the same name, but when retrieving a function from the Java API using the metadata as below, it needs the function name. How can I know which function is retrieved by the code below? Is there a way to retrieve a function by its RID? db.getMetadata().getFunctionLibrary().getFunction("sum");

Are you sure OrientDB can have functions with the same name? As far as I can tell from the code, it's not possible.

@randikaf what's the purpose of having 2 functions with the same name?

@matanshukry, yes — if you try from OrientDB Studio, you'll be able to create one. @lvca Actually this happened accidentally because of an issue in my code. But the problem is that OrientDB does not validate function creation against existing names. Can I use the function name to retrieve the function consistently?

@randikaf you're right, my bad; I was looking at a different function. @lvca - IMO, function names should be unique. Now I'm not too familiar with indices, but we can create (by default) a unique index on OFunction.name; that should be simple enough. What do you think?

@matanshukry, @lvca Are you planning to fix this?

I also observed that when importing only functions from an export file using the '-merge' option, duplicate function entries are created. I'm not sure that creating an index on OFunction will solve the problem; I suspect it would need a change to the import command to overwrite the function on import.

@nagarajasr - I am not familiar with the import process, but doesn't the process add entries to the OFunction table through the index? That is, shouldn't a unique index throw an error on duplicate functions? I think it's the best option, since even if you fix the import function, one can still create multiple functions with the same name, which I don't think makes much sense. Unless you're planning on using parameter overloading, but then you'd need a more complex uniqueness process.

@randikaf - I can take a look at it; I'm just looking for confirmation from the Orient team. They need to decide how they want to fix it (unique index? overwrite? something else?).

+1 for the unique index (in v2.2 IMHO)

@matanshukry yes, but I would expect the import process to overwrite an existing function if the "-merge" option is specified.
2025-04-01T06:39:55.987294
2015-11-13T09:45:22
116734627
{ "authors": [ "laa", "lvca" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:9406", "repo": "orientechnologies/orientdb", "url": "https://github.com/orientechnologies/orientdb/issues/5309" }
gharchive/issue
Rewrite functionality of tracking of changes inside of atomic operation using RoaringBitmaps

In order to track changes on pages inside an atomic operation we need some implementation of a sparse bitset. The data structures typically proposed as sparse bitsets have very poor sequential performance — for example, to check whether the n-th bit is set we need an algorithm with O(n) complexity, which is unacceptable. We also cannot use a plain byte array, because that means that in order to change one page we need to consume 2 * page_size bytes and store page_size bytes in the WAL for a single-byte change. That is why we used an augmented rb-tree for tracking changes in one-dimensional intervals, but this data structure is very resource-consuming in the case of big transactions and also consumes more space than the RoaringBitmap implementation. For details of the data structure see http://arxiv.org/pdf/1402.6407v9.pdf.

+1

Not needed any more
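For context, a minimal sketch of the core roaring-bitmap idea from the paper linked above: 32-bit values are partitioned by their high 16 bits into containers, each stored as a sorted array while sparse and as a packed bitmap once dense, so membership checks avoid the O(n) scan the issue complains about. This is an illustration of the technique, not OrientDB's actual implementation.

```rust
use std::collections::BTreeMap;

/// A container holds the low 16 bits of values sharing one high-16-bit key.
enum Container {
    /// Sorted array of low 16-bit values; compact while sparse.
    Array(Vec<u16>),
    /// 2^16-bit bitmap (1024 x u64); compact once dense.
    Bitmap(Box<[u64; 1024]>),
}

const ARRAY_MAX: usize = 4096; // the paper's array -> bitmap threshold

struct Roaring {
    containers: BTreeMap<u16, Container>,
}

impl Roaring {
    fn new() -> Self {
        Roaring { containers: BTreeMap::new() }
    }

    fn insert(&mut self, value: u32) {
        let (hi, lo) = ((value >> 16) as u16, value as u16);
        let c = self
            .containers
            .entry(hi)
            .or_insert_with(|| Container::Array(Vec::new()));
        // Upgrade a full array container to a bitmap before inserting.
        if let Container::Array(v) = &*c {
            if v.len() >= ARRAY_MAX {
                let mut bits = Box::new([0u64; 1024]);
                for &x in v {
                    bits[(x >> 6) as usize] |= 1 << (x & 63);
                }
                *c = Container::Bitmap(bits);
            }
        }
        match c {
            Container::Array(v) => {
                if let Err(pos) = v.binary_search(&lo) {
                    v.insert(pos, lo);
                }
            }
            Container::Bitmap(bits) => bits[(lo >> 6) as usize] |= 1 << (lo & 63),
        }
    }

    /// Membership is two cheap lookups — no O(n) walk over all set bits.
    fn contains(&self, value: u32) -> bool {
        let (hi, lo) = ((value >> 16) as u16, value as u16);
        match self.containers.get(&hi) {
            Some(Container::Array(v)) => v.binary_search(&lo).is_ok(),
            Some(Container::Bitmap(bits)) => (bits[(lo >> 6) as usize] >> (lo & 63)) & 1 == 1,
            None => false,
        }
    }
}

fn main() {
    let mut set = Roaring::new();
    set.insert(7);
    set.insert(1 << 20);
    assert!(set.contains(7) && set.contains(1 << 20) && !set.contains(8));
    println!("ok");
}
```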
2025-04-01T06:39:55.989292
2015-04-27T13:08:50
71278529
{ "authors": [ "timfam", "vonwao" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:9407", "repo": "orionjs/orion", "url": "https://github.com/orionjs/orion/issues/119" }
gharchive/issue
Telescope and Orion

Has anyone looked at how to possibly integrate Orion with Telescope? (Point the telescope towards Orion, lol.) You may ask "why" — I'm building a community-based platform, and instead of reinventing the wheel I would rather just reuse some of that Telescope code. What I imagine is replacing the security, user, and permission system of Telescope with Orion's, because I want the nice configurable admin panel that Orion offers.

@vonwao point the Telescope team to Orion and sell them on the benefit — be an Orion evangelist :)

Yes, good idea :) I decided to close this issue for now because I thought I should do some more research first and not introduce too much noise into the issues with new ideas.
2025-04-01T06:39:56.002205
2017-02-02T22:28:47
205016100
{ "authors": [ "ahsanwtc", "dbrrt", "rwaness", "simsim0709" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:9408", "repo": "orionsoft/meteor-apollo-accounts", "url": "https://github.com/orionsoft/meteor-apollo-accounts/issues/42" }
gharchive/issue
initAccounts is not a function

Hello, I'm trying to use meteor-apollo-accounts with the tutorial http://dev.apollodata.com/core/meteor.html and your README.md. So now I have:

import { createApolloServer } from 'meteor/apollo';
import { makeExecutableSchema, addMockFunctionsToSchema } from 'graphql-tools';
import {initAccounts} from 'meteor/nicolaslopezj:apollo-accounts'
import {loadSchema, getSchema} from 'graphql-loader'
import { typeDefs } from '/imports/api/schema';
import { resolvers } from '/imports/api/resolvers';

const options = {}
// Load all accounts related resolvers and type definitions into graphql-loader
initAccounts(options)
// Load all your resolvers and type definitions into graphql-loader
loadSchema({typeDefs, resolvers})
// Gets all the resolvers and type definitions loaded in graphql-loader
const schema = getSchema()
const executableSchema = makeExecutableSchema(schema)
createApolloServer({ executableSchema, });

When I launch the application, my server crashes:

W20170202-17:22:45.325(-5)? (STDERR) Note: you are using a pure-JavaScript implementation of bcrypt.
W20170202-17:22:45.325(-5)? (STDERR) While this implementation will work correctly, it is known to be
W20170202-17:22:45.326(-5)? (STDERR) approximately three times slower than the native implementation.
W20170202-17:22:45.326(-5)? (STDERR) In order to use the native implementation instead, run
W20170202-17:22:45.326(-5)? (STDERR)
W20170202-17:22:45.327(-5)? (STDERR) meteor npm install --save bcrypt
W20170202-17:22:45.327(-5)? (STDERR)
W20170202-17:22:45.327(-5)? (STDERR) in the root directory of your application.
W20170202-17:22:45.348(-5)? (STDERR) WARNING: npm peer requirements (for apollo) not installed:
W20170202-17:22:45.348(-5)? (STDERR) -<EMAIL_ADDRESS>installed<EMAIL_ADDRESS>needed
W20170202-17:22:45.349(-5)? (STDERR) -<EMAIL_ADDRESS>installed<EMAIL_ADDRESS>|| ^0.8.0 needed
W20170202-17:22:45.349(-5)? (STDERR) -<EMAIL_ADDRESS>installed<EMAIL_ADDRESS>needed
W20170202-17:22:45.349(-5)? (STDERR)
W20170202-17:22:45.350(-5)? (STDERR) Read more about installing npm peer dependencies:
W20170202-17:22:45.350(-5)? (STDERR) http://guide.meteor.com/using-packages.html#peer-npm-dependencies
W20170202-17:22:45.350(-5)? (STDERR)
W20170202-17:22:45.639(-5)? (STDERR) C:\Users\Erwan\AppData\Local\.meteor\packages\meteor-tool\1.4.2_3\mt-os.windows.x86_32\dev_bundle\server-lib\node_modules\fibers\future.js:280
W20170202-17:22:45.639(-5)? (STDERR) throw(ex);
W20170202-17:22:45.639(-5)? (STDERR) ^
W20170202-17:22:45.640(-5)? (STDERR)
W20170202-17:22:45.640(-5)? (STDERR) TypeError: initAccounts is not a function
W20170202-17:22:45.641(-5)? (STDERR) at server/main.js:19:2
W20170202-17:22:45.641(-5)? (STDERR) at Function.time (C:\Users\Erwan\Documents\Perso\Travaux\looc\api-server\.meteor\local\build\programs\server\profile.js:301:28)
W20170202-17:22:45.641(-5)? (STDERR) at C:\Users\Erwan\Documents\Perso\Travaux\looc\api-server\.meteor\local\build\programs\server\boot.js:304:13
W20170202-17:22:45.642(-5)? (STDERR) at C:\Users\Erwan\Documents\Perso\Travaux\looc\api-server\.meteor\local\build\programs\server\boot.js:345:5
W20170202-17:22:45.642(-5)? (STDERR) at Function.run (C:\Users\Erwan\Documents\Perso\Travaux\looc\api-server\.meteor\local\build\programs\server\profile.js:480:12)
W20170202-17:22:45.642(-5)? (STDERR) at C:\Users\Erwan\Documents\Perso\Travaux\looc\api-server\.meteor\local\build\programs\server\boot.js:343:11
=> Exited with code: 1
=> Your application is crashing. Waiting for file change.

I really don't understand. Is the tutorial up to date? Thanks in advance, and good job 👍 I really need it ^^

Sorry for the delay — initAccounts might not be exported, or may not exist in the version of apollo-accounts you have installed. I can help you with that.

@dbrrt thanks for your answer. So what should I do?

To figure out why "TypeError: initAccounts is not a function", check that there are no conflicts between package versions (graphql/graphql-express/graphql-tools). I'm running this repository's code on a WIP Apollo/GraphQL-based app, so it's working. I even tried it with React Native.

I am also getting the same issue. Did you manage to find the solution?

@ahsanwtc Here's my server.js file; I'm actually using initAccounts not directly from the Meteor library, but it should work in both cases:

import {makeExecutableSchema} from 'graphql-tools'
import {loadSchema, getSchema} from 'graphql-loader'
import {initAccounts} from '/imports/lib/accounts-gql/server'
import typeDefs from './schema'
import resolvers from './resolvers'
import {createApolloServer} from 'meteor/orionsoft:apollo'
import cors from 'cors'

const options = {}
// Load all accounts related resolvers and type definitions into graphql-loader
initAccounts(options)
// Load all your resolvers and type definitions into graphql-loader
loadSchema({typeDefs, resolvers})
// Gets all the resolvers and type definitions loaded in graphql-loader
const schema = getSchema()
const executableSchema = makeExecutableSchema(schema)
createApolloServer({ schema: executableSchema }, { configServer (graphQLServer) { graphQLServer.use(cors()) } })

@dbrrt thanks for the reply. Can you give the full path to initAccounts? I am not able to find the lib folder. It's not able to find the path '/imports/lib/accounts-gql/server'

I "repacked" the libraries (client and server) under the lib path, so that's just my project. But I think that if you can't use initAccounts, it's because of an export or something like that.

Thank you for your feedback. I already shared my code... it's all here (just a test to integrate Apollo into another app). In fact there is a dependency issue... I will investigate, but I'm pretty sure that's not the problem. If you can (of course), I'm very curious about your test result concerning the Meteor initAccounts method. Thanks a lot ;)

@rwaness Here's an extract of my package.json, if that can help: "dependencies": { "apollo-client": "^0.5.0", "babel-runtime": "^6.20.0", "bcrypt": "^0.8.7", "body-parser": "^1.16.0", "classnames": "^2.2.5", "cors": "^2.8.1", "express": "^4.14.0", "graphql": "^0.7.0", "graphql-loader": "^1.0.1", "graphql-server-express": "^0.4.3", "graphql-tools": "^0.8.0", "graphql-typings": "0.0.1-beta-2", "invariant": "^2.2.1", "meteor-node-stubs": "~0.2.0", "moment": "^2.17.1", "node-sass": "^3.13.1", "normalize.css": "^5.0.0", "react": "^15.3.1", "react-addons-css-transition-group": "~15.4.0", "react-addons-pure-render-mixin": "^15.3.1", "react-apollo": "^0.5.16", "react-dom": "^15.3.1", "react-komposer": "^1.13.1", "react-mounter": "^1.2.0", "react-redux": "^5.0.2", "react-router": "^3.0.2", "react-slick": "^0.14.6", "redux": "^3.6.0", "semantic-ui-css": "^2.2.4", "semantic-ui-react": "^0.64.3" }, "devDependencies": { "babel-plugin-module-resolver": "^2.4.0", "babel-plugin-transform-class-properties": "^6.19.0", "babel-plugin-transform-decorators-legacy": "^1.3.4", "semantic-ui": "^2.2.7", "standard": "^8.5.0" } — and I didn't have time yet to reproduce your issue with initAccounts (directly using the Meteor lib); I'll try ASAP.

@rwaness have you solved the problem? I think it's because of the package version installed in your Meteor app. You need to specify the version of this package; if not, version 1.0.1 is installed instead. Try this: meteor add<EMAIL_ADDRESS>

All good with initAccounts now, thanks to you. Unfortunately, I found a second issue. I followed this tutorial: https://blog.orionsoft.io/using-meteor-accounts-with-apollo-and-react-df3c89b46b17 and when I want to test the application, the server crashes and says:

TypeError: Auth is not a function
at meteorInstall.server.schema.Mutation.index.js (server/schema/Mutation/index.js:5:4)
at fileEvaluate (packages\modules-runtime.js:197:9)
at require (packages\modules-runtime.js:120:16)
at C:\Users\erwan\Documents\Travaux\meteor\loocc\tests\api-server\.meteor\local\build\programs\server\app\app.js:220:1
at C:\Users\erwan\Documents\Travaux\meteor\loocc\tests\api-server\.meteor\local\build\programs\server\boot.js:303:34
at Array.forEach (native)
at Function._.each._.forEach (C:\Users\erwan\AppData\Local\.meteor\packages\meteor-tool\1.4.3_2\mt-os.windows.x86_32\dev_bundle\server-lib\node_modules\underscore\underscore.js:79:11)
at C:\Users\erwan\Documents\Travaux\meteor\loocc\tests\api-server\.meteor\local\build\programs\server\boot.js:128:5
at C:\Users\erwan\Documents\Travaux\meteor\loocc\tests\api-server\.meteor\local\build\programs\server\boot.js:352:5
at Function.run (C:\Users\erwan\Documents\Travaux\meteor\loocc\tests\api-server\.meteor\local\build\programs\server\profile.js:510:12)

The file server/schema/Mutation/index.js looks like:

import {SchemaMutations as Auth} from 'meteor/nicolaslopezj:apollo-accounts'
export default `
type Mutation {
${Auth()}
}
`

Is it again a version issue? How can we know the right version to use? Good job guys, keep it up ;)

@rwaness Yeah, I think that API version is old. It has changed. You'd better see this install guide. Or, mine is like below:

import { createApolloServer } from 'meteor/apollo';
import { initAccounts } from 'meteor/nicolaslopezj:apollo-accounts';
import { loadSchema, getSchema } from 'graphql-loader';
import { makeExecutableSchema } from 'graphql-tools';
import cors from 'cors';
import { typeDefs } from './schema';
import { resolvers } from './resolvers';

initAccounts({});
loadSchema({ typeDefs, resolvers });
const schema = makeExecutableSchema(getSchema());

export default () => {
createApolloServer({ schema }, { configServer(graphQLServer) { graphQLServer.use(cors()); }, });
};

Yes, it works with the boilerplate: https://github.com/orionsoft/server-boilerplate Thanks ;)
2025-04-01T06:39:56.010095
2016-08-04T06:02:10
169297634
{ "authors": [ "MarcusHurney", "mauriciord", "orktes", "pillowface" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:9409", "repo": "orktes/atom-react", "url": "https://github.com/orktes/atom-react/issues/184" }
gharchive/issue
react highlighting and autocomplete not working

Hi, yesterday it worked but now it doesn't. Why is that? Can you help me fix it?

I'm experiencing the same problem, you aren't alone!

Same problem here.

I installed the package called language-babel. That works perfectly fine. Another solution is to use this package: https://atom.io/packages/language-javascript-jsx. Both language-babel and language-javascript-jsx will solve this issue until the react package is fixed. Make sure you use just one of these and disable the others.

I tried that, but it's not autocompleting yet. Best regards, Maurício R. Duarte.

I recommend using language-babel: https://atom.io/packages/language-babel. It is similar to the react package since it has autocomplete and highlighting. The other package doesn't have autocomplete.

I agree, not having autocomplete is a big disadvantage of the jsx language package.

Should be fixed: https://github.com/orktes/atom-react/issues/188
2025-04-01T06:39:56.061441
2023-12-24T06:03:54
2054995864
{ "authors": [ "avaneev", "orlp" ], "license": "Zlib", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:9410", "repo": "orlp/polymur-hash", "url": "https://github.com/orlp/polymur-hash/issues/8" }
gharchive/issue
Universal?

Universal hashing implies an N/M collision estimate for all collision test sets, and not the (N^2)/M/2 that can be seen in SMHasher tests for Polymur. Just saying. https://www.cs.cmu.edu/~avrim/451f11/lectures/lect1004.pdf

The universal hash function for strings is a unicorn (that is, it doesn't exist). Any fixed-memory universal hash function must be almost-universal. Since no one makes a hash function that allocates 2^64 bytes of memory to store its data, it is implied that any hash function which hashes strings is therefore almost-universal. When I say that PolymurHash is "a universal hash function" in the introduction I don't mean to claim that it holds the "Universal" property over the set of all strings, rather that it is a hash function with universality claims. I immediately follow it up one sentence later with the exact collision probabilities claimed. For example, the Wikipedia article on the Poly1305 hash function starts with "Poly1305 is a universal hash family". Since Poly1305 hashes strings and also doesn't allocate tons of memory, we see that in fact its collision probability scales with the input size.

Well, it would be fair to call it an "almost universal hash function" then. Or most authors of "good" SMHasher hash functions could come up with proofs. "Universal hashing" is a brand, and it's a bit unfair to exploit it while not actually intending to prove the "original" universality requirement.

> Or most authors of "good" SMHasher hash functions could come up with proofs.

The proofs of what I claim are in this repository.

> "Universal hashing" is an academic brand, and it's a bit unfair to exploit it not actually having an intention to prove the "original" universality requirement.

I don't exploit anything; you simply take the most unreasonably strict reading of the word "Universal" that no real-world hash function for strings ever satisfies, and then try to apply it to a hash function over strings. Almost-universality isn't something I just came up with. It is just as academic. It has formal definitions. For example the PolyR hash function paper also opens with "We describe a universal hash-function family, PolyR". Guess what? Yes, that is also colloquial and also used to mean almost-universality.

> Universal hashing implies N/M collision estimate for all collision test sets, and not (N^2)/M/2 like can be seen in SMHasher tests for Polymur.

Also, this is a misunderstanding of Polymur's security claim. Polymur doesn't have an "N^2/M/2" collision probability, assuming we use N to mean the number of strings in the input space, and M to mean 2^64 like in the paper you linked. Polymur's collision probability bound scales with the length of the input string, not the number of strings in your input set. This is an exponential difference: there are 256^1024 binary strings of length 1024, yet Polymur's collision probability only goes up to 256 * 2^-60.2. I have no idea where you got the N^2 term from, or how you can claim that it "can be seen in SMHasher tests". Perhaps you misunderstood the claim of the lecture notes? Note that they count the expected number of collisions of a specific string x with all other strings, not the total number of collisions over an entire data set.

> Note that they count the expected number of collisions of a specific string x with all other strings to get to N/M, not the total number of collisions over an entire data set.

That is also on the order of N^2/M/2 for a perfect universal hash.

That is simply basic probability and has nothing to do with how good your hash function is.

I would not treat N^2/M/2 as something "simple". For example, the (let's call it) "set's collision estimate" of CRC32 is way lower than N^2/M/2 in many SMHasher tests, yet its collision estimate for isolated strings is likely N/M. I just do not see how the collision estimate of a specific string is useful if it's not easily translatable to a set's collision estimate. The situation you are probably overlooking is that when you add a new string to an already constructed set with N^2/M/2 total collisions, the collision estimate of the new string cannot be N/M. Either I'm misunderstanding something or "universal hashing" concepts are very poorly defined. The PolyR paper defines a collision estimate like for cryptographic hashes, which are never treated as "universal" to my knowledge. With such liberal treatment of the collision estimate, the "universality" claim is a bit moot. N^2/M/2 is a formula used by SMHasher to calculate the expected number of collisions. If you look closer, it's the derivative of the set's collision estimate, which is N/M. But there's no mention of collision estimate's derivatives in "universal hashing".

> Either I'm misunderstanding something or "universal hashing" concepts are very poorly defined.

I think you are misunderstanding, because they are very strictly defined. The collision probability always talks about two arbitrary but different strings. You only talk about more than two strings once you get into k-independence for k > 2.

> I just do not see how collision estimate of a specific string is good, if it's not easily translatable to set's collision estimate.

It generally translates pretty well.

> PolyR paper defines a collision estimate like for cryptographic hashes, which are never treated as "universal" to my knowledge.

The latter has always been based on universal hash function theory (PolyR, Poly1305, Galois Counter Mode, UMAC, etc. are all (almost-)universal hash families).

> With such liberal treatment of collision estimate, "universality" claim is a bit moot.

It is not liberal at all, it is very precise.

> N^2/M/2 is a formula used by SMHasher to calculate expected number of collisions.

Expected number of collisions between ALL strings, not the expected number of collisions between ONE and all other strings. That is what the lecture notes you linked compute. Your original statement "universal hashing implies N/M collision estimate for all collision test sets, and not (N^2)/M/2 like can be seen in SMHasher tests" is false, because that is mathematically impossible. It's also not what the linked lecture notes claim.

> If you look closer, it's derivative of set's collision estimate which is N/M. But there's no mention of collision estimate's derivatives in "universal hashing".

I have no idea what you mean by that.

In "universal hashing", an N/M collision estimate already implies a set S from U. So, of course, I'm talking about more than two strings in a set. What's the purpose of talking about 2 strings in all sets? By the N/M derivative I mean that if you keep adding strings to a set, yielding sets with N+1, N+2... strings, each addition increases the set's collision estimate by N/M, if the set's current collision estimate is N^2/M/2.

> In "universal hashing", N/M collision estimate already implies a set S from U. So, of course, I'm talking about more than two strings in a set. What's the purpose of talking about sets having only 2 strings each?

Basic universal hashing is only talking about pairs of strings. Look, in the very lecture notes you linked: the universality bound is for ONE pair of strings x != y, and the expectation of N/M is then calculated from the bound on pairs (both definitions are reconstructed below).

> What's the purpose of talking about sets having only 2 strings each?

The purpose is that it's possible to analyze and prove things about 2 strings at a time, after which you can use statistics to extrapolate that behavior to sets of strings.

> By N/M derivative I mean that if you keep adding strings to a set, yielding sets with N+1, N+2... strings, addition increases set's collision estimate by N/M after each operation, if set's current collision estimate is N^2/M/2.

Can you please give a clear definition of what you mean by "set's collision estimate"?

The "set's collision estimate" is what the "estimated total number of collisions" equals in all SMHasher tests. Why does the paper you've quoted talk about "linearity of expectations", and on what premises?

> "set's collision estimate" is what "estimated total number of collisions" equals to, in all SMHasher tests.

Yes, universal hashing makes no direct claims about that. Any bounds on that are entirely statistical, derived from the bound on pairs of strings.

> Why the paper you've quoted talks about "linearity of expectation", on what premises?

https://math.stackexchange.com/questions/1810365/proof-of-linearity-for-expectation-given-random-variables-are-dependent

Also, this isn't a "paper you've quoted", this is from the lecture notes you linked in your initial post. It's from https://www.cs.cmu.edu/~avrim/451f11/lectures/lect1004.pdf.

Yes, partially it was my misunderstanding. The set's collision estimate is emergent and not really definable, while the "universality" constraints are much simpler. But I have a further question - where can I find a proof that PolyMur's permutation yields uniformly-random output for any input? Because without that, the Pr(x and y collide) < 1/M bound cannot be considered proven.

> The set's collision estimate is emergent

Exactly.

> and not really definable while "universality" constraints are much simpler.

There are also concepts of universality which go beyond traditional universality: those are (almost-)k-independent hash function families. They say that if you have a set of k elements, the probability that their hashes are any particular k-tuple is bounded. To my knowledge we only have practical ways of constructing k-independent hash functions for relatively small k, and only for fixed-size inputs, not arbitrary-length strings when k > 2.

> But I have a further question - where can I find a proof that PolyMur's permutation yields uniformly-random output for any input, because without that the Pr(x and y collide)<1/M bound cannot be considered proven.

As I said, Polymur is not exactly 1/M (where M = 2^64), but rather claims Pr(x and y collide) <= n * 2^-60.2 where n = len(x). The proof for that is here: https://github.com/orlp/polymur-hash/blob/master/extras/universality-proof.md.

I mean the proof that for all values of a set S from U, the permutation produces a set of pseudo-random values — values that look like a random oracle, if the observer does not know the original values of S.

That's a stronger property than regular universality, but luckily Polymur claims to be almost-2-independent, which is a stronger version of this anyway. Polymur claims that pairs of values look like random variables.
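The two screenshots quoted from the lecture notes did not survive extraction; the following is a reconstruction of the standard definitions being referenced (my notation, not the notes' exact rendering):

```latex
% Universality: for a family H of functions U -> {0,...,M-1}, for any
% fixed pair x != y, over the random choice of h from H:
\Pr_{h \in H}\left[\, h(x) = h(y) \,\right] \;\le\; \frac{1}{M}

% Expected collisions of one string x against a set S of N strings,
% by linearity of expectation:
\mathbb{E}\bigl[\, \#\{\, y \in S : h(y) = h(x) \,\} \,\bigr] \;\le\; \frac{N}{M}

% Expected total collisions over all pairs in S (what SMHasher estimates):
\mathbb{E}\bigl[\#\text{collisions}\bigr] \;\le\; \binom{N}{2}\frac{1}{M} \;\approx\; \frac{N^2}{2M}
```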
That is, Polymur claims that if H is chosen independently at random from the family, then for ANY m != m' and ANY x, y we have Pr[H(m) = x && H(m') = y] <= n * 2^-124.2. You can just ignore one of the two variables and assume it collides to get a bound of n * 2^-60.2 on any particular output. Note that again we prove the chance of any specific output is low, for a single input (or a single pair). A claim about "a set of pseudo-random values" where that set would have > 2 elements would again be an emergent probability from pulling multiple random numbers.

Also,

> But I have a further question - where can I find a proof that PolyMur's permutation yields uniformly-random output for any input, because without that the Pr(x and y collide)<1/M bound cannot be considered proven.

That claim is not true. Consider a 64-bit hash function with a perfect 2^-64 collision bound. I can take that hash function, set the lowest bit to 0 and use that as a new hash function. It's now perfectly valid to claim this new function has a 2^-63 bound. But obviously the output is not uniformly random.

I'm maybe overlooking something in your proof, but I do not see how your construction leads to pairwise independence (50% difference in bits) of outputs. Maybe you are referring to some "common knowledge" I have not read or can't remember at the moment? I do not argue with PolyMur's practical performance, but a proof is something more formal.

> But I have a further question - where can I find a proof that PolyMur's permutation yields uniformly-random output for any input, [because without that the Pr(x and y collide)<1/M bound cannot be considered proven].
> Consider a 64-bit hash function with a perfect 2^-64 collision bound. I can take that hash function, set the lowest bit to 0 and use that as a new hash function. It's now perfectly valid to claim this new function has a 2^-63 collision bound. But obviously the output is not uniformly random.

Uniform randomness is a statistical property. The 1/M bound is emergent from probability theory: if bin A is used (among M bins), the probability that the next independent random number will also refer to bin A is 1/M.

> I'm maybe overlooking something in your proof, but I do not see how your construction leads to pair-wise independence (50% difference in bits) of outputs.

That is a different definition of pairwise independence. The definition of pairwise independence in universal hashing can be seen on Wikipedia. It's also called strong universality, in our case almost-strong universality.

> Uniform randomness is a statistical property. 1/M bound is emergent from probability theory. If bin A is used (among M total bins), the probability the next independent random number will also refer to bin A is 1/M.

Sure, but as my counter-example showed, a low chance of collision does not imply every bit of the output looks random. In the most extreme case, consider hashing 32-bit numbers to 32-bit numbers. The identity function has 0 collisions, but obviously is not random.

Ok, understood about the pairwise independence. Can you comment on how the collision bound in the proof is useful when k is usually unchanging while hashing a set of values? In that condition, I can only rely on the message's properties, not the key. I'm referring to (14 + n/7) / |K|.

As for the zeroing-the-lowest-bit example which reduces collision expectation, the issue with the example is that while the collision expectation will change, and seemingly won't break anything, an initially random function will no longer be strictly uniformly random, and so probability theory won't be applicable. Strictly speaking, you can't use a 1/M pairwise collision estimate on a variable that was not previously shown to be uniformly random. That's for myself; maybe you do not look at things this way.

> Can you comment how collision bound in the proof is useful if k when hashing a set of values is usually unchanging.

The point is that k is chosen uniformly at random, independently from the values. As long as that holds, the bound is valid. At that point seeing k as "unchanging" doesn't matter. Since we assume the key was chosen independently from the value, it doesn't matter whether k was chosen just now or ages ago - the choice was independent. This does mean it's not secure to leak the hash values to an attacker; please read this section. If an attacker can see the hash values, he can potentially construct new values which are much more likely to collide.

> Strictly speaking, you can't use 1/M pair-wise collision estimate on a variable that was not previously shown to be uniformly-random. That for myself, maybe you do not look at things this way.

Well, that's just mathematically wrong. Each input behaving as if uniformly random is a sufficient but not necessary condition for collision resistance. Consider a 16-bit to 16-bit identity hash function. It is not random at all but it literally cannot create collisions. This isn't a trick either: it means if your input is 16 bits and you're building a hash table with 2^16 slots you can just use the identity hash function and never get collisions or the problems that come with them. That specific kind of hash table has a useful name: an array.

Well, events of collisions depend on variations in messages or keys. f = k^7 * (f + m[6]) implies f depends on both message and key at the same time. If the key is fixed in a set, then only messages affect collision events, functionally. As for myself, I cannot accept the collision bound for the case of a fixed key.

> Consider a 16-bit to 16-bit identity hash function. It is not random at all but it literally can not create collisions. This isn't a trick either, it means if your input is 16 bits and you're building a hash table with 2^16 slots you can just use the identity hash function and never get collisions or the problems that come with them. That specific kind of hash table has a useful name: an array.

I'm assuming an identity hash is non-random and it cannot be "universal" by definition. Then its collision expectation is knowingly 0, no need for a bound.

> As for myself, I cannot accept the collision bound for the case of a fixed key.

There is no collision bound for a fixed key at all. That's not how universal hash functions work. The bound always assumes the hash function is picked at random from the hash function family. This is fundamental.

> There is no collision bound for a fixed key at all. That's not how universal hash functions work. The bound is always assuming the hash function is picked at random from the hash function family.

But how is this practical? When one fills a structure with values, the key is always fixed. Otherwise value lookup is just impossible.

Again, SSL is secured using this assumption - the key is chosen randomly and independently from the values, and the attacker never gets to see the hashes directly.

SSL is a stream cipher beside KEM; this is unrelated to hashing.

> But how this is practical? When one fills a structure with values the key is always fixed.

The key isn't fixed. The key is chosen randomly at startup, when the hash function is initialized, independently from the values.

> SSL is a stream cipher beside KEM; this is unrelated to hashing.

SSL includes authentication of the data. Authentication in 2023 is typically done with Galois Counter Mode or Poly1305, both of which are universal-hash-based constructions. Either way, I'm sorry, this is taking up too much of my time now. I would strongly suggest doing some more reading on what universal hashing is and how it works on a fundamental level. The original paper that introduced universal hashing is Universal Classes of Hash Functions by Carter & Wegman. I would suggest starting there.

No problem. However, the key is usually fixed during the application's lifetime. It's practically the same as selecting a single hash function from a family for an unlimited duration of time, in the case of a server application, for example. Okay, abstractly it is an independent argument to the hash function, but functionally it does not change. MAC is another story - there the key and/or nonce changes frequently. (GCM is a stream-cipher mode of operation.) You may delete the issue if it seems unuseful or confusing to potential users.

Another possible issue I've found in PolyMur's proof, referring to your stance about the 1/M pairwise collision bound (that uniform randomness of hash outputs is out of the question): what your proof is lacking, I think, is that it does not show that pairwise collisions of outputs for m and m' are bounded to 1/M. Your proof relies on the prerequisite that keys are chosen in a uniformly random manner. But the case when both m and m' use the same key is not covered. So a central prerequisite of "universal hashing" (Pr[x and y collide] < 1/M) for the case of the same keys in x and y is not proven.

> Your proof relies on pre-requisite that keys are chosen in a uniformly-random manner. But the case when both m and m' use the same key is not covered.

It absolutely is. The key is not chosen per message, the key is chosen per function. The choice of k is what determines hash function H from the family. So in the claim Pr[H(m) = x && H(m') = y] <= eps there is only a single H, and thus a single choice of k.

Look Alexey... I'm aware of your work on komihash. I say this because I really need you to change your approach from trying to jump on the first thing that looks wrong to you to trying to learn instead. Universal hash functions are a mathematical construct. To understand their claims and proofs you have to be very careful, and understand the mathematical background. This is work, and it's not intuitive. I understand it can be hard and confusing. But claims that this and that isn't proven (especially things as basic as linearity of expectation) show you simply haven't done the homework. Please read the paper I have suggested to you, and really try to understand the math. Another suggestion is to read High Speed Hashing for Integers and Strings and do the exercises. Many of the things where you say "hey, this looks wrong" are covered in those exercises - they force you to examine your own assumptions. It is hard, I understand. I struggled with it too! But you have to get these fundamentals right before you can start to analyze more complex constructions.

Well, I expected you would tell me to read stuff. Yes, my understanding of the "universality" constraints was initially incomplete - the issue I had was with the total collision estimate of a set - I expected it to be the first thing a "universal" hash has to satisfy. I also have issues with terminology - it changes, and it may differ across sources, but that does not mean I do not know that probabilities of two uniformly random events sum. However, this in no way makes my further questions "absurd". Like you, I am interested in obtaining a proof of universality, and I'm taking your proof as some kind of basis. For example, how can you prove "linearity of expectation" for your hash if you did not prove the collision bound for the case of the same keys? MACs are totally different in this respect - they need to prove a collision bound for independent streams, not for a local data set.

I never said your questions were absurd. They're very natural questions. They are, however, also very basic questions covered in any text on universal hash functions, and are not questions specific to Polymur. Your questions apply to any universal hash function family. Which is why I said this takes too much of my time, and ask you to do the homework needed to cover the basics. If anything is unclear specifically about Polymur or its design I'm happy to help.

I do not say your proof is wrong - I only say it applies to a specific case of MAC hashing, where the key is uniquely generated for each hashing operation, without any requirement to store hashes in a single data set.

> I only say it applies to a specific case of MAC hashing, when key is uniquely generated for each hashing operation

Please read section 4, "authenticating multiple messages", from New hash functions and their use in authentication and set equality by Wegman and Carter. It proves that for any strongly universal hash function family you can reuse the same hash function for authentication as long as you hide the hash values from the attacker (which they do with a one-time pad). There are similar proofs that for any strongly universal hash function, hash table access with chaining takes O(1) expected time.

> I do not say your proof is wrong - I only say it applies to

The thing is, the proven claim has a standard form. It proves almost (strong) universality. Because of that we can reuse results that rely only on that property, such as the above results. This is why I say your questions are not about Polymur! If your problem isn't with the proof but with its applicability, you have general questions about universality, for which I already pointed you to multiple resources to learn from.

Is PolyMur "strongly universal", and where can I gather that from the proof? Sorry if that's obvious. In either case, my opinion is that if "non-strong universality" implies a collision bound only with the prerequisite of a "random choice of family functions", it is practically only useful for MACs. OK, I'm not arguing with the theory, but theories have their limits of applicability, and it's not always obvious from the theory itself where it isn't applicable. Anyway, thanks for the discussion. I still think the "almost universal" claims are a bit of an exaggeration, an unfair advantage. And I think most of the hash functions that pass all SMHasher tests are practically "universal", except most of them have no formal proofs. Because SMHasher was built to test some basic statistics that are also claimed for "universal" hash functions.

> Is PolyMur "strongly universal", and where I can gather that from the proof? Sorry if that's obvious.

Polymur is almost-strongly universal. That's this claim from the proof: if H is chosen randomly and independently, then for any m != m' and any hash outcomes x, y we have Pr[H(m) = x && H(m') = y] <= n * 2^-124.2. Instead of phrasing it as a probability, the paper by Carter and Wegman simply counts the number of functions H in the family such that H(m) = x && H(m') = y. But it's the same concept: divide this count by the total number of possible functions to get the probability (since we choose the function at random from the family).

> In either case, my opinion is that if "non-strong universality" implies collision bound only with a pre-requisite of "random choice of family functions", it is practically only useful for MACs. OK, not arguing with theory, but theories have their limits of applicability, and it's not always obvious from the theory itself where it isn't applicable.

Well, it's also useful in the case of hash tables, as it proves expected O(1) access time when using chaining, which is explicitly the intended purpose of Polymur. You can't just discount this proof 'in your opinion'. This is math, not opinion. Most modern hash table implementations already generate a random hash function per table, or per program, to protect against HashDoS.

> Anyway, thanks for the discussion. I still think "almost universal" claims are a bit of an exaggeration, an unfair advantage. And I think most of the hash functions that pass all SMHasher tests are practically "universal" except most of them have no formal proofs. Because SMHasher was built to test some basic statistics that are also claimed for "universal" hash functions.

I don't think it's unfair. It is easy to make hash functions that pass SMHasher but for which an attacker can still construct arbitrary multicollisions.

I doubt it, if the function passes the Perlin Noise and Seed tests as well. But as the SMHasher fork's author notes, quick brute-force generation of collisions is possible for all "fast" hashes. Except that server applications usually do not even give a chance to flood with unnecessary information. At the moment I think I'll leave this 40-year-old "universality" alone for the sake of a happier life. I can't swallow a proof that refers to |K| and its variation whereas functionally k is unchanging.
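A minimal sketch of the "random hash function per table/program" point made above, using the standard library's RandomState (each instance is an independently seeded member of a hash-function family — the random choice of H the thread keeps returning to):

```rust
use std::collections::hash_map::RandomState;
use std::hash::BuildHasher;

fn main() {
    // Each RandomState is an independently, randomly chosen hash function.
    // std's HashMap creates one per map by default, which is its HashDoS defense.
    let h1 = RandomState::new();
    let h2 = RandomState::new();

    // Same message, two independently chosen functions: the outputs are
    // unrelated, so a precomputed collision set against one choice of H
    // says nothing about another.
    println!("{:016x}", h1.hash_one("message"));
    println!("{:016x}", h2.hash_one("message"));
}
```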
2025-04-01T06:39:56.065185
2018-11-24T03:55:28
383956588
{ "authors": [ "james-darkfox", "orlp" ], "license": "Zlib", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:9411", "repo": "orlp/slotmap", "url": "https://github.com/orlp/slotmap/pull/20" }
gharchive/pull-request
Add support for custom hashers in SparseSecondaryMap.

Closes #14

Merging after #18 will require the tweak from std::hash to core::hash. I would rather merge this first and leave the no_std work for last.

@orlp Rebased onto your master.
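A hypothetical usage sketch of what this PR enables — constructing a SparseSecondaryMap with a caller-supplied BuildHasher. The with_hasher constructor name and signature are assumed from the std HashMap convention, not quoted from the merged API:

```rust
use std::collections::hash_map::RandomState; // stand-in for any custom BuildHasher
use slotmap::{DefaultKey, SlotMap, SparseSecondaryMap};

fn main() {
    let mut sm: SlotMap<DefaultKey, &str> = SlotMap::new();
    let key = sm.insert("entity");

    // Any BuildHasher works here; RandomState is just a placeholder — an
    // fxhash/ahash builder would slot in the same way for faster hashing.
    let hasher = RandomState::new();
    let mut positions: SparseSecondaryMap<DefaultKey, (f32, f32), RandomState> =
        SparseSecondaryMap::with_hasher(hasher);

    positions.insert(key, (1.0, 2.0));
    assert_eq!(positions.get(key), Some(&(1.0, 2.0)));
}
```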
2025-04-01T06:39:56.077976
2023-02-10T18:31:57
1580143349
{ "authors": [ "dehidehidehi", "ornicar" ], "license": "WTFPL", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:9412", "repo": "ornicar/userstyles", "url": "https://github.com/ornicar/userstyles/pull/5" }
gharchive/pull-request
Extra closing brace removed

Call me crazy, but I think there's a dangling closing brace.

OK, but why all the extra spaces?

Fixed the extra space.
2025-04-01T06:39:56.095209
2024-02-05T12:32:15
2118432707
{ "authors": [ "b-ma", "orottier" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:9413", "repo": "orottier/web-audio-api-rs", "url": "https://github.com/orottier/web-audio-api-rs/pull/447" }
gharchive/pull-request
MediaDevices: add support for MediaTrackConstraints.channelCount

Fixes #446

TODO: channelCount should be modeled as a ConstrainULong, not just a u32.
TODO: cubeb support

Cool, really nice you could make it this fast.
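For reference, a minimal sketch of how the spec's ConstrainULong (a bare value, or a range with optional exact/ideal members) could be modeled per the first TODO. This is an illustration under assumed names, not the crate's actual type:

```rust
/// Sketch of the Media Capture spec's ULongRange constraint members.
#[derive(Debug, Clone, Default)]
pub struct ConstrainULongRange {
    pub min: Option<u32>,
    pub max: Option<u32>,
    pub exact: Option<u32>,
    pub ideal: Option<u32>,
}

#[derive(Debug, Clone)]
pub enum ConstrainULong {
    Value(u32),
    Range(ConstrainULongRange),
}

impl ConstrainULong {
    /// Resolve the constraint against what the device supports.
    pub fn resolve(&self, supported_max: u32) -> Option<u32> {
        match self {
            ConstrainULong::Value(v) => (*v <= supported_max).then_some(*v),
            ConstrainULong::Range(r) => {
                if let Some(exact) = r.exact {
                    return (exact <= supported_max).then_some(exact);
                }
                let hi = r.max.unwrap_or(supported_max).min(supported_max);
                let lo = r.min.unwrap_or(1);
                if lo > hi {
                    return None; // over-constrained: no satisfiable channel count
                }
                Some(r.ideal.unwrap_or(hi).clamp(lo, hi))
            }
        }
    }
}

fn main() {
    let c = ConstrainULong::Range(ConstrainULongRange { ideal: Some(2), ..Default::default() });
    assert_eq!(c.resolve(8), Some(2)); // prefer stereo when the device allows it
}
```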
2025-04-01T06:39:56.111839
2022-01-22T07:16:59
1111309788
{ "authors": [ "b-ma", "orottier", "tasogare3710" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:9414", "repo": "orottier/web-audio-api-rs", "url": "https://github.com/orottier/web-audio-api-rs/pull/99" }
gharchive/pull-request
proof of concept: delegation.

The point of this exploration is to see if we can hide some of the internals of the (Base)AudioContext implementation from the user-facing code, as discussed in #76. Ideally AsBaseAudioContext should go, as it is not part of the official spec and might surprise users that they have to import that trait. I'm not entirely sure we can drop it though, as it is used in the generics setup. We should be able to drop it from user-facing code though. The BaseAudioContext must stay, as it is part of the spec.

@tasogare3710 my suggestion for moving forwards would be:

- scratch the current work and start fresh
- start by looking at 1 example, e.g. example/simple_delay.rs
- remove the import of AsBaseAudioContext — obviously it won't compile now
- add the required methods to AudioContext with the delegate trait, for example delegate context.create_delay to the base()
- now you can remove the create_delay method from AsBaseAudioContext
- rinse, repeat, for more methods and examples, and apply to OfflineAudioContext too
- check if the docs look good for AudioContext, OfflineAudioContext (cargo doc --lib --open)
- check if there is any user-facing code still containing AsBaseAudioContext (I mean the examples, integration tests and docs)

Thanks for the comments, it made me realize any choice we make will have disadvantages. Summarizing the three options on the table:

1. structs AudioContext, OfflineAudioContext and BaseAudioContext; trait AsBaseAudioContext for shared functionality, mimicking the inheritance. This is the current state. Advantages: Rust-like. Disadvantages: users need to import AsBaseAudioContext most of the time, which is not part of the spec.
2. structs AudioContext, OfflineAudioContext and ConcreteBaseAudioContext; trait BaseAudioContext for shared functionality, mimicking the inheritance. Advantages: no need for strange imports in user-facing code, still powerful and not too bad from a Rust perspective. Disadvantages: whenever you need to deal with the actual base (this is rare), the struct name may be surprising.
3. structs AudioContext, OfflineAudioContext and BaseAudioContext; no traits. We use delegation/macros to make all shared functionality available on the three structs. Advantages: no names that are not part of the spec. Disadvantages: not Rust-like; no possibility for a function to be generic over a shared trait (we use that in Node constructors now), though we could avoid that issue by always taking in a BaseAudioContext.

For context, we are using option 2 for AudioNode and AudioScheduledSourceNode, which I think works out well. There is no issue there because they are never used as concrete types. All in all I think I am in favour of option 2 now — we model inheritance using traits, always. We could even make the namespace like this: crate::context::{AudioContext, OfflineAudioContext, BaseAudioContext, concrete::BaseAudioContext}. @b-ma any final thoughts?

Hey, yup I agree, option 2 seems both the cleaner and simpler solution (a sketch of its shape follows this thread). (Then we will also have to fix the discrepancy of the AudioScheduledSourceNode to make it coherent with this rule too; I will have a look.) I don't know about the namespacing concrete::BaseAudioContext; it seems quite good but also a bit strange alongside the BaseAudioContextInner. Wouldn't it be possible to somehow merge BaseAudioContextInner and ConcreteBaseAudioContext (I don't really have a clear picture of the possible implications)? If that's not possible or just complicates things, maybe do something like concrete::{BaseAudioContext, BaseAudioContextInner} so that these two stay coherent?

Fortunately BaseAudioContextInner is an implementation detail and is not exposed in the public interface. The reason for BaseAudioContext { inner: Arc<BaseAudioContextInner> } is that we clone it many times. All the AudioNodes have static lifetime (meaning they do not contain references, only owned values), which is great for user-friendliness. The nodes contain an AudioContextRegistration which has a reference to the base context via that Arc. We could rename BaseAudioContextInner --> BaseAudioFields if that makes it easier on the eyes?

Yup I see, thanks for the explanation; it indeed looks logical to keep this simple.

> We could rename BaseAudioContextInner --> BaseAudioFields if that makes it easier for the eyes?

No, Inner is good I think, and already used here and there if I remember well. However, I would go for ConcreteAudioContextInner, so we have: trait BaseAudioContext (which retrieves), ConcreteBaseAudioContext (which holds), ConcreteBaseAudioContextInner (not sure of my keyword relationships, but you see the idea :)

> (Then, we will also have to fix the discrepancy of the AudioScheduledSourceNode to make it coherent with this rule too, I will have a look)

Yes, I think we can make that one follow the spec better. Right now the trait exposes the implementation detail of Scheduler and then derives the methods from the spec (start, stop etc). We should change that to just the raw trait, requiring implementers to supply start(_at), stop, etc. Internally we can use the Scheduler, but we should remove that from the public API (the whole src/control.rs file actually). Was that what you had in mind too @b-ma?

> We should change that to just the raw trait, require implementers to supply start(_at), stop etc. Internally we can use the Scheduler, but we should remove that from the public API (the whole src/control.rs file actually)

Yup, I think it looks really safer to go that path.

Thanks for moving the discussion forward @tasogare3710. I'm closing this PR in favour of #102.

@orottier

> Thanks for moving the discussion forward @tasogare3710 I'm closing this PR in favour of #102

Got it. It's my pleasure.
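A minimal sketch of the option 2 shape settled on above — a spec-named BaseAudioContext trait over a ConcreteBaseAudioContext that holds the shared state behind an Arc. Names follow the thread; the fields and method bodies are placeholders, not the crate's real implementation:

```rust
use std::sync::Arc;

// Shared fields live behind an Arc so nodes can hold an owned handle.
struct ConcreteBaseAudioContextInner {
    sample_rate: f32,
}

#[derive(Clone)]
pub struct ConcreteBaseAudioContext {
    inner: Arc<ConcreteBaseAudioContextInner>,
}

// The spec-named trait: shared functionality lives here as default methods,
// so user code never imports an As* helper trait.
pub trait BaseAudioContext {
    fn base(&self) -> &ConcreteBaseAudioContext;

    fn sample_rate(&self) -> f32 {
        self.base().inner.sample_rate
    }
}

pub struct AudioContext {
    base: ConcreteBaseAudioContext,
}

pub struct OfflineAudioContext {
    base: ConcreteBaseAudioContext,
}

impl BaseAudioContext for AudioContext {
    fn base(&self) -> &ConcreteBaseAudioContext {
        &self.base
    }
}

impl BaseAudioContext for OfflineAudioContext {
    fn base(&self) -> &ConcreteBaseAudioContext {
        &self.base
    }
}

fn main() {
    let ctx = AudioContext {
        base: ConcreteBaseAudioContext {
            inner: Arc::new(ConcreteBaseAudioContextInner { sample_rate: 44_100.0 }),
        },
    };
    // Node constructors can stay generic over the spec-named trait.
    println!("{}", ctx.sample_rate());
}
```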
2025-04-01T06:39:56.175136
2023-04-03T11:28:57
1651881131
{ "authors": [ "iblancasa", "pavolloffay" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:9415", "repo": "os-observability/redhat-rhosdt-samples", "url": "https://github.com/os-observability/redhat-rhosdt-samples/pull/4" }
gharchive/pull-request
Add deployment example This PR adds an example of how to use the OpenTelemetry Collector (Red Hat Distribution) as a deployment. @iblancasa please wait for approvals before merging @pavolloffay since we talked and you told me it looks OK, I decided to merge after your last requests.
2025-04-01T06:39:56.204955
2024-01-26T20:14:57
2102836814
{ "authors": [ "osalabs", "vladsavchuk" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:9416", "repo": "osalabs/osafw-asp.net-core", "url": "https://github.com/osalabs/osafw-asp.net-core/pull/106" }
gharchive/pull-request
Update FwController.cs This will not be excessive, because the JSON response didn't get the return_url value as location in afterSaveLocation() earlier. I used it with modals and form AJAX submits to other controllers. Let's discuss this. Wouldn't it be better to update afterSaveLocation instead? So we can have just a single location in the response? Having two URLs in the response doesn't seem right. Revoking this pull request. afterSaveLocation must be reviewed for the JSON case. Currently, the return_url parameter is doubled in the JSON location result. And I noticed that the HTTP error code has been changed from 200 to 400 for the validation exception, so other changes are necessary (in the autosave code, for example) to catch the form errors. Currently, I parse the location result and extract the return_url. My
2025-04-01T06:39:56.208600
2015-12-27T00:24:50
123954590
{ "authors": [ "PhilippSalvisberg", "osalvador" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:9417", "repo": "osalvador/tePLSQL", "url": "https://github.com/osalvador/tePLSQL/issues/10" }
gharchive/issue
Include template from schema does not work

Hello Oscar, I started working on the oddgen project. The goal is to support several template languages, one of them is tePLSQL. I've tried to include a template using <%@ include(templ, my_package, package_body, demo) %>. This was not working for several reasons:

- There is a typo in tePLSQL.pkb on line 532 (l_object_type instead of l_schema).
- dbms_metadata.get_ddl fails when called from another user, because dbms_metadata requires the SELECT_CATALOG_ROLE which is not visible in a definer rights package.

The first problem is easy to fix ;-). For the second one I see the following options:

- from 12.1 on you may grant the role to the package directly, e.g. GRANT select_catalog_role TO PACKAGE teplsql.teplsql;. This was my solution.
- to support older Oracle versions you may switch to invoker rights or you may access the dba_source view directly

Thanks. Best Regards, Philipp

Thanks Philipp for finding this bug. I will fix it soon.
2025-04-01T06:39:56.211732
2024-02-29T16:16:02
2161622284
{ "authors": [ "supakeen" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:9418", "repo": "osbuild/images", "url": "https://github.com/osbuild/images/pull/485" }
gharchive/pull-request
fedora: webui only on rawhide The webui for Fedora has been delayed until Fedora 41, current rawhide. Let our version gates reflect that. Speaking of, do we want a const RAWHIDE somewhere in distro/fedora? Duplicate of #479.
2025-04-01T06:39:56.260065
2021-12-04T17:07:23
1071252499
{ "authors": [ "AlexandreMarkus", "l0drex" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:9419", "repo": "oskarsh/Yin-Yang", "url": "https://github.com/oskarsh/Yin-Yang/issues/110" }
gharchive/issue
Support icon theme I would like to be able to choose different icon themes for light/dark. Which icons do you mean? Do you use Gnome, KDE or something else? Or do you mean an application? I mean all application icons. I use KDE. Ok, this was already requested in #50
2025-04-01T06:39:56.265157
2015-08-15T09:46:13
101155956
{ "authors": [ "harry-wood" ], "license": "apache-2.0", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:9420", "repo": "osm-cwg/posts", "url": "https://github.com/osm-cwg/posts/issues/8" }
gharchive/issue
Birthday and hack weekend I should've done a blog post about the birthday and London hack weekend earlier. My excuse is that somebody else had the list of hacks. He's published that now so I think I'll write something and put it out today: https://hackpad.com/Birthday-and-hack-weekend-HJAjZHsJ8eb posted https://blog.openstreetmap.org/2015/08/15/birthday-hack-weekend/
2025-04-01T06:39:56.271509
2024-01-13T07:50:24
2080165292
{ "authors": [ "KasperFranz", "matkoniecz" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:9421", "repo": "osm-quality/wikibrain", "url": "https://github.com/osm-quality/wikibrain/issues/9" }
gharchive/issue
failing test - test_children_center_is_not_an_intentional_human_activity This test is now failing since "instance of" was changed from camp to summer camp. https://www.wikidata.org/w/index.php?title=Q706474&diff=2049000673&oldid=1982054285 I think it might be worth finding an alternative camp for this test case. Can you try following https://github.com/osm-quality/wikibrain/blob/master/CONTRIBUTING.md#nonsense-reports ? Feel free to write here on first encounter with something unclear/confusing.
2025-04-01T06:39:56.279557
2018-02-05T10:14:26
294338262
{ "authors": [ "nlehuby" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:9422", "repo": "osm-without-borders/cosmogony", "url": "https://github.com/osm-without-borders/cosmogony/issues/23" }
gharchive/issue
Explore the cosmogony

We will need a visualization tool (or maybe several tools?) for our day-to-day usage of cosmogony. The purpose of this issue is to gather our needs. Then we may summarize it in the readme of this repo.

Visual coverage by zone type: we want to explore the world on a map, select a zone type and see the existing zones of this type on the map, to get an idea of the coverage. A POC has been done in this repo, PR: https://github.com/osm-without-borders/cosmogony_explorer/pull/1

View zone metadata: we want to select a zone and get all its metadata (names in different languages, wikidata id, etc).

All zones containing a point: we want to click the map and see all the zones including the point.

Explore the hierarchy: we want to select a zone and get an idea of its hierarchy: see all its parent zones, all its direct child zones, all its child zones cascading the hierarchy, and its other linked zones.

Download some zones: we want to select some zones (selecting from the map and/or using the hierarchy) and download them in a GIS-friendly format (at least geojson, with metadata as properties).

Quality assurance: we want some dashboard with the coverage test results described in issue #4.

We have a pretty good start in this repo: https://github.com/osm-without-borders/cosmogony_explorer

[x] Visual coverage by zone type
[x] View zone metadata
[x] Explore the hierarchy
2025-04-01T06:39:56.307009
2018-07-19T07:42:35
342614261
{ "authors": [ "SunStormRain", "joto" ], "license": "BSL-1.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:9423", "repo": "osmcode/libosmium", "url": "https://github.com/osmcode/libosmium/issues/261" }
gharchive/issue
add_metadata list of attributes - no effect on history file output I would like to write OSM history files with only version and timestamp information, using libosmium release v2.14.0 (Debian GNU/Linux 9). I used the following command, separating the attributes with + as suggested in the release note: osmium cat file-history.osm -o file-history-VT.osm -f osm,add_metadata=version+timestamp --overwrite I expected version and timestamp data to be included, but no uid, no user and no changeset data. The output file included all of them, as if I had used add_metadata=true. I can't reproduce this. Are you sure you are calling the right osmium program? Try osmium version, it tells you which version of the osmium program and of libosmium it is. Yes, osmium version showed libosmium version 2.13.1. Right hint! Thank you
2025-04-01T06:39:56.322677
2023-06-06T22:29:52
1744710309
{ "authors": [ "dzmitry-lahoda" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:9424", "repo": "osmosis-labs/beaker", "url": "https://github.com/osmosis-labs/beaker/pull/121" }
gharchive/pull-request
Do not ignore the lock file, because I cannot compile until the lock is copied for docs.rs. So without the lock file people cannot compile - or at least cannot compile deterministically and safely. If I recall correctly, the Rust guideline for applications is to add the lock file to the repo.
2025-04-01T06:39:56.327140
2022-08-12T04:03:40
1336723259
{ "authors": [ "iboss-ptk" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:9425", "repo": "osmosis-labs/osmosis-rust", "url": "https://github.com/osmosis-labs/osmosis-rust/issues/24" }
gharchive/issue
Reducing dependency osmosis-std wasm size impact. Background: potentially due to prost, the size of the generated wasm is around +200k. Expectation: see if the size impact is really from prost or not; if so, find a way to reduce it; if not possible, consider removing prost and making everything JSON-serializable in cosmwasm. It seems that osmosis-std doesn't really cause the issue. Close for now.
2025-04-01T06:39:56.332614
2022-10-06T01:30:58
1398600080
{ "authors": [ "RusAkh", "georgemc98" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:9426", "repo": "osmosis-labs/osmosis", "url": "https://github.com/osmosis-labs/osmosis/issues/2956" }
gharchive/issue
[x/gamm]: Add RPC query endpoints for joining and exiting pools

Background: It would be helpful to have an RPC query that returns the LP shares received when providing liquidity to a pool. It would also be helpful to have a query that returns the assets received when withdrawing liquidity from a liquidity pool. These queries will be useful when using a StargateQuery in CosmWasm smart contracts.

Suggested Design: Here is the commit for the PoolType query that was added recently: https://github.com/osmosis-labs/osmosis/commit/e55e13d709628cff2075ae1576f993ed2a8c310e. The design should follow the steps taken in this commit to add RPC endpoints for the following four queries (see the illustrative sketch after this list):

- QueryJoinPool - Parameters: QueryJoinPoolSharesRequest{ PoolId, TokenInMaxs }; Returns: QueryJoinPoolSharesResponse{ ShareOutAmount, TokenIn }
- QueryJoinSwapExactAmountIn - Parameters: QueryJoinSwapExactAmountInRequest{ PoolId, TokenIn }; Returns: QueryJoinSwapExactAmountInResponse{ ShareOutAmount }
- QueryExitPool - Parameters: QueryExitPoolSharesRequest{ PoolId, ShareInAmount }; Returns: QueryExitPoolSharesResponse{ TokenOut }
- QueryExitSwapShareAmountIn - Parameters: QueryExitSwapShareAmountInRequest{ PoolId, TokenOutDenom, ShareInAmount }; Returns: QueryExitSwapShareAmountInResponse{ TokenOutAmount }

I would like to work on this issue! Excellent. I am going to start on it today.
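Purely to visualize the request/response shapes listed above - this is an illustrative sketch, not the real implementation (Osmosis would define these as protobuf messages in Go, and all field types here are guesses):

```rust
#![allow(dead_code)]

// Hypothetical coin type standing in for cosmos-sdk's Coin.
struct Coin { denom: String, amount: u128 }

struct QueryJoinPoolSharesRequest { pool_id: u64, token_in_maxs: Vec<Coin> }
struct QueryJoinPoolSharesResponse { share_out_amount: u128, token_in: Vec<Coin> }

struct QueryJoinSwapExactAmountInRequest { pool_id: u64, token_in: Coin }
struct QueryJoinSwapExactAmountInResponse { share_out_amount: u128 }

struct QueryExitPoolSharesRequest { pool_id: u64, share_in_amount: u128 }
struct QueryExitPoolSharesResponse { token_out: Vec<Coin> }

struct QueryExitSwapShareAmountInRequest { pool_id: u64, token_out_denom: String, share_in_amount: u128 }
struct QueryExitSwapShareAmountInResponse { token_out_amount: u128 }

fn main() {}
```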
2025-04-01T06:39:56.385950
2019-02-20T00:29:07
412188210
{ "authors": [ "caguero", "iche033" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:9427", "repo": "osrf/ros1_ign_bridge", "url": "https://github.com/osrf/ros1_ign_bridge/pull/4" }
gharchive/pull-request
Add Pose / Transform msg support Add conversion between ign::msgs::Pose and geometry_msgs/Pose, geometry_msgs/PoseStamped, geometry_msgs/Transform, geometry_msgs/TransformStamped. Unlike previous msg types, this is a one-to-many mapping. This is possible since the ign::msgs::Pose msg captures most of the fields in the above ROS geometry msgs, with the following exception: geometry_msgs/TransformStamped's child_frame_id field. The initial plan was to introduce a frame field in ignition::msgs::Pose, but later examination of the current ros1_ign_bridge implementation revealed that a similar ROS msg field, frame_id, is being stored in an any field in ign::msgs::Header. So I took the same approach and stored the child_frame_id in the ign header msg. Looks good to me! +1
2025-04-01T06:39:56.387508
2020-02-03T09:42:17
558961314
{ "authors": [ "aaronchongth" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:9428", "repo": "osrf/traffic_editor", "url": "https://github.com/osrf/traffic_editor/pull/47" }
gharchive/pull-request
Bug/fix dict illegal accesses. Added some None initializations and some checks before accessing. These should fix problems for super minimal maps that have no doors or some other elements. Oof, yeah, I didn't dig through it deep enough to figure out if they were basic maps/dictionaries or some other fancy collection. Sure thing.
2025-04-01T06:39:56.415957
2021-01-07T22:19:51
781652443
{ "authors": [ "gaoyaqing12", "woznik" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:9429", "repo": "oss-review-toolkit/ort", "url": "https://github.com/oss-review-toolkit/ort/issues/3491" }
gharchive/issue
python code analyze error

My system has Python 3.7.3 installed; these are my Python and pip versions:

```
C:\Users\tony>python -V
Python 3.7.3

C:\Users\tony>pip -V
pip 19.0.3 from c:\users\tony\appdata\local\programs\python\python37\lib\site-packages\pip (python 3.7)

C:\Users\tony>
```

When I analyze a Python project that includes a requirements.txt, it shows an error in my analyze.result.json:

```
"issues" : { "PIP::requirements.txt:" : [ { "timestamp" : "2021-01-07T21:58:27.699009500Z", "source" : "PIP", "message" : "Resolving dependencies for 'requirements.txt' failed with: IOException: Running 'py -2 C:\\Users\\tony\\AppData\\Local\\Temp\\python_interpreter17850449696562758315.py' in 'C:\\Users\\tony\\Desktop\\ort\\ort' failed with exit code 103:\nInstalled Pythons found by py Launcher for Windows *\n\nRequested Python version (2) not installed, use -0 for available pythons\n", "severity" : "ERROR" } ] }
```

I don't know what I can do next to continue. Hello @gaoyaqing12 How did you manage to resolve that issue?
2025-04-01T06:39:56.421444
2022-08-02T01:08:32
1325156016
{ "authors": [ "azeemshaikh38", "laurentsimon", "naveensrinivasan" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:9430", "repo": "ossf/scorecard", "url": "https://github.com/ossf/scorecard/issues/2115" }
gharchive/issue
Release scorecard Release scorecard https://github.com/sigstore/sigstore-java/issues/2#issuecomment-1201899264 @laurentsimon @azeemshaikh38 Planning to release at this SHA 69eb1ccf1d0cf8c5b291044479f18672bf250325. SGTM. Thank you! https://github.com/ossf/scorecard/releases/tag/v4.5.0
2025-04-01T06:39:56.423127
2023-06-16T12:54:22
1760587483
{ "authors": [ "andrelmbackman" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:9431", "repo": "ossf/scorecard", "url": "https://github.com/ossf/scorecard/issues/3173" }
gharchive/issue
removal of --verbosity flag As I was tinkering with the scorecard, I encountered the --verbosity [string] flag. No options are given, and the argument string seems to be disregarded. Any input string seems to be accepted, yet it is set to the default level of 'info'. I may be wrong, but for now this --verbosity option seems redundant and removable. Alternatively, options such as debug, error etc. could be implemented and described in the help message. Thank you for getting back to me about this. My bad; I will look into it and propose some changes, starting with displaying the options for the --verbosity flag in the usage message.
2025-04-01T06:39:56.429077
2020-12-06T09:39:44
757885794
{ "authors": [ "osso73" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:9432", "repo": "osso73/classic_games", "url": "https://github.com/osso73/classic_games/issues/2" }
gharchive/issue
Document code for pong Add docstrings and comments in the code Done. Code documented at the same time as creating the tests, as per commit cc9a398991bb19ef984eeea47067baad594abfc2.
2025-04-01T06:39:56.440298
2021-11-18T20:32:50
1057774018
{ "authors": [ "RishabhSaini", "cgwalters" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:9433", "repo": "ostreedev/ostree-rs-ext", "url": "https://github.com/ostreedev/ostree-rs-ext/issues/163" }
gharchive/issue
test that we support image signatures

xref https://github.com/containers/skopeo/issues/1482 We should validate that we're doing image signatures via the proxy correctly. In this issue, the great thing about the new ostree-native-container flow is that if you have a setup to sign container images, that exact same setup can be used to sign OS updates. See https://docs.podman.io/en/latest/markdown/podman-image-sign.1.html and https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux_atomic_host/7/html/managing_containers/signing_container_images for some old-style GPG signatures. As of recently, the containers/image stack gained support for "cosign"; see https://github.com/containers/skopeo/pull/1701

To test the recent changes for policy verification:

- Modify /etc/containers/policy.json for sigstore/gpg signed images from the remote registry
- Sign an existing fcos image and push it to the remote registry
- Try doing an rpm-ostree rebase ${signed-image}
- Ensure it fails

Since we do not currently sign fcos or any ostree-based images, we need to have signed images available. Initially it was thought this could be done locally by doing skopeo copy docker://quay.io/fedora/fedora-coreos:testing-devel oci:/var/lib/containers/signed-local-registry/sigstore/test.oci --sign-by-sigstore-private-key fcos.key. But this fails, unfortunately, since the way sigstore signs images is by pushing the generated artifacts to the remote registry. Hence, signing a local oci or dir does not work. It instead gives the following error: Cannot determine canonical Docker reference for destination oci:/var/lib/containers/signed-local-registry/sigstore/test.oci. Instead we need to be able to push this to some ephemeral testing Docker image registry. The perfect candidate was ttl.sh, as mentioned in the sigstore documentation, but unfortunately the fcos image exceeds the maximum image size limit there. Is there any other registry we could push to and verify instead? So CI on this repository mainly uses GHA, for which there is https://docs.github.com/en/actions/using-containerized-services/about-service-containers But that's just sugar for running a container... we can run any registry (quay.io, docker/distribution or whatever) inside a GHA job, right?
2025-04-01T06:39:56.511355
2022-05-23T00:15:10
1244409771
{ "authors": [ "eemiily", "oswinrodrigues" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:9434", "repo": "oswinrodrigues/steward-little", "url": "https://github.com/oswinrodrigues/steward-little/issues/10" }
gharchive/issue
Consider using Notion API To add a database item, one purchase at a time, rather than manually importing a CSV at the end. Is this valuable? Even if we're already generating monthly CSVs in the process? Resolved in 21d1e929a30c825ffca5849a6bf27f39e4a82205. I experimented with bypassing the categorized CSV and going straight from categorization to Notion, but it was super slow to import (> 5 min for one month of data, and even then it failed partway through). So, keeping the CSVs seemed to be a better choice with respect to both performance and safety. Until we have a solution for checking against duplicate entries, partial uploads are a big issue.
2025-04-01T06:39:56.549250
2024-02-29T14:09:04
2161356304
{ "authors": [ "omris94", "orishoshan", "volk1234" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:9435", "repo": "otterize/credentials-operator", "url": "https://github.com/otterize/credentials-operator/issues/112" }
gharchive/issue
Add support for marking AWS roles and policies as unused instead of deleting them upon cleanup

Users have requested to be able to configure the credentials operator to not delete AWS IAM roles and policies, but instead tag them as unused - a sort of "soft delete" mode. @omris94 please add info on how this feature will be configured. https://github.com/otterize/intents-operator/issues/366

I suggest the following solution. To indicate that AWS IAM roles and policies should only be soft-deleted when they are not used anymore, there are two methods to follow:

- If you want to avoid deletion of AWS roles and policies corresponding with a specific pod, label the pod with credentials-operator.otterize.com/aws-use-soft-delete=true.
- If you want to globally avoid the deletion of AWS roles and policies, initialize the credentials-operator with the --aws-use-soft-delete=true flag. You can set this flag by adjusting the Helm chart's value (global.aws.useSoftDelete).

What would the soft deletion of AWS IAM roles and policies look like?

- Soft deletion of an AWS IAM policy will be performed by tagging the policy with otterize/softDeletedAt=<timeOfDeletion>
- Soft deletion of an AWS IAM role will be performed by tagging the role and all of its policies with otterize/softDeletedAt=<timeOfDeletion>

When does the soft deletion occur? In every case that would cause a deletion without this feature, such as serviceAccount deletion for roles or ClientIntent deletion for policies. At the moment, we don't have the necessary logic to remove orphaned policies/roles. This means that if you start with --aws-use-soft-delete=true or its pod-level equivalent, you will need to switch it back to false before deleting the pod or ClientIntents to ensure that the roles/policies are removed. However, everything is possible, so we may be able to implement this logic in the future. So the answer is yes. That means if I switch it back, the operator will start managing the role/policy again. Looks good then.
2025-04-01T06:39:56.557337
2017-08-14T14:07:39
250041688
{ "authors": [ "o-o00o-o", "otykier" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:9436", "repo": "otykier/TabularEditor", "url": "https://github.com/otykier/TabularEditor/issues/82" }
gharchive/issue
Integration with Source Control to ensure new items get added We are moving to Tabular Editor for the save-to-folder capability, which provides three-way merging safety and simplicity. However, one tradeoff is that for new objects (tables, columns, measures etc.) the files require remembering to "Add" to the workspace, otherwise these parts of the model are forgotten on the merge. It would be great if somehow this could be solved, as we have given safety and efficiency with one hand but taken away with the other. Of course full source control integration might be overkill, but some support for at least TFS and Git would be greatly appreciated. Yes, we're facing the same issue internally at one of our clients. Since apparently this is a useful feature, I will investigate if there is some easy way to enable better source control integration with Tabular Editor. See also issue #67. Perhaps Tabular Editor can somehow "detect" that it's saving files back to a source controlled folder and then ensure that new files are automatically added to the TFS/git workspace. Will get back to you when I have an update. Highly recommend using Git together with the Save to Folder option, as Git automatically detects file additions/deletions, whereas TFS/TFVC does not... Tabular Editor will not get support for TFS/TFVC, as Git is the more popular source control system. Tracking Git integration on issue #104. Closing this issue.
2025-04-01T06:39:56.562170
2022-12-14T17:32:02
1497095317
{ "authors": [ "alexmojaki", "lslunis", "stuhlmueller" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:9437", "repo": "oughtinc/ice", "url": "https://github.com/oughtinc/ice/pull/161" }
gharchive/pull-request
Infer call children from parents Minor changes to things in the trace format that were bothering me: Rather than storing children explicitly in the trace, infer them from the parents when reading the trace. So no more "01GM8SWJ2YKM8QQNSRA07W3VQR.children.01GM8SWJ2Y29PZMSKVH0AJQHQC":true. This: Saves bytes. Should allow fixing https://github.com/oughtinc/ice/pull/141#discussion_r1042698274 better. Means that the type children?: Calls in CallInfo is actually true now, i.e. the values of children are calls instead of booleans. Emit a single value which is combined into the existing calls using lodash's merge instead of set with paths. In particular this means the end lines change from this: { "01GM8SWJ2Y29PZMSKVH0AJQHQC.result": ..., "01GM8SWJ2Y29PZMSKVH0AJQHQC.shortResult": ..., "01GM8SWJ2Y29PZMSKVH0AJQHQC.end": ... } to this: { "01GM8SWJ2Y29PZMSKVH0AJQHQC": { "result": ..., "shortResult": ..., "end": ... } } which is more space efficient, more natural, and easier to work with (no callId = path.split(".")[0]). Haven't actually tested this yet. Is this ready for review? Haven't actually tested this yet. This was because of the broken dev server. Just tried building the UI, got an error as soon as I looked at a trace. Will revisit this once the HMR issue is fixed. Closing for now, feel free to reopen.
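Not part of the PR itself, but to illustrate the infer-children-from-parents idea in isolation: a one-pass grouping over a flat list of calls, sketched here in Rust with made-up names (the project's real code is TypeScript):

```rust
use std::collections::HashMap;

// Illustrative record: each call in the trace knows its parent, if any.
struct Call {
    id: String,
    parent: Option<String>,
}

// Build the child lists at read time instead of storing them in the trace.
fn infer_children(calls: &[Call]) -> HashMap<&str, Vec<&str>> {
    let mut children: HashMap<&str, Vec<&str>> = HashMap::new();
    for call in calls {
        if let Some(parent) = &call.parent {
            children.entry(parent.as_str()).or_default().push(call.id.as_str());
        }
    }
    children
}

fn main() {
    let calls = vec![
        Call { id: "root".into(), parent: None },
        Call { id: "a".into(), parent: Some("root".into()) },
        Call { id: "b".into(), parent: Some("root".into()) },
    ];
    let children = infer_children(&calls);
    assert_eq!(children["root"], vec!["a", "b"]);
}
```

The trade-off is a small amount of work when loading the trace in exchange for fewer bytes on the wire and no redundant parent/child bookkeeping to keep consistent.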
2025-04-01T06:39:56.564768
2017-08-09T13:22:06
249028821
{ "authors": [ "abom", "lucasvanhalst" ], "license": "apache-2.0", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:9438", "repo": "our-city-app/oca-backend", "url": "https://github.com/our-city-app/oca-backend/issues/438" }
gharchive/issue
Event with multiple dates only shows first date See https://rogerthat-server.appspot.com/internal/shop/questions/6621295077228544 Event with multiple dates Only first date is shown in the app Example: Event is only shown on the 28th in the app, and not on 29, 30 or 31 As @bart-at-mobicage mentioned, it would be better to list all the dates and their events, instead of just listing the first date, and the user needs to go to event details to see other dates.
2025-04-01T06:39:56.566263
2018-05-14T07:50:32
322706988
{ "authors": [ "bart-at-mobicage" ], "license": "apache-2.0", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:9439", "repo": "our-city-app/oca-backend", "url": "https://github.com/our-city-app/oca-backend/issues/887" }
gharchive/issue
Add an OCA branded watermark to main branding Add a watermark, like we did with DJ-Matic services Looks like watermarks aren't supported yet in native brandings
2025-04-01T06:39:56.569126
2023-03-17T17:10:26
1629657914
{ "authors": [ "psatyajeet" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:9440", "repo": "ourzora/nouns-builder", "url": "https://github.com/ourzora/nouns-builder/pull/146" }
gharchive/pull-request
Add token holdings cutoff date to message Ah! Created this PR too early
2025-04-01T06:39:56.600024
2018-11-09T23:15:47
379354785
{ "authors": [ "davidbirchwork", "senakafdo" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:9441", "repo": "ove/ove-asset-services", "url": "https://github.com/ove/ove-asset-services/issues/26" }
gharchive/issue
[Image Tiles Service] Consistency bug: AssetManagerHost vs ServiceHostUrl in appsettings.json We should have either AssetManagerHost and ServiceHost or AssetManagerHostUrl and ServiceHostUrl. Fixed in PR #27. lol not much point assigning me these issues if they are opened, assigned, a fix committed and merged over a weekend before I get to see them ;) thanks for the code improvement @senakafdo
2025-04-01T06:39:56.657522
2022-12-28T03:21:30
1512357038
{ "authors": [ "Mkeefeus", "thelindat" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:9442", "repo": "overextended/ox_target", "url": "https://github.com/overextended/ox_target/issues/53" }
gharchive/issue
[Bug] Debugs are not removed on Zone Removal When a zone is removed, the debug draws don't seem to be removed as well. In the screen shot the zones themselves are not present as I can't target the stands, but the spheres are still there, and sometimes would double up when the resource is started again (see below) Was an ox_lib issue and has been resolved.
2025-04-01T06:39:56.672335
2021-04-29T19:23:41
871383513
{ "authors": [ "overlookmotel" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:9443", "repo": "overlookmotel/livepack", "url": "https://github.com/overlookmotel/livepack/issues/169" }
gharchive/issue
Only serialize properties of objects which are accessed

Input:

```js
const obj = { x: 1, y: 2, z: 3 };
export default () => obj.x;
```

Current output:

```js
export default (
    obj => () => obj.x
)( { x: 1, y: 2, z: 3 } );
```

As x is the only property of obj which is accessed, this could be reduced to:

```js
export default ( x => () => x )( 1 );
```

Optimizations

The following optimizations can be applied:

1. Omit unused properties

Where only certain properties of an object are accessed, any other properties can be omitted:

```js
// Input
const obj = { x: 1, y: 2, z: 3 };
export default () => obj.x + obj['y'];

// Output
export default ( obj => () => obj.x + obj.y )( { x: 1, y: 2 } );
```

Note property z has been discarded in the output.

2. Break object properties apart with scopes

Where the object is never used as a whole (only individual properties accessed by name), each property can be split into a separate scope var.

```js
// Input
const obj = { x: 1, y: 2, z: 3 };
export default {
    getX: () => obj.x,
    setX: v => obj.x = v,
    getY: () => obj.y,
    setY: v => obj.y = v
};

// Output
const scope1 = ( x => [ () => x, v => x = v ] )( 1 ),
    scope2 = ( y => [ () => y, v => y = v ] )( 2 );
export default {
    getX: scope1[0],
    setX: scope1[1],
    getY: scope2[0],
    setY: scope2[1]
};
```

getX() + setX() can be code-split into a separate file from getY() + setY().

3. Break object properties apart with object wrappers

Where the object is never used as a whole (only individual properties accessed by name), each property can be wrapped in a separate object.

```js
// Input - same as (2)
const obj = { x: 1, y: 2, z: 3 };
export default {
    getX: () => obj.x,
    setX: v => obj.x = v,
    getY: () => obj.y,
    setY: v => obj.y = v
};

// Output
const objX = { x: 1 },
    objY = { y: 2 };
export default {
    getX: ( objX => () => objX.x )( objX ),
    setX: ( objX => v => objX.x = v )( objX ),
    getY: ( objY => () => objY.y )( objY ),
    setY: ( objY => v => objY.y = v )( objY )
};
```

Using 2 wrapper objects is slightly more verbose than the output from optimizations (1) or (2), but more code-splittable than either. getX(), setX(), getY() and setY() could each be in separate files, with objX and objY split into separate common files.

4. Reduce to static values

Where a property is read-only (never written to in any functions serialized), the property can be reduced to a static value.

```js
// Input
const obj = { x: 1, y: 2, z: 3 };
export default {
    getX: () => obj.x,
    getY: () => obj.y
};

// Output
export default {
    getX: ( x => () => x )( 1 ),
    getY: ( y => () => y )( 2 )
};
```

This is completely code-splittable. It's more efficient than any of the other approaches above, but only works if obj.x and obj.y are read-only.

Optimization killers

None of these optimizations can be used if:

- Object is used standalone e.g. const objCopy = obj; or fn( obj )
- Object properties are accessed with dynamic lookup e.g. obj[ name ]
- Object is passed as this in a method call e.g. obj.getX() and .getX() uses this
- Property is a getter/setter
- Property is not defined, so access will fall through to the object's prototype
- Property may be deleted by code elsewhere (delete obj.x), so a later access may fall through to the object's prototype
- An eval() has access to the object in scope (no way to know ahead of time how the object will be used)

Tempting to think one could still apply optimization (3) in cases of undefined properties by defining the object wrapper as objX = Object.create( originalObjectPrototype ). However, this won't work as it's possible the object's prototype is altered later with Object.setPrototypeOf().

It's impossible to accurately detect any changes made to the object with Object.defineProperty() - which could change property values, or change properties to getters/setters. However, this isn't a problem - the call to Object.defineProperty( obj ) would involve using the object standalone, and so would prevent optimization due to restriction (1) above.

ESM

These optimizations would also have the effect of tree-shaking ESM (#53). ESM is transpiled to CommonJS in Livepack's Babel plugin, prior to being run or serialized:

```js
// Input
import { createElement } from 'react';
export default () => createElement( 'div', null, 'Hello!' );

// Transpiled to (before code runs)
const _react = require('react');
module.exports = () => _react.createElement( 'div', null, 'Hello!' );
```

Consequently, when this function is serialized, the whole of the _react object is in scope and is serialized, whereas all we actually need is the .createElement property. Optimization (4) (the most efficient one) would apply, except in the case of export let, where the value of the export can be changed dynamically in a function (pretty rare case).

Difficulties

I can foresee several difficulties implementing this:

- Which optimization (if any) to apply cannot be determined until the entire app has been serialized, to know whether (a) any optimization killer applies and (b) whether properties are read-only or not.
- Where a property is called as a method e.g. _react.createElement(), createElement()'s code must be analysed to see if it uses this. Optimizations can only be used if it doesn't.
- Both of the above mean serialization of function scope vars needs to happen later than at present, to avoid serializing the whole object if only one property will be included in the output. This will complicate identifying circular references.

Tempting to think that functions accessing a read-only object property can be optimized to access only that property even if another function accesses the object whole. However, that's not possible, as the function accessing the object whole could use Object.defineProperty() to redefine that property - so it's actually not read-only at all.

```js
const O = Object, d = O['define' + 'Property'];
function write(o, n, v) { d( o, n, { value: v } ); }

const obj = { x: 1, y: 2 };
export default {
    getX() { return obj.x; },
    setX(v) { write(obj, 'x', v); }
};
```

setX() writes to obj.x but it's not possible through static analysis to detect that it will do this. So we can't know if obj.x is read-only or not. You can optimize getX() only if the only other use of obj is via dynamic property lookup (obj[ name ]) and not within an assignment (obj[ name ] = ...). Detecting read-only properties will also need to detect assignment via destructuring e.g. ({a: obj.x} = {a: 123}). Two-phase serialization now has its own issue: #426
2025-04-01T06:39:56.674252
2019-08-27T12:32:32
485781540
{ "authors": [ "overlookmotel" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:9444", "repo": "overlookmotel/react-async-ssr", "url": "https://github.com/overlookmotel/react-async-ssr/issues/33" }
gharchive/issue
Broken with React 16.9.0 With React/ReactDOM 16.9.0 all tests relating to rehydration on client are failing. Am not sure what the cause is. Have locked dependency to 16.8.x for now and put a note in README not to use 16.9.0. Fixed in v0.5.2.
2025-04-01T06:39:56.677029
2020-07-04T05:36:30
650831764
{ "authors": [ "sean1093", "tkforce" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:9445", "repo": "overpartylab/cocktails-guide-book", "url": "https://github.com/overpartylab/cocktails-guide-book/pull/30" }
gharchive/pull-request
Feature/#28 enable fuzzy search

Enable fuzzy search. Major changes: enable fuzzy search in the cocktail search function (change: onClickSearch); change the GA script to load after the UI body is rendered, in order not to block the user in a slow network environment. Library added: "fuse.js": "6.4.0" https://www.npmjs.com/package/fuse.js Nice!
2025-04-01T06:39:56.689564
2022-07-23T18:16:48
1315717187
{ "authors": [ "daleglass" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:9446", "repo": "overte-org/tivolicloud", "url": "https://github.com/overte-org/tivolicloud/issues/169" }
gharchive/issue
Import 369da2d262: Merge remote-tracking branch 'origin/master' (by Caity)

This is an automated proposal to look at a commit made by Caity and import it into Overte.

Commit: 369da2d262d7da45373da7cf89b375eeeb846c24
Author: Caity
Date: Tue, 04 Feb 2020 18:47
Merge remote-tracking branch 'origin/master'

Stats:

| Filename | Lines | Added | Removed | Lines in blame |
| --- | --- | --- | --- | --- |
| .gitlab-ci.yml | 92 | 92 | 0 | ⚠ 0 |
| docker/.gitignore | 1 | 1 | 0 | ⚠ 0 |
| docker/Dockerfile | 22 | 22 | 0 | ⚠ File gone |
| docker/digitalocean.json | 36 | 36 | 0 | ⚠ File gone |
| docker/docker-compose.yml | 21 | 21 | 0 | ⚠ File gone |
| docker/modify-domain-port.py | 32 | 32 | 0 | ⚠ File gone |
| docker/supervisor.conf | 59 | 59 | 0 | ⚠ File gone |
| 7 files | 263 | 263 | 0 | 0 |

To work on this, please assign the issue to yourself, then look at the commit and decide whether this would be a good addition to Overte.

- If the commit is useful, tag it with "Tivoli: Keep", and keep it open until it's merged.
- If the commit is not useful, tag it with "Tivoli: Discard", and close it.
- If the commit is not useful right now, but might be later, tag it with "Tivoli: Maybe later", and close it.
- If it's hard to decide, tag it with "Tivoli: Discuss", and keep it open.

Useful commits should be submitted as a PR against Overte. Tag this issue in the PR, so that it's automatically closed once the PR is merged. You can cherry-pick this issue with this command: git cherry-pick 369da2d262d7da45373da7cf89b375eeeb846c24

Duplicate of #149
2025-04-01T06:39:56.724252
2015-09-02T20:15:56
104562531
{ "authors": [ "dopeboy", "owais" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:9447", "repo": "owais/django-webpack-loader", "url": "https://github.com/owais/django-webpack-loader/issues/12" }
gharchive/issue
Issue with creating production bundles

Hi, Thanks very much for publishing this and the supplementary guide. It is very very helpful. For webpack.prod.config.js, I had to remove the last curly brace. I also had to add the following two lines:

```js
var webpack = require('webpack');
var BundleTracker = require('webpack-bundle-tracker');
```

When I run ./node_modules/.bin/webpack --config webpack.prod.config.js it expects additional parameters. Am I doing something wrong?

Can you share full config and full traceback please?

I started with what is in your blog post here at the bottom under 'webpack.prod.config.js'. Running ./node_modules/.bin/webpack --config webpack.prod.config.js I got:

```
/home/manish/Work/mundaii/webpack.prod.config.js:6
  new BundleTracker({filename: './webpack-stats-prod.json'})
  ^
ReferenceError: BundleTracker is not defined
    at Object.<anonymous> (/home/manish/Work/mundaii/webpack.prod.config.js:6:10)
```

Adding var BundleTracker = require('webpack-bundle-tracker'); gets me:

```
/home/manish/Work/mundaii/webpack.prod.config.js:12
  new webpack.DefinePlugin({
  ^
ReferenceError: webpack is not defined
    at Object.<anonymous> (/home/manish/Work/mundaii/webpack.prod.config.js:12:7)
```

Adding var webpack = require('webpack'); gets me:

```
webpack 1.12.0
Usage: https://webpack.github.io/docs/cli.html
...
```

Final webpack.prod.config.js looks like:

```js
var config = require('./webpack.config.js');
var webpack = require('webpack');
var BundleTracker = require('webpack-bundle-tracker');

config.output.path = require('path').resolve('./assets/dist');
config.output.pathName = '/production/path/to/bundle/directory'; // This will override the url generated by django's staticfiles

config.plugins = [
  new BundleTracker({filename: './webpack-stats-prod.json'}),
  // removes a lot of debugging code in React
  new webpack.DefinePlugin({
    'process.env': {
      'NODE_ENV': JSON.stringify('production')
  }}),
  // keeps hashes consistent between compilations
  new webpack.optimize.OccurenceOrderPlugin(),
  // minifies your code
  new webpack.optimize.UglifyJsPlugin({
    compressor: {
      warnings: false
    }
  })
];
```

You'll have to add module.exports = config; to the bottom of the production config file. I've updated the blog post. Thanks for pointing this out.

It works now, thanks @owais. FYI - you're still missing var BundleTracker = require('webpack-bundle-tracker'); in your updated blog post.
2025-04-01T06:39:56.743592
2023-06-27T15:37:37
1777253740
{ "authors": [ "larsyencken" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:9448", "repo": "owid/etl", "url": "https://github.com/owid/etl/pull/1275" }
gharchive/pull-request
:hammer: Support strict primary key enforcement This change causes a warning to happen any time you save a table without a primary key. It also allows a strict mode in which the warning becomes an exception. Motivation: We are trying to move more metadata to the indicator level. The dimensions an indicator has are a really important piece of metadata that is make-or-break for automatically collecting, indexing and reusing the indicators. How it will be used: The Buildkite tasks that build production data and do the nightly full build will not be run in strict mode. However, PRs will be run in strict mode. This is to avoid any downtime for the ETL, whilst strongly encouraging future changes to have a primary key. For more background on where I think we should go with dimensions, have a read here: https://www.notion.so/owid/2023-06-27-Proposal-for-dimensions-in-the-ETL-9e1a26fec3b94ad2a33ca8fab14b090a?pvs=4 It's a good idea. The only annoying thing is that, in practice, we are always setting the index before saving, and then resetting the index after loading. In an ideal world working with multi-index tables would be easy. But overall I think it's a safe approach. @pabloarosado Have a read of the proposal I wrote in Notion as well. I think we should move to something simpler, which is just using dim_ in front of dimension columns. E.g. dim_country, dim_year. It's a common convention that's also pretty self-explanatory. We could even support new and old ways of doing this in a backwards-compatible way.
2025-04-01T06:39:56.745733
2023-10-16T13:15:40
1945198170
{ "authors": [ "pabloarosado" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:9449", "repo": "owid/etl", "url": "https://github.com/owid/etl/pull/1793" }
gharchive/pull-request
Update CO2 dataset Minor changes in GCB metadata. Fix ~issue with African consumption-based emissions, and~(this was handled in a previous PR) issue with Palau emissions. Update owid_co2 dataset. Archive unused steps (and update country_profiles dependencies to use the latest GCB dataset). Hey @lucasrodes I'm going to merge this PR to avoid blocking other things. There aren't any big changes, but please have a look whenever you have a few minutes, thanks.
2025-04-01T06:39:56.942200
2021-04-20T07:06:14
862488863
{ "authors": [ "kulmann", "pascalwengerter", "wkloucek" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:9450", "repo": "owncloud/ocis", "url": "https://github.com/owncloud/ocis/pull/1938" }
gharchive/pull-request
Release web 3.0.0 ~DO NOT MERGE This is a WIP PR bringing web-3.0.0, but it's still at the web-v3.0.0-rc2. If it looks good in oCIS we'll release web-v3.0.0 and update it in this PR again.~ It does now feature the official web-v3.0.0 release Description This PR pulls the assets of the web-v3.0.0 release and updates the accounts and settings service according to the recent changes in the owncloud design system v6.0.1. Related Issue Fixes https://github.com/owncloud/ocis/issues/1927 Motivation and Context Bring the new web ui to its most recent version. How Has This Been Tested? CI Types of changes [x] New feature (non-breaking change which adds functionality) Checklist: [x] Code changes needs a rebase after #1941 in order to get the pipeline green needs a rebase after #1941 in order to get the pipeline green I'll take care, thanks 🤝
2025-04-01T06:39:56.949927
2022-05-16T06:50:03
1236701959
{ "authors": [ "mmattel", "wkloucek" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:9451", "repo": "owncloud/ocis", "url": "https://github.com/owncloud/ocis/pull/3802" }
gharchive/pull-request
Bugfix: Fix multiple configuration environment variables for the storage-users extension Description We've fixed multiple environment variable configuration options for the storage-users extension: STORAGE_USERS_GRPC_ADDR was used to configure both the address of the http and grpc server. This resulted in a failing startup of the storage-users extension if this config option is set, because the service tries to double-bind the configured port (one time for each of the http and grpc server). You can now configure the grpc server's address with the environment variable STORAGE_USERS_GRPC_ADDR and the http server's address with the environment variable STORAGE_USERS_HTTP_ADDR STORAGE_USERS_S3NG_USERS_PROVIDER_ENDPOINT was used to configure the permissions service endpoint for the S3NG driver and was therefore renamed to STORAGE_USERS_S3NG_PERMISSIONS_ENDPOINT It's now possible to configure the permissions service endpoint for all storage drivers with the environment variable STORAGE_USERS_PERMISSION_ENDPOINT, which was previously only used by the S3NG driver. WARNING: this could be considered a breaking change Related Issue Needed for https://github.com/owncloud/ocis-charts/pull/43 to start the storage-users http and grpc servers listening on all interfaces Motivation and Context fix the config How Has This Been Tested? locally Screenshots (if appropriate): Types of changes [x] Bug fix (non-breaking change which fixes an issue) [ ] New feature (non-breaking change which adds functionality) [x] Breaking change (fix or feature that would cause existing functionality to change) [ ] Technical debt [ ] Tests only (no source changes) Checklist: [x] Code changes [ ] Unit tests added [ ] Acceptance tests added [ ] Documentation ticket raised: @micbar is this considered a breaking change? For the STORAGE_USERS_S3NG_USERS_PROVIDER_ENDPOINT change, we could remain backwards compatible. All others are not breaking (STORAGE_USERS_GRPC_ADDR can not be configured by anyone currently because the service refuses to start) I guess that this will change the yaml/env file output - therefore docs relevant. Just hooking in so we can trigger a docs build.
2025-04-01T06:39:56.968250
2016-03-28T13:45:09
143980945
{ "authors": [ "NDuma", "jeff-hykin", "owocki", "rmendes900", "rubik" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:9452", "repo": "owocki/pytrader", "url": "https://github.com/owocki/pytrader/issues/20" }
gharchive/issue
add more data as inputs: volume, bid/ask spread, RSI/MACD/other derivative metrics, social sentiment analysis (ref: #13 #8). I've used Ichimoku in my manual trading. They are very powerful. Have you looked at all into using Social Mention or Google Alerts for social sentiment analysis? Sifting and reacting to viral bursts of social approval could provide large gains with minimal trading. Nice @jeff-hykin, I didn't check the other issues. I will start to follow all of them in here. @owocki I already had filled the form, will you create a group or will we stay here for now? @rmendes900 just sent your invite (Just saving this for future use) Here's a potential data source @darcy mentioned on slack https://www.quandl.com/data/BCHAIN?keyword=bitcoin Quandl looks super easy to integrate, there's also a Python library: https://www.quandl.com/help/python https://github.com/owocki/pytrader/pull/75 https://github.com/owocki/pytrader/pull/76
2025-04-01T06:39:56.970473
2024-04-20T22:33:50
2254746405
{ "authors": [ "oxalica", "timotheyca" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:9453", "repo": "oxalica/async-ffi", "url": "https://github.com/oxalica/async-ffi/pull/22" }
gharchive/pull-request
Remove FfiWaker from RUST_WAKER_VTABLE

- Only FfiWakerBase is used by vtable functions (as it's the stable ABI)
- In general, the pointer isn't guaranteed to (and is expected, at times, not to) point to FfiWaker

Sorry for the late response. Thanks!
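For readers unfamiliar with the pattern being described, here is a simplified sketch of the prefix-struct idea - these are illustrative types, not async-ffi's real definitions:

```rust
// Sketch only: a stable-ABI base struct that every waker layout begins with.
#[repr(C)]
struct FfiWakerVTable {
    // vtable functions only ever receive the stable base type
    wake: unsafe extern "C" fn(*const FfiWakerBase),
}

#[repr(C)]
struct FfiWakerBase {
    vtable: *const FfiWakerVTable,
}

// One possible concrete waker: its first field is the base, so a pointer to
// it can be passed where *const FfiWakerBase is expected. Other callers may
// use a different concrete layout, which is why generic vtable code must
// never assume the pointer is this exact type.
#[repr(C)]
struct FfiWakerLocal {
    base: FfiWakerBase,
    refcount: usize,
}

unsafe extern "C" fn wake_local(ptr: *const FfiWakerBase) {
    // Only valid because *we* created this waker as FfiWakerLocal;
    // code that didn't create the waker must stick to FfiWakerBase.
    let waker = ptr as *const FfiWakerLocal;
    let _ = unsafe { (*waker).refcount };
}

fn main() {
    static VTABLE: FfiWakerVTable = FfiWakerVTable { wake: wake_local };
    let waker = FfiWakerLocal {
        base: FfiWakerBase { vtable: &VTABLE },
        refcount: 1,
    };
    unsafe { ((*waker.base.vtable).wake)(&waker.base) };
}
```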
2025-04-01T06:39:56.974846
2023-12-17T22:32:20
2045403517
{ "authors": [ "Dunqing", "camc314" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:9454", "repo": "oxc-project/oxc", "url": "https://github.com/oxc-project/oxc/pull/1712" }
gharchive/pull-request
feat(linter) no double comparisons

https://rust-lang.github.io/rust-clippy/master/index.html#/double_comparisons

Current dependencies on/for this PR:

- main
- PR #1710
- PR #1712 👈

This stack of pull requests is managed by Graphite. It looks like we can make this rule support auto-fix.
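For context, the upstream clippy rule linked above flags comparison pairs that collapse into a single operator; a rough sketch of the patterns (written in Rust to match the clippy docs - oxc's port applies the same idea to JavaScript):

```rust
fn main() {
    let (x, y) = (1, 2);

    // Patterns the double_comparisons lint flags, with their simplifications:
    let _ = x == y || x < y;  // better written as: x <= y
    let _ = x == y || x > y;  // better written as: x >= y
    let _ = x < y || x > y;   // better written as: x != y
    let _ = x <= y && x >= y; // better written as: x == y

    println!("{} {}", x, y);
}
```

Since each flagged pattern has exactly one equivalent single comparison, an auto-fix is a straightforward rewrite, which is presumably why it is suggested above.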
2025-04-01T06:39:57.012633
2024-01-24T11:03:05
2098016460
{ "authors": [ "oysteijo" ], "license": "BSD-3-Clause", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:9455", "repo": "oysteijo/simd_neuralnet", "url": "https://github.com/oysteijo/simd_neuralnet/pull/68" }
gharchive/pull-request
Refactorization of the optimizer code. This cleans up some code around the optimizer. It breaks the API. See #67. OK! It is a bit messy with the build system. But so be it!
2025-04-01T06:39:57.013804
2023-12-14T12:04:08
2041559023
{ "authors": [ "k-wall", "ozangunalp" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:9456", "repo": "ozangunalp/kafka-native", "url": "https://github.com/ozangunalp/kafka-native/pull/127" }
gharchive/pull-request
Improve Kerberos auth test stability Change the KerberosContainer wait strategy from log-based to listening on port 88. Remove the keytabs created in test-resources; the test now copies them to the target/test-classes/kerberos dir. Fixes #121 @ozangunalp lgtm. thank you for the investigation and producing the fix.
2025-04-01T06:39:57.026678
2020-04-27T17:00:42
607704841
{ "authors": [ "denisVitonis", "ozanmora" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:9457", "repo": "ozanmora/ci_log_query", "url": "https://github.com/ozanmora/ci_log_query/issues/1" }
gharchive/issue
The Ci_query_log doesn't save operation logs for the UPDATE, DELETE, INSERT actions

Hi ozanmora, First, congratulations on the creation of this hook. I implemented this hook in my system and it worked perfectly, but the UPDATE, INSERT and DELETE actions are not recorded in the log file. Is it something in the Codeigniter config? See u ozanmora

Hello @denisVitonis, I noticed the same problem. I have no idea where exactly the reason lies. But I am thinking of updating this code. When I solve the issue, I will note it here. If you can find a solution to this issue, I would be glad if you can share it with me. Thanks

Dear Ozan, I managed to solve it in another way, however still using your hook. First I created a helper with the function:

```php
// CODE IN helper FILE
function log_queries_2($sql)
{
    $filepath = APPPATH . 'logs/Query-log-' . date('Y-m-d') . '.php';
    $handle = fopen($filepath, "a+");
    fwrite($handle, $sql . " \n Execution Time: " . date("Y-m-d H:i:s") . "\n\n");
    fclose($handle);
}
```

After this, go to system/database/DB_driver.php, find the function named "query" and put the code below at the top, inside the function. It may not be the best of good practices but it works perfectly.

```php
// CODE IN DB_driver.php FILE
log_queries_2($sql);
```

This solution is not the right approach, because if you want to be able to update Codeigniter, you should never make changes under the system folder. I have made some code fixes but this problem is still not resolved. I'm still investigating this problem. If I can find a solution, I want to update it as soon as possible.

I solved this problem. Actually, this was not an error. The code runs correctly and actually writes INSERT, UPDATE and DELETE queries to the log file. However, this code only works on "post_controller" (called immediately after your controller is fully executed). You can read the details of this from the link. You are probably doing a redirect after performing INSERT, UPDATE or DELETE operations, like me. The controller cannot be completed if it contains a redirect. No database queries can be written to the log file because the operation could not be completed. P.S.: My English is not very good. I apologize for this.
2025-04-01T06:39:57.031828
2023-10-14T13:19:45
1943257992
{ "authors": [ "Arkhein6", "ozdemirburak" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:9458", "repo": "ozdemirburak/full-name-generator", "url": "https://github.com/ozdemirburak/full-name-generator/issues/13" }
gharchive/issue
Add Polish Names and Surnames

The repository is missing Polish names and surnames. Follow the guide below to add them.

🚀 How to Contribute: Fork the repository. Add names and surnames for your country following the format provided below. Submit a Pull Request with your changes. Please include a reliable source for the names and surnames you're adding, preferably a public database or a reputable website.

📄 Format:

For names (src/names/pl.ts):

```ts
const polishNames = {
  0: [
    // Male names - Add the URL of the names source here
    'Name1', 'Name2', 'Name3', ...
  ],
  1: [
    // Female names - Add the URL of the names source here
    'Name1', 'Name2', 'Name3', ...
  ]
}

export default polishNames;
```

For surnames (src/surnames/pl.ts):

```ts
const polishSurnames = [
  // Add the URL of the surnames source here
  'Surname1', 'Surname2', 'Surname3', ...
];

export default polishSurnames;
```

📌 Important Notes: Ensure that the names and surnames you add are common and not specific to a small group. Add 50 names for both males and females, and 50 surnames. Organize the names with 10 entries per row. Avoid adding names that might be offensive or inappropriate. Ensure you're not violating any copyright or data privacy rules.

I would like to work on this issue. Sure, actually, there's no need to ask. As long as anyone follows the guidelines, I'm OK with merging any pull request.
2025-04-01T06:39:57.039530
2020-10-04T21:55:50
714410855
{ "authors": [ "asad-awadia", "ozeidan" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:9459", "repo": "ozeidan/fuzzy-patricia", "url": "https://github.com/ozeidan/fuzzy-patricia/issues/1" }
gharchive/issue
Does fuzzy matching work? Hey, I am trying to get the fuzzy matching part working, but it doesn't seem to be giving me any results. I have <PHONE_NUMBER> and <PHONE_NUMBER> in the trie, and I want the query 123 656 to return those two numbers, but instead it returns nil. Am I misunderstanding what fuzzy searching is here? @ozeidan

Hello. I am not sure if this implementation adheres to the usual definition of fuzzy matching, but my definition is: the characters of the search term have to occur in the matching string in the same order, but not necessarily in one piece. So searching for grch will match gosearch, but grhc won't. I am not quite sure about the definition of fuzzy matching behind your examples. Should the query match those strings because all the characters in the query are also present in the strings? @ozeidan

Fuzzy matching to me means that the input query is "close enough" to the strings in the tree, i.e. the "edit distance" isn't too far out. If the trie has <PHONE_NUMBER>, then the search query <PHONE_NUMBER> should return the string, because it is fuzzily the same, but <PHONE_NUMBER> should not, because it's really far apart. Does that make sense?

Ah, I see what you mean. So you are talking about something like the Levenshtein distance? As of now, the "fuzzy" matching here works in a very different way (maybe we should call it something else then). I'll try to think about how we could implement the Levenshtein distance (or some similar heuristic). We'll have to find something that works well with the trie structure; as far as I remember, the performance of the search depends on being able to eliminate a lot of branches of the trie very early on.
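For readers trying to reconcile the two definitions in this thread, here is a minimal Python sketch (not from the library itself, which is written in Go) of the subsequence-style matching the maintainer describes:

```python
def subsequence_match(query: str, candidate: str) -> bool:
    """True if every character of `query` appears in `candidate`
    in the same order, though not necessarily contiguously."""
    it = iter(candidate)
    # `ch in it` advances the iterator until ch is found, so order matters.
    return all(ch in it for ch in query)

# "grch" matches "gosearch", but "grhc" does not:
assert subsequence_match("grch", "gosearch")
assert not subsequence_match("grhc", "gosearch")
```

Under this definition, "123 656" has no chance of matching "<PHONE_NUMBER>" unless its characters appear in order, which is quite different from an edit-distance notion of closeness.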
2025-04-01T06:39:57.042912
2023-04-29T23:37:04
1689719268
{ "authors": [ "leandroschabarum" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:9460", "repo": "ozmap/ozlogger", "url": "https://github.com/ozmap/ozlogger/pull/4" }
gharchive/pull-request
Release/0.1.2

Release notes:
- Addition of time() and timeEnd() methods for measuring code execution times.
- Added formatting options for text and json outputs.
- Changed the configuration for the init() method in order to give more flexibility when choosing the output targets and their formats.
- Log levels can be updated at runtime to increase/decrease logging verbosity with the OZLOGGER_LEVEL environment variable.
- Colored output must be enabled with the OZLOGGER_COLORS environment variable.

Resolves #3
2025-04-01T06:39:57.066149
2019-01-27T13:37:03
403551709
{ "authors": [ "BenTaub", "p1c2u", "strongbugman", "timoilya" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:9461", "repo": "p1c2u/openapi-spec-validator", "url": "https://github.com/p1c2u/openapi-spec-validator/issues/54" }
gharchive/issue
jsonschema 3.0+ support

Hi, jsonschema has been updated to 3.0+, and it seems to have some breaking changes:

```
.eggs/openapi_spec_validator-0.2.4-py3.6.egg/openapi_spec_validator/__init__.py:7: in <module>
    from openapi_spec_validator.factories import JSONSpecValidatorFactory
.eggs/openapi_spec_validator-0.2.4-py3.6.egg/openapi_spec_validator/factories.py:5: in <module>
    from openapi_spec_validator.generators import (
.eggs/openapi_spec_validator-0.2.4-py3.6.egg/openapi_spec_validator/generators.py:12: in <module>
    class SpecValidatorsGeneratorFactory:
.eggs/openapi_spec_validator-0.2.4-py3.6.egg/openapi_spec_validator/generators.py:19: in SpecValidatorsGeneratorFactory
    'properties': _validators.properties_draft4,
E   AttributeError: module 'jsonschema._validators' has no attribute 'properties_draft4'
```

It was fixed with #59 in version 0.2.5.

I have the same problem on Python 3.6.7 with openapi-spec-validator==0.2.6 and jsonschema==3.0.1:

```
MacBook-Pro-ilya:projects ilya$ pip -V
pip 19.0.3 from /Users/ilya/.pyenv/versions/3.6.7/lib/python3.6/site-packages/pip (python 3.6)
MacBook-Pro-ilya:projects ilya$ pip show openapi_spec_validator
Name: openapi-spec-validator
Version: 0.2.6
Summary: UNKNOWN
Home-page: https://github.com/p1c2u/openapi-spec-validator
Author: Artur Maciag
Author-email: <EMAIL_ADDRESS>
License: Apache License, Version 2.0
Location: /Users/ilya/.pyenv/versions/3.6.7/lib/python3.6/site-packages
Requires: pathlib, PyYAML, jsonschema, six
Required-by:
MacBook-Pro-ilya:projects ilya$ pip show jsonschema
Name: jsonschema
Version: 3.0.1
Summary: An implementation of JSON Schema validation for Python
Home-page: https://github.com/Julian/jsonschema
Author: Julian Berman
Author-email: <EMAIL_ADDRESS>
License: UNKNOWN
Location: /Users/ilya/.pyenv/versions/3.6.7/lib/python3.6/site-packages
Requires: setuptools, pyrsistent, six, attrs
Required-by: openapi-spec-validator, jsonmerge
MacBook-Pro-ilya:projects ilya$ /Users/ilya/.pyenv/versions/3.6.7/bin/python
Python 3.6.7 (default, Mar 13 2019, 14:00:09)
[GCC 4.2.1 Compatible Apple LLVM 10.0.0 (clang-1<IP_ADDRESS>)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> from openapi_spec_validator import validate_v3_spec
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/Users/ilya/.pyenv/versions/3.6.7/lib/python3.6/site-packages/openapi_spec_validator/__init__.py", line 7, in <module>
    from openapi_spec_validator.factories import JSONSpecValidatorFactory
  File "/Users/ilya/.pyenv/versions/3.6.7/lib/python3.6/site-packages/openapi_spec_validator/factories.py", line 5, in <module>
    from openapi_spec_validator.generators import (
  File "/Users/ilya/.pyenv/versions/3.6.7/lib/python3.6/site-packages/openapi_spec_validator/generators.py", line 12, in <module>
    class SpecValidatorsGeneratorFactory:
  File "/Users/ilya/.pyenv/versions/3.6.7/lib/python3.6/site-packages/openapi_spec_validator/generators.py", line 19, in SpecValidatorsGeneratorFactory
    'properties': _validators.properties_draft4,
AttributeError: module 'jsonschema._validators' has no attribute 'properties_draft4'
>>>
```

@strongbugman, do you still have that problem?

@timoilya openapi-spec-validator 0.2.6 has requirement jsonschema<3, but you'll have jsonschema 3.0.1, which is incompatible.

@strongbugman, this issue is closed, but jsonschema >3 is not supported in version 0.2.5: https://github.com/p1c2u/openapi-spec-validator/issues/54#issuecomment-467098215

I am having the same issue: openapi-spec-validator 0.2.6 and jsonschema 3.0.1 delivering the following error: AttributeError: module 'jsonschema._validators' has no attribute 'properties_draft4'. Does @strongbugman's comment mean I should step the spec validator back to a version lower than 0.2.5?

Yes, or you can create a PR to fix the compatibility problem.

@strongbugman, please review PR https://github.com/p1c2u/openapi-spec-validator/pull/72 and approve it if you can.
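Until a compatible release is available, one way to fail fast is to guard the import; a minimal sketch (the version numbers just mirror this thread, and jsonschema 3.x does expose __version__, though this is not official guidance from the maintainers):

```python
# Fail fast with a clear message instead of the obscure AttributeError above.
import jsonschema

major = int(jsonschema.__version__.split(".")[0])
if major >= 3:
    raise RuntimeError(
        "openapi-spec-validator 0.2.6 requires jsonschema<3; "
        "pin jsonschema<3 or upgrade openapi-spec-validator"
    )

from openapi_spec_validator import validate_v3_spec  # noqa: E402
```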
2025-04-01T06:39:57.104626
2016-08-21T22:01:36
172343982
{ "authors": [ "dwarring" ], "license": "Artistic-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:9462", "repo": "p6-pdf/perl6-PDF", "url": "https://github.com/p6-pdf/perl6-PDF/issues/8" }
gharchive/issue
This module has gone Glacial - on the back-burner

This module was mostly built using the pre-2016 ufo build tool, which is no longer available. It's now reliant on rakudo 2016.xx precompilation, which isn't yet fast to load or run: it now takes quite a few minutes to run the test suite, with most of the time spent in precompilation and/or loading. At this stage, I'm only regressing this module occasionally. There seems to be a fair bit of scope for optimization, both in rakudo and within this module. I may look at this again towards the end of 2016 or early in 2017.

Running much better now. Typically taking ~90 sec to run the test suite on the latest Rakudo 2016.11.

Picking it up again :-)
2025-04-01T06:39:57.146893
2023-02-25T21:55:26
1599853507
{ "authors": [ "pacak", "ysndr" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:9463", "repo": "pacak/bpaf", "url": "https://github.com/pacak/bpaf/pull/172" }
gharchive/pull-request
Positional bool

Fixes #171

@ysndr, can you take a look and see if this does what you want? If it does, I'll make a new release.

Todo:
- [ ] Add anywhere top-level annotation
- [ ] Add box top-level annotation
- [ ] Add catch for anywhere and use it in failing test cases
- [ ] Update documentation

Seems so. Found a different issue though: remember that I try to parse --setBool <key> <value>?

```rust
#[derive(Debug, Clone, Bpaf)]
#[bpaf(adjacent)]
#[allow(unused)]
pub struct ConfigSetBool {
    /// Set <key> to <bool>
    #[bpaf(long("setBool"))]
    set_bool: (),
    /// Configuration key
    #[bpaf(positional("key"))]
    key: String,
    /// Configuration Value (bool)
    #[bpaf(positional("bool"))] // << this seems to work now, hurray :)
    value: bool,
}
```

I use this ^ contraption for that. If I run `xyz config --setBool key notabool` I'm greeted with:

```
ERROR: --setBool is not expected in this context
configure user parameters
```

Which, yes, tells me somewhat that it can't parse --setBool, but the way it does so does not really help me do it right. What I would expect is it saying which (positional) argument failed to parse. This also seems to happen when value: u32, but I'm not sure if that is maybe a regression.

adjacent makes it so blocks must stick together; you also need anywhere:

```rust
fn try_this() -> impl Parser<ConfigSetBool> {
    config_set_bool().anywhere()
}
```

If this works, I'll add anywhere to the derive macro.

No, that doesn't change anything as far as I can tell.

Unrelated... wouldn't it be great if I could write

```rust
fn try_this<T: ToParser>() -> impl Parser<T> {
    T::to_parser().anywhere()
}
```

instead of the same function three times ;)

Hmm... It sort of works for me:

```rust
use bpaf::*;

#[derive(Debug, Clone, Bpaf)]
#[bpaf(adjacent)]
#[allow(unused)]
pub struct ConfigSetBool {
    /// Set <key> to <bool>
    #[bpaf(long("setBool"))]
    set_bool: (),
    /// Configuration key
    #[bpaf(positional("key"))]
    key: String,
    /// Configuration Value (bool)
    #[bpaf(positional("bool"))] // << this seems to work now, hurray :)
    value: bool,
}

fn main() {
    let x = config_set_bool().anywhere().many().to_options().run();
    todo!("{:?}", x);
}
```

```
elakelaiset% cargo run --release --example set_bool -- --setBool banana false --setBool durian true
    Finished release [optimized] target(s) in 0.02s
     Running `target/release/examples/set_bool --setBool banana false --setBool durian true`
thread 'main' panicked at 'not yet implemented: [ConfigSetBool { set_bool: (), key: "banana", value: false }, ConfigSetBool { set_bool: (), key: "durian", value: true }]', examples/set_bool.rs:21:5
```

Try `--setBool banana 123`.

Hmm... I see, I would call this a bug in anywhere: it should ignore missing arguments but should retain parse errors. Will fix, might take a bit.

> instead of the same function three times ;)

Without functional dependencies or type families it seems hard to implement the trait constraint ToParser... I blame Rust :)

> I blame Rust :)

Fair enough :D Though I'd be perfectly happy if deriving Bpaf for something would implement a trait for them that just exposes their derived parser as a Box<dyn Parser>. I've been doing that manually so far, but having access to that directly would be handy in such cases...

> try --setBool banana 123

Parsing correct values works for me too. Pushed something; it seems to work for me as expected with invalid values. I'll have to add some tests and possibly deal with more corner cases, so the release might take a bit.

> Though I'd be perfectly happy if deriving Bpaf for something would implement a trait for them that just exposes their derived parser as a Box..

Hmm... Currently you can get a boxed parser with construct!(parser). I can also expose it as a method and add the method to the top-level bpaf annotation:

```rust
#[derive(Debug, Clone, Bpaf)]
#[bpaf(adjacent, anywhere, box)]
struct Foo { ... }
```

in this case will give you fn foo() -> Box<dyn Parser>... Any feedback on the article? Back in 40 minutes.

Well, I kinda just want to have a generic way to get to the derived parser. What I currently use is a macro like this:

```rust
macro_rules! parseable {
    ($type:ty, $parser:ident) => {
        impl crate::commands::package::Parseable for $type {
            fn parse() -> bpaf::parsers::ParseBox<Self> {
                let p = $parser();
                bpaf::construct!(p)
            }
        }
    };
}
```

that I just run manually for every type where I would want that capability. It gets me what I want, but if the Bpaf derive macro could do that automatically, it would just be more convenient.

> Any feedback on the article?

Reading now.

> Any feedback on the article?

Shall I review there, or do you want to discuss outside?

> shall I review there, or do you want to discuss outside?

Up to you. I'm available at Discord, WhatsApp, Google Chat and Signal. Probably Google Chat is going to be the easiest if you use it: <EMAIL_ADDRESS>

> google chat

Oh, they try it again? :D I'll reach out tomorrow I guess, or commit the review comments first; it's not quite the middle of the day for me, quite the opposite. Though it is interesting to see this project unravelled a bit 👍🏼

> Though I'd be perfectly happy if deriving Bpaf for something would implement a trait for them that just exposes their derived parser as a Box.. Hmm... Currently you can get a boxed parser with construct!(parser), I can also expose it as method and add the method to top level bpaf annotation... in this case will give you fn foo() -> Box<dyn Parser>...

Oh, I don't know if we're talking of different things then.. I mean something like `<Foo as AsParseBox>::parse_box() -> Box<dyn Parser>`.

Related to https://github.com/pacak/bpaf/pull/170; contains breaking changes in behavior.

Sorry, this will be a bit of a brain dump, I figured, after writing this up. tl;dr: `#[positional]` field: bool seems to work, and error messages now talk about the value part, not something else. Nice progress! Errors are something else; give it a read. I'll make it a separate issue so as not to block this one.

So with

```rust
#[derive(Bpaf, Debug)]
struct Xyz {
    #[bpaf(long, switch)]
    bool_flag: bool,
    #[bpaf(long, argument)]
    bool_opt: bool,
    #[bpaf(positional)]
    bool_arg: bool,
}
```

these work as expected:

```
$ cargo run -- --bool-opt true true --bool-flag
$ cargo run -- --bool-opt true --bool-flag true
$ cargo run -- --bool-opt true true
```

these fail as expected:

```
$ cargo run -- --bool-opt true nottrue
Couldn't parse "nottrue": provided string was not `true` or `false`
```

(though the error does not point out which of the two values failed to parse; why are these not treated the same?)

With

```rust
#[derive(Debug, Clone, Bpaf)]
#[bpaf(adjacent)]
#[allow(unused)]
pub struct ConfigSetBool {
    /// Set <key> to <bool>
    #[bpaf(long("setBool"))]
    set_bool: (),
    /// Configuration key
    #[bpaf(positional("key"))]
    key: String,
    /// Configuration Value (bool)
    #[bpaf(positional("bool"))] // << this seems to work now, hurray :)
    value: bool,
}
```

it fails correctly:

```
$ cargo run -- --setBool key tru
Couldn't parse "tru": provided string was not `true` or `false`
```

boxed seems to work from what I can tell, even though it might not be what I proposed, hh.

UX discussion (I think this better go into a separate issue, but I put it here for context):

```
$ cargo run -- --bool-opt tru tru
Couldn't parse "tru": provided string was not `true` or `false`
```
Use a positive sentence instead, and say which "tru" failed (see below).

```
$ cargo run -- --bool-opt true true --bool-fla
No such flag: `--bool-fla`, did you mean `--bool-flag`?
```
This one is formatted differently.

```
$ cargo run -- true --bool-flag --bool-flag --bool-opt true true
--bool-flag is not expected in this context
```
Another one, this time without ticks at all :p. The error message should, if possible, show which argument went wrong:

```
$ cargo run -- --bool-opt tru true
Invalid argument:
    got:      --bool-opt tru
    expected: --bool-opt (true | false)
Type '<command> <subcommand> --help' to see all arguments
```

```
$ cargo run -- --bool-opt true true --bool-fla
No such flag:
    got:      --bool-fla
    expected: [ --bool-flag ]   < Only if --bool-flag is not already specified
Type '<command> <subcommand> --help' to see all arguments
```

```
$ cargo run -- true --bool-flag --bool-flag --bool-opt true true
No such flag:
    got:      --bool-flag
    expected: (none)
Type '<command> <subcommand> --help' to see all arguments
```

or even better:

```
$ cargo run -- true --bool-flag --bool-flag --bool-opt true true
Duplicate flag:
    got:      --bool-flag --bool-flag
    expected: [ --bool-flag ]
Type '<command> <subcommand> --help' to see all arguments
```

Though using positionals for multi-value options is still a bit awkward and looks like a hack in the help:

```
Usage: [--bool-flag] --bool-opt ARG <ARG> --setBool <key> <bool>

Available positional items:
    <key>   Configuration key
    <bool>  Configuration Value (bool)

Available options:
    --bool-flag
    --bool-opt <ARG>
```

FWIW, pushed a branch that renders proper help for multivariable arguments: multiarg.

Positional bool is now supported. For purely positional items, something like this should work in the recent master; adjacent is not needed. I also made some changes to make the error messages more user-friendly:

```rust
#[derive(Debug, Clone, Bpaf)]
#[bpaf(anywhere, box)]
struct Foo {
    /// Set <key> to <bool>
    #[bpaf(long("setBool"))]
    set_bool: (),
    /// Configuration key
    #[bpaf(positional("key"))]
    key: String,
    /// Configuration Value (bool)
    #[bpaf(positional("bool"))] // << this seems to work now, hurray :)
    value: bool,
}
```

Took out the UX discussion parts into a separate issue. Going to merge this.
2025-04-01T06:39:57.177751
2023-09-28T04:57:42
1916699724
{ "authors": [ "shibumi", "wetterjames4" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:9464", "repo": "package-url/packageurl-go", "url": "https://github.com/package-url/packageurl-go/pull/64" }
gharchive/pull-request
Qualifier values do not have first character converted to lowercase

FromString panics for some inputs. This change fixes the issue and adds a test to ensure it works.

I think we can close this PR here, because the commits are already in: https://github.com/package-url/packageurl-go/pull/65
2025-04-01T06:39:57.196074
2022-05-18T10:09:04
1239729415
{ "authors": [ "TomasTomecek" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:9465", "repo": "packit/hardly", "url": "https://github.com/packit/hardly/pull/67" }
gharchive/pull-request
DistGitMRHandler: fetch tags from upstream source-git

We clone the source-git fork by default, and the fork does not necessarily have the upstream tags that are needed in the update-dist-git process. This commit fetches tags from upstream (the repo the MR is opened against) after the repo is cloned (initialization of LocalProject).

Fixes https://github.com/packit/hardly/issues/61

RELEASE NOTES BEGIN
When a dist-git MR is being created, hardly now fetches tags from the upstream source-git repo as they may not be present in the contributor's fork.
RELEASE NOTES END

> RELEASE NOTES BEGIN

Do we already want to include hardly changes in blog posts? We haven't so far. We should start doing that at some point, especially when we get people who will use hardly and rely on it :)
2025-04-01T06:39:57.226467
2017-02-15T00:22:38
207675822
{ "authors": [ "ccprog", "davedevelopment" ], "license": "bsd-3-clause", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:9466", "repo": "padraic/mockery", "url": "https://github.com/padraic/mockery/issues/697" }
gharchive/issue
Use existing __toString methods for argument and Matcher representation

If an error prints the arguments of an expectation, expected or actual, it represents them via the Mockery::formatArgument() method. For objects, this will return a simple object(className). This is unnecessarily obscure when the object has a __toString() method; the Matcher interface, for example, requires one.

@ccprog thanks for reporting, see #698 for a partial fix.
2025-04-01T06:39:57.240373
2014-07-24T08:31:46
38607546
{ "authors": [ "bweston92", "saschadube" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:9467", "repo": "pagekit/pagekit", "url": "https://github.com/pagekit/pagekit/issues/111" }
gharchive/issue
Form builder

Is there a form builder that can be used in extensions for creating forms? Something like https://github.com/illuminate/html

We released the Pagekit Beta today. I'm closing this issue because the code base has completely changed. Please open a new issue if the problem still exists.
2025-04-01T06:39:57.252207
2017-04-02T14:02:02
218766426
{ "authors": [ "mhaagens", "mikinho", "oschaaf" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:9469", "repo": "pagespeed/ngx_pagespeed", "url": "https://github.com/pagespeed/ngx_pagespeed/issues/1401" }
gharchive/issue
ngx_pagespeed not working with reverse proxy

I can't get ngx_pagespeed to work when reverse proxying node.js. It works when just serving files normally, so is there something I'm missing here?

```nginx
server {
    listen 80 default_server;
    listen [::]:80 default_server;
    server_name _;

    pagespeed on;
    pagespeed FileCachePath /var/cache/ngx_pagespeed/;
    pagespeed RewriteLevel PassThrough;
    pagespeed EnableCachePurge on;
    pagespeed PurgeMethod PURGE;
    pagespeed EnableFilters prioritize_critical_css;

    location /assets/ {
        expires 10d;
        alias /var/www/demo/assets/;
    }

    location / {
        proxy_pass http://localhost:3000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }

    location ~ "\.pagespeed\.([a-z]\.)?[a-z]{2}\.[^.]{10}\.[^.]+" {
        add_header "" "";
    }

    location ~ "^/pagespeed_static/" { }

    location ~ "^/ngx_pagespeed_beacon$" { }
}
```

Could you share how you tested if ngx_pagespeed is working?

Ensure you are not performing any gzip or other compression from within Node.js.
2025-04-01T06:39:57.272780
2021-06-21T14:02:53
926251738
{ "authors": [ "alazymeme", "zneix" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:9470", "repo": "pajbot/pajbot", "url": "https://github.com/pajbot/pajbot/pull/1297" }
gharchive/pull-request
Migrated to v2 emote cdn endpoint

Pull request checklist:
- [x] CHANGELOG.md was updated, if applicable
- [x] Documentation in docs/ or install-docs/ was updated, if applicable

Using v2 paths is required for animated emotes, as can be seen in the following example.

Example of an animated emote:
- https://static-cdn.jtvnw.net/emoticons/v1/emotesv2_e0dd54510bc94631899bf64b097680a2/3.0
- https://static-cdn.jtvnw.net/emoticons/v2/emotesv2_e0dd54510bc94631899bf64b097680a2/default/dark/3.0
- https://static-cdn.jtvnw.net/emoticons/v2/emotesv2_e0dd54510bc94631899bf64b097680a2/static/dark/3.0

default can be replaced with static to always get a non-animated variant.

I also moved already hardcoded emotes in the source in case the v1 endpoint gets shut down later.
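As an illustration of the v2 URL scheme described above, here is a small Python sketch; the helper is hypothetical (not part of pajbot), but the URL template mirrors the example links in this PR:

```python
# Hypothetical helper illustrating the Twitch v2 emote CDN URL scheme above.
V2_TEMPLATE = (
    "https://static-cdn.jtvnw.net/emoticons/v2/{emote_id}/{kind}/dark/{scale}"
)

def emote_url(emote_id: str, scale: str = "3.0", animated: bool = True) -> str:
    """Build a v2 CDN URL; "default" serves the animation when one exists,
    while "static" always serves a non-animated frame."""
    kind = "default" if animated else "static"
    return V2_TEMPLATE.format(emote_id=emote_id, kind=kind, scale=scale)

print(emote_url("emotesv2_e0dd54510bc94631899bf64b097680a2"))
print(emote_url("emotesv2_e0dd54510bc94631899bf64b097680a2", animated=False))
```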
2025-04-01T06:39:57.274524
2021-05-17T18:46:54
893602285
{ "authors": [ "fg-j", "thitch97" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:9471", "repo": "paketo-buildpacks/go", "url": "https://github.com/paketo-buildpacks/go/issues/427" }
gharchive/issue
Release Paketo Go buildpack

Release in the week of May 17th if relevant changes or dependency updates are merged.

What steps did you take to close this issue? What resources did you use? How long did you spend on this task this week? Answer in a comment.

Released v0.7.0

In an effort to de-clutter the project board, we are moving away from recurring issues such as this one. This task will instead be added to a checklist of tasks to be completed on a weekly basis.
2025-04-01T06:39:57.277492
2021-03-05T16:08:01
823222936
{ "authors": [ "arjun024", "ryanmoran", "thitch97" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:9472", "repo": "paketo-buildpacks/nodejs", "url": "https://github.com/paketo-buildpacks/nodejs/issues/344" }
gharchive/issue
Nominate Emily Johnson (@emmjohnson) as Nodejs Contributor

In accordance with the Paketo Buildpacks Governance document, I am nominating Emily Johnson (@emmjohnson) as a contributor to the Nodejs language family. Emily is a regular contributor to the buildpack family (https://github.com/paketo-buildpacks/nodejs/pull/321, https://github.com/paketo-buildpacks/node-engine/pull/240, etc.).

+1

With a supermajority vote of maintainers in the affirmative, the nomination is considered to be approved. Welcome to the team of Nodejs Contributors, @emmjohnson!

@paketo-buildpacks/steering-committee Could you please add Emily to the contributors team on GitHub?

@emmjohnson Congrats!

@arjun024 Done!
2025-04-01T06:39:57.282413
2020-08-23T06:35:40
684126945
{ "authors": [ "badri", "fg-j" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:9473", "repo": "paketo-buildpacks/php", "url": "https://github.com/paketo-buildpacks/php/issues/253" }
gharchive/issue
Buildpack support for Drupal 8

I'm attempting to build Drupal 8 using the PHP buildpacks. Here's a breakdown of the steps I'm doing.

1. Scaffold Drupal 8:

```
composer create-project "drupal/recommended-project:^8" drupal
```

2. Add the following buildpack.yml:

```yaml
---
php:
  version: 7.4.*
  webserver: nginx
  webdirectory: web
```

3. Build the container image:

```
pack build -b gcr.io/paketo-buildpacks/php drupal-8 --builder paketobuildpacks/builder:full
```

4. Run the new image:

```
docker run --interactive --tty --env PORT=8080 --publish 8080:8080 drupal-8
```

The trouble is, the build process creates a symlink for the vendor directory, and running composer install after that updates autoload.php thus:

```php
<?php

/**
 * @file
 * Includes the autoloader created by Composer.
 *
 * This file was generated by drupal-scaffold.
 *.
 * @see composer.json
 * @see index.php
 * @see core/install.php
 * @see core/rebuild.php
 * @see core/modules/statistics/statistics.php
 */

return require __DIR__ . '//layers/paketo-buildpacks_php-composer/php-composer-packages/vendor/autoload.php';
```

which breaks the autoload sequence. When I edit it back to what it was:

```php
<?php

/**
 * @file
 * Includes the autoloader created by Composer.
 *
 * This file was generated by drupal-scaffold.
 *.
 * @see composer.json
 * @see index.php
 * @see core/install.php
 * @see core/rebuild.php
 * @see core/modules/statistics/statistics.php
 */

return require __DIR__ . '/../vendor/autoload.php';
```

it works fine. I am not sure why we create symlinks and then run composer install again. Copying the vendor directory instead of symlinking it would help, although there might be some rationale behind symlinking it that I'm not aware of. Running composer install after symlinking updates the autoload.php files to reflect the new location of the vendor directory. Happy to triage any approaches/fixes and contribute back to the buildpack, and thanks for the awesome work.

@paketo-buildpacks/php-maintainers This has been open for a bit. Any update on this? Does the workaround described in the replies to #366 also apply to this use case?
2025-04-01T06:39:57.286015
2022-07-01T21:14:06
1291835790
{ "authors": [ "joshuatcasey" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:9474", "repo": "paketo-buildpacks/poetry", "url": "https://github.com/paketo-buildpacks/poetry/pull/63" }
gharchive/pull-request
Remove dependencies

Demonstration implementation of paketo-buildpacks/rfcs#214

- SBOM now generated from installation directory
- Will no longer reuse layers
- Installed version now retrieved using 'poetry --version'
- Only exact version matching is supported

Checklist
- [x] I have viewed, signed, and submitted the Contributor License Agreement.
- [x] I have linked issue(s) that this PR should close using keywords or the Github UI (See docs)
- [x] I have added an integration test, if necessary.
- [x] I have reviewed the styleguide for guidance on my code quality.
- [x] I'm happy with the commit history on this PR (I have rebased/squashed as needed).

Closing in favor of #75
2025-04-01T06:39:57.294090
2021-04-08T17:28:12
853692161
{ "authors": [ "fg-j", "genevieve", "robdimsdale", "ryanmoran" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:9475", "repo": "paketo-buildpacks/ruby", "url": "https://github.com/paketo-buildpacks/ruby/issues/567" }
gharchive/issue
Set RAILS_LOG_TO_STDOUT to true/1

What happened?

- What were you attempting to do? Build a rails app that sends logs to stdout.
- What did you expect to happen? I expected RAILS_LOG_TO_STDOUT to be enabled so that our environment config could pick it up and use it to set the appropriate logger to direct logs to stdout for the container.
- What was the actual behavior? Please provide log output, if possible. It is not enabled; the logs were directed to a file.

Build Configuration

- What platform (pack, kpack, tekton buildpacks plugin, etc.) are you using? Please include a version.
- What buildpacks are you using? Please include versions.
- What builder are you using? If custom, can you provide the output from pack inspect-builder <builder>?
- Can you provide a sample app or relevant configuration (buildpack.yml, nginx.conf, etc.)?

Checklist
- [ ] I have included log output.
- [ ] The log output includes an error message.
- [ ] I have included steps for reproduction.

@genevieve @paketo-buildpacks/ruby-maintainers This has been open for a bit. Is this still a need? Is there a workaround available?

It's not a bug, it's a feature request. Users routinely need to set this environment variable so that they can have their logs streamed to stdout in the container rather than the default location of a file inside the container. The workaround is just to set that environment variable when starting the container, but they shouldn't need to, as having the buildpack set it by default would be pretty obviously better.

Makes sense. Until the buildpack sets this environment variable, users can get this behaviour today by setting BPE_RAILS_LOG_TO_STDOUT=true in the build environment. This ensures that RAILS_LOG_TO_STDOUT=true automatically when the container starts (see docs).

@robdimsdale I think this logic would go into the rails-assets buildpack. It's the most closely related, if not completely aligned, with the intent of this feature. We'll want to set this variable as a default using https://pkg.go.dev/github.com/paketo-buildpacks/packit/v2#Environment.Default. This would allow users to still override this value if needed. I'm probably not going to get to this just yet, so anyone else who is interested is welcome to pick it up!
2025-04-01T06:39:57.302614
2018-07-30T04:28:18
345607150
{ "authors": [ "EternityForest", "atheros", "paladin-t" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:9476", "repo": "paladin-t/my_basic", "url": "https://github.com/paladin-t/my_basic/issues/22" }
gharchive/issue
Eval a line of code without resetting the interpreter?

So I was writing some bindings to control Arduino functions with this, and thought it would be really cool if you could do something like suspending a running interpreter, evaluating a line of code, and then returning to normal execution. This would let you interact with a running program during development, change variables, etc., and would be great for using it like the early home computers that you operated through a BASIC prompt. Is this possible with the current API?

The interpreter is not designed to work as a REPL, but it's possible to load and run without resetting the interpreter, e.g.:

```c
int main(int argc, char* argv[]) {
	struct mb_interpreter_t* bas = 0;

	mb_init();
	mb_open(&bas);
	mb_load_string(
		bas,
		"n = n + 1\n"
		"print \"entry \", n;\n",
		false
	);

	mb_load_string(bas, "a = 22", false);
	mb_run(bas, false);
	mb_load_string(bas, "b = 7", false);
	mb_run(bas, false);
	mb_load_string(bas, "c = a / b", false);
	mb_run(bas, false);
	mb_load_string(bas, "print c;", false);
	mb_run(bas, false);

	mb_close(&bas);
	mb_dispose();

	return 0;
}
```

You can see each mb_run is a top-down execution with the previous values preserved in the variables, although this is not reentrant. It's also possible to inspect variables with the mb_debug_get and mb_debug_set API, if that was the only interaction you were looking for.

Oh, thanks! Those two APIs actually cover 90% of what I'd like to do interactively. I started working on a fork to allow multiple parsing contexts and stacks per interpreter, so you can load multiple "threads" (probably with a Python-style GIL) and do a traditional reentrant REPL, but that's a fairly big project.

Nice work! I appreciate your efforts, and I believe it will help others a lot. Here's the dev roadmap:

1. Releasing current v1.2
2. Rewriting a new kernel as v2.0

So for the current branch, it's kinda feature-frozen. But it inspired me; I will consider adding a REPL for v2.0. I prefer to keep this open so others will find it.

Awesome! I'll probably be following this project for a while. I'm using it to allow remote code updates on an open source IoT platform I'm doing, and I might try to port it to a handheld game console at some point. It's a great language! Definitely one of the easiest interpreters to embed out there, and it doesn't use too much RAM on embedded systems.

Thanks! Looking forward to your sharing of your creations.

it's such a shame there is no REPL :(
2025-04-01T06:39:57.335397
2023-03-13T13:39:59
1621511519
{ "authors": [ "letmeinkvar", "palantirtech" ], "license": "apache-2.0", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:9477", "repo": "palantir/eclipse-typescript", "url": "https://github.com/palantir/eclipse-typescript/pull/363" }
gharchive/pull-request
Testing, please ignore. ajmshv This is a bug bounty test. Please do not approve this! ajmshv Thanks for your interest in palantir/eclipse-typescript, @letmeinkvar! Before we can accept your pull request, you need to sign our contributor license agreement - just visit https://cla.palantir.com/ and follow the instructions. Once you sign, I'll automatically update this pull request.
2025-04-01T06:39:57.344924
2017-02-10T10:51:03
206765228
{ "authors": [ "Shahor", "ajafff" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:9478", "repo": "palantir/tslint", "url": "https://github.com/palantir/tslint/issues/2197" }
gharchive/issue
Property inheritance from extended class

Bug Report

- TSLint version: 0.19.1
- TypeScript version: 2.1.5
- Running TSLint via: nodejs

The problem: Using Angular2, I have class A and class B extends A. In B.component.html I use a var story that is defined in class A. The linter doesn't like this (see result below).

TypeScript code being linted with tslint.json configuration:

```json
{
  "rulesDirectory": [
    "node_modules/codelyzer",
    "node_modules/tslint-eslint-rules/dist/rules"
  ],
  "rules": {
    "class-name": true,
    "comment-format": [true, "check-space"],
    "curly": true,
    "eofline": true,
    "forin": true,
    "indent": [true, "spaces"],
    "ter-indent": [true, 4],
    "label-position": true,
    "max-line-length": [true, 140],
    "member-access": false,
    "member-ordering": [true, "static-before-instance", "variables-before-functions"],
    "no-arg": true,
    "no-bitwise": true,
    "no-console": [true, "debug", "info", "time", "timeEnd", "trace"],
    "no-construct": true,
    "no-debugger": true,
    "no-duplicate-variable": true,
    "no-empty": false,
    "no-eval": true,
    "no-shadowed-variable": true,
    "no-string-literal": false,
    "no-switch-case-fall-through": true,
    "no-trailing-whitespace": true,
    "no-unused-expression": true,
    "no-use-before-declare": true,
    "no-var-keyword": true,
    "object-literal-sort-keys": false,
    "one-line": [true, "check-open-brace", "check-catch", "check-else", "check-whitespace"],
    "quotemark": [true, "single"],
    "radix": true,
    "semicolon": [true, "never"],
    "triple-equals": [true, "allow-null-check"],
    "typedef-whitespace": [true, {
      "call-signature": "nospace",
      "index-signature": "nospace",
      "parameter": "nospace",
      "property-declaration": "nospace",
      "variable-declaration": "nospace"
    }],
    "variable-name": false,
    "whitespace": [true, "check-branch", "check-decl", "check-operator", "check-separator", "check-type"],
    "directive-selector": [true, "attribute", "cms", "camelCase"],
    "component-selector": [true, "element", "cms", "kebab-case"],
    "use-input-property-decorator": true,
    "use-output-property-decorator": true,
    "use-host-property-decorator": true,
    "no-input-rename": true,
    "no-output-rename": true,
    "use-life-cycle-interface": true,
    "use-pipe-transform-interface": true,
    "component-class-suffix": true,
    "directive-class-suffix": true,
    "no-access-missing-member": true,
    "templates-use-public": true,
    "invoke-injectable": true
  }
}
```

Actual behavior:

```
src/app/story/titles-and-settings/titles-and-settings.component.html[12, 22]: The method "story" that you're trying to access does not exist in the class declaration.
```

Expected behavior: the linter knows about the inherited property "story" from the parent class.

You're in the wrong repository. The issue you're searching for is over there: https://github.com/mgechev/codelyzer/issues/191

@ajafff Thank you sir, sorry for putting this in the wrong place :( I'm closing this one.
2025-04-01T06:39:57.350939
2017-08-28T16:46:59
253388089
{ "authors": [ "JoshuaKGoldberg", "ajafff", "dylanpyle", "paulTchaa8" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:9479", "repo": "palantir/tslint", "url": "https://github.com/palantir/tslint/issues/3174" }
gharchive/issue
I don't see this behavior in the docs or referenced in any other issues; sorry if I missed something!

I can confirm that this is a bug. What's happening here: the disabled range goes up to the start of the next line. If the failure begins at the first character in that line, the error is erroneously disabled.

@ajafff Thanks for the quick fix! 🎉

Please help, dears. When launching ng new my-project, I got this. What could be the issue, and how can I solve it? I'm learning Angular. I've tried to install some of the other requested dependencies, with no effect.

@paulTchaa8 you'll want to file an issue on Angular on GitHub. That doesn't look like a TSLint issue.
2022-06-14T10:10:01
1270571767
{ "authors": [ "ThiefMaster", "northernSage" ], "license": "BSD-3-Clause", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:9480", "repo": "pallets-eco/cachelib", "url": "https://github.com/pallets-eco/cachelib/issues/156" }
gharchive/issue
Redis add() without timeout expires immediately

I fixed this in flask-caching some time ago, but it looks like it's broken here as well (and with flask-caching now relying on this, it's broken there again too): https://github.com/pallets-eco/flask-caching/pull/218

Thanks for letting me know, will write a fix this weekend :bowtie:
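For anyone trying to reproduce this, a minimal sketch of the reported symptom (assuming a local Redis on the default port and an affected cachelib version):

```python
# add() without an explicit timeout should store the key indefinitely,
# not expire it immediately; on buggy versions the first get() is None.
from cachelib import RedisCache

cache = RedisCache(host="localhost", port=6379)

cache.add("greeting", "hello")           # no timeout given
print(cache.get("greeting"))             # expected "hello"; buggy versions print None

cache.add("greeting2", "hello", timeout=60)
print(cache.get("greeting2"))            # an explicit timeout works as expected
```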
2025-04-01T06:39:57.400799
2019-05-02T20:52:58
439783230
{ "authors": [ "Cologler", "Ketzalkotal", "Xevion", "davidism", "jab", "xingheng" ], "license": "bsd-3-clause", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:9481", "repo": "pallets/click", "url": "https://github.com/pallets/click/issues/1287" }
gharchive/issue
option with bool type

```python
@click.command()
@click.option('--shout/--no-shout', default=False)
def info(shout):
    pass

info()
```

is OK, but

```python
@click.command()
@click.option('--shout/--no-shout', default=False, type=bool)
def info(shout):
    pass

info()
```

will raise a TypeError: Got secondary option for non boolean flag.

You need to use click.BOOL instead of bool, as per the docs: https://click.palletsprojects.com/en/7.x/parameters/#parameter-types

Oops, that's not right, sorry; deleting, and I will hopefully post a sensible response momentarily.

I've tracked this down to https://github.com/pallets/click/blob/c6042bf2607c5be22b1efef2e42a94ffd281434c/click/core.py#L1573. In this case type is not None, it's bool, so the condition fails and self.is_bool_flag incorrectly gets set to False. I can't immediately see why this is there (it just looks wrong at first glance), but blame says that line of code is 5 years old, which suggests there's more to it...

I am working on a click helper library (https://github.com/Cologler/click-anno-python), so I have done a lot of tests on click. I think this check is there because the --no- option makes this a special type of boolean flag automatically; it wouldn't make sense to assign a type to it.

I guess we could check if type is None or bool.

Huh, I'd have thought it made sense if using the /--no- spelling implied type=bool, so you didn't have to add that explicitly, but if you did, it would mean the same thing.

Taking a look at this.

Checked the Discord; it looks like KP has got it.

Same here. type=click.BOOL repros this bug, too. Temporarily removing the type resolved it.

The type argument only supports bool and not click.BOOL; should this be changed to support both? https://github.com/kporangehat/click/blob/c7bd8d47b44816c2999a12a0b4e52162e5c6994a/click/core.py#L1609
2019-08-29T19:17:02
487131617
{ "authors": [ "andersk", "davidism", "msmolens", "untitaker" ], "license": "bsd-3-clause", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:9482", "repo": "pallets/click", "url": "https://github.com/pallets/click/pull/1382" }
gharchive/pull-request
WIP: Set permissions for atomic open_file()

Previously, when open_file() was called with atomic=True, the target file's permissions were always set to 0600, i.e. readable and writable by the current user. These permissions come from the temporary file created by tempfile.mkstemp().

This commit changes an atomic open_file() call to set the permissions of the target file. If the target file already exists, then its current permissions are retained. Otherwise, the permissions respect the current umask.

Fixes #1376

I really appreciate the work you're putting into this, but this is way more complexity than I'm willing to add and support. Vendoring two copies of Python's tempfile module is not worth it. Does python-atomicwrites, suggested in #320, have this issue? cc @untitaker

@davidism Thanks for the review. I agree that vendoring the modules is heavy-handed and not a great way forward. I'll close this pull request for now. Note that python-atomicwrites also changes the permissions: https://github.com/untitaker/python-atomicwrites/issues/42

atomicwrites does NOT intentionally change permissions. The different permissions are a result of how the file write is done.

I took a more minimalist stab at this myself in #1400. @msmolens I copied your tests from here; hope that's fine.

@untitaker You can certainly argue that it "does not change permissions" because it internally works by creating a new file with different permissions and renaming it over the old file. But that argument is of little interest to a developer who wanted the permissions to stay the same.

I believe #1400 is a correct fix for this, and I am surprised that one exists. I would be happy to have this in atomicwrites; after all, its stated motto is to solve this rabbit hole once and for all. We could keep the impl in click simple and have atomicwrites as an optional dep for "better" behavior.
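For context, here is a minimal sketch of the permission-preserving approach the PR text describes; it is an illustration, not the actual click or atomicwrites implementation:

```python
# Write to a temp file in the target's directory, fix up its permission
# bits to match the existing file (or the umask for new files), then
# rename over the target atomically.
import os
import tempfile

def atomic_write(path: str, data: bytes) -> None:
    fd, tmp = tempfile.mkstemp(dir=os.path.dirname(path) or ".")
    try:
        if os.path.exists(path):
            # Existing file: retain its current permission bits.
            os.chmod(tmp, os.stat(path).st_mode & 0o777)
        else:
            # New file: 0666 filtered through the process umask.
            umask = os.umask(0)
            os.umask(umask)
            os.chmod(tmp, 0o666 & ~umask)
        os.write(fd, data)
    finally:
        os.close(fd)
    os.replace(tmp, path)  # atomic on POSIX; replaces the 0600 mkstemp default
```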
2025-04-01T06:39:57.409287
2019-03-21T22:40:35
423972774
{ "authors": [ "davidism", "pfabri", "rsyring" ], "license": "BSD-3-Clause", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:9483", "repo": "pallets/flask-sqlalchemy", "url": "https://github.com/pallets/flask-sqlalchemy/issues/706" }
gharchive/issue
Mixed-up names in docs explaining relationships

I believe that the section detailing relationships has a naming mix-up in the model definitions. At first there is:

```python
class Person(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    name = db.Column(db.String(50), nullable=False)
    addresses = db.relationship('Address', backref='person', lazy=True)
```

But further down it changes to this, where backrefs are explained in more detail:

```python
class User(db.Model):  # this should read: Person(db.Model)...
    id = db.Column(db.Integer, primary_key=True)
    name = db.Column(db.String(50), nullable=False)
    addresses = db.relationship('Address', lazy='select',
                                backref=db.backref('person', lazy='joined'))
```

I think this is quite misleading, and it's easy to think that this is a third table which somehow connects to the two others defined earlier. It certainly had me confused for a while.

@pfabri thank you for the report. We will get this fixed in the 2.4 release, which should be coming soon.

closed in #718
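For readers of this thread, the corrected second example would presumably look like this (a sketch of the fix the reporter suggests, not necessarily the final wording that landed in #718):

```python
# Same table as the first snippet: the second model keeps the Person name,
# so both docs examples describe one and the same mapping.
class Person(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    name = db.Column(db.String(50), nullable=False)
    addresses = db.relationship('Address', lazy='select',
                                backref=db.backref('person', lazy='joined'))
```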
2025-04-01T06:39:57.417586
2018-03-27T10:22:38
308909086
{ "authors": [ "MrLiupython", "davidism" ], "license": "BSD-3-Clause", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:9484", "repo": "pallets/flask", "url": "https://github.com/pallets/flask/issues/2673" }
gharchive/issue
teardown_request can't remove session's attribute

I use before_request to set a session attribute, and teardown_request to remove it. But only the first request lacks the session attribute in before_request; every later request still finds the session attribute in before_request. Why?

Sample code:

```python
from flask import Flask, session

app = Flask(__name__)
app.secret_key = 'sdf'

@app.route('/')
def index():
    return """sessoin: {}""".format(session['hello'])

@app.before_request
def before():
    if not session.get('hello'):
        session['hello'] = 'Hello!'
        print('before: add hello')
    else:
        print('beforeL: have', session['hello'])

@app.teardown_request
def teardown(exception):
    if session.get('hello'):
        session.pop('hello')
        print('teardown: delete hello')
    try:
        print(session['hello'])
    except:
        print('teardown: no have')
```

Output:

```
* Serving Flask app "app"
* Running on http://<IP_ADDRESS>:5000/ (Press CTRL+C to quit)
before: add hello
teardown: delete hello
teardown: no have
<IP_ADDRESS> - - [27/Mar/2018 18:17:35] "GET / HTTP/1.1" 200 -
beforeL: have Hello!
teardown: delete hello
teardown: no have
<IP_ADDRESS> - - [27/Mar/2018 18:17:35] "GET /favicon.ico HTTP/1.1" 404 -
beforeL: have Hello!
teardown: delete hello
teardown: no have
<IP_ADDRESS> - - [27/Mar/2018 18:17:35] "GET /favicon.ico HTTP/1.1" 404 -
beforeL: have Hello!
teardown: delete hello
teardown: no have
<IP_ADDRESS> - - [27/Mar/2018 18:17:43] "GET / HTTP/1.1" 200 -
beforeL: have Hello!
teardown: delete hello
teardown: no have
<IP_ADDRESS> - - [27/Mar/2018 18:17:46] "GET / HTTP/1.1" 200 -
beforeL: have Hello!
teardown: delete hello
teardown: no have
<IP_ADDRESS> - - [27/Mar/2018 18:17:47] "GET / HTTP/1.1" 200 -
beforeL: have Hello!
teardown: delete hello
teardown: no have
<IP_ADDRESS> - - [27/Mar/2018 18:17:50] "GET / HTTP/1.1" 200 -
```

- Python version: 3.6.3
- Flask version: 0.12.2
- Werkzeug version: 0.12.2

The response can't be modified in teardown. Use after_request instead. Use g to store data during a request, not session.
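Following the maintainer's advice, a minimal sketch of the reworked example (a hypothetical adaptation of the code above, not an official Flask snippet): teardown_request runs after the response, including its session cookie, has already been finalized, whereas after_request can still change it.

```python
# Mutate the session in after_request (changes still reach the cookie),
# and keep per-request scratch data on flask.g, which is discarded
# automatically at the end of every request.
from flask import Flask, g, session

app = Flask(__name__)
app.secret_key = 'sdf'

@app.route('/')
def index():
    return 'hello: {}'.format(g.get('hello'))

@app.before_request
def before():
    g.hello = 'Hello!'          # request-scoped, no cleanup needed

@app.after_request
def after(response):
    session.pop('hello', None)  # this removal is persisted to the cookie
    return response
```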
2025-04-01T06:39:57.419646
2017-11-02T19:25:21
270778465
{ "authors": [ "davidism", "faheel" ], "license": "BSD-3-Clause", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:9485", "repo": "pallets/flask", "url": "https://github.com/pallets/flask/pull/2509" }
gharchive/pull-request
Fixed typo in docs

Changed "when" to "how" as the subsequent text talks about "how to store data on the g object".

Thanks, but I'm completely rewriting the tutorial so this won't be relevant. BTW, the PR itself was fine, thanks for making it. If you run into any other documentation fixes, please submit them! 👍

Sure :+1:
2025-04-01T06:39:57.421336
2020-10-07T16:24:17
716676021
{ "authors": [ "davidism", "lalnuo" ], "license": "BSD-3-Clause", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:9486", "repo": "pallets/flask", "url": "https://github.com/pallets/flask/pull/3781" }
gharchive/pull-request
Raise an error if rule starts or ends with a space

This PR solves https://github.com/pallets/flask/issues/3780. After this PR is merged, creating routes like this will throw an error:

```python
@bp.route("/bar ")
@bp.route(" /bar")
```

See #3780 for close reason.
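A minimal sketch of the kind of validation this PR describes; this is illustrative only, and the code that actually landed in Flask may differ:

```python
# Reject URL rules with leading or trailing whitespace, which are almost
# always typos that silently produce unreachable routes.
def validate_rule(rule: str) -> str:
    if rule != rule.strip():
        raise ValueError(
            f"URL rule {rule!r} starts or ends with whitespace, "
            "which is almost certainly a mistake"
        )
    return rule

validate_rule("/bar")    # fine
validate_rule("/bar ")   # raises ValueError
```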
2025-04-01T06:39:57.466179
2012-02-16T23:11:24
3260405
{ "authors": [ "eads" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:9487", "repo": "pandaproject/panda", "url": "https://github.com/pandaproject/panda/issues/454" }
gharchive/issue
uWSGI application not found error after install on Ubuntu 11.10 Installing on Ubuntu 11.10, after running the install script I can run /opt/panda/manage.py runserver and voila, I get a functional Panda instance running. But when I go to localhost on port 80, I get this error: "uWSGI error: Python application not found". Will start digging in the logs! Crap, I lost track of this. Here's the log. I'm going to go ahead and try one more time presently. + echo 'PANDA installation beginning.' Building dependency tree... Reading state information... 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded. + wget -nv https://raw.github.com/pandaproject/panda/master/setup_panda/10periodic -O /etc/apt/apt.conf.d/10periodic 2012-02-16 16:22:27 URL:https://raw.github.com/pandaproject/panda/master/setup_panda/10periodic [230/230] -> "/etc/apt/apt.conf.d/10periodic" [1] + service unattended-upgrades restart + apt-get install --yes git openssh-server postgresql python2.7-dev libxml2-dev libxml2 libxslt1.1 libxslt1-dev nginx build-essential openjdk-6-jdk libpq-dev python-pip mercurial Reading package lists... Building dependency tree... Reading state information... build-essential is already the newest version. git is already the newest version. libxslt1-dev is already the newest version. libxslt1.1 is already the newest version. openssh-server is already the newest version. python2.7-dev is already the newest version. nginx is already the newest version. python-pip is already the newest version. libpq-dev is already the newest version. libxml2 is already the newest version. libxml2-dev is already the newest version. openjdk-6-jdk is already the newest version. postgresql is already the newest version. mercurial is already the newest version. 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded. + pip install uwsgi Requirement already satisfied (use --upgrade to upgrade): uwsgi in /usr/local/lib/python2.7/dist-packages Cleaning up... 
+ ln -s /etc/init.d/ssh /etc/rc2.d/S20ssh ln: creating symbolic link `/etc/rc2.d/S20ssh': File exists + ln -s /etc/init.d/ssh /etc/rc3.d/S20ssh ln: creating symbolic link `/etc/rc3.d/S20ssh': File exists + ln -s /etc/init.d/ssh /etc/rc4.d/S20ssh ln: creating symbolic link `/etc/rc4.d/S20ssh': File exists + ln -s /etc/init.d/ssh /etc/rc5.d/S20ssh ln: creating symbolic link `/etc/rc5.d/S20ssh': File exists + wget -nv http://mirror.uoregon.edu/apache//lucene/solr/3.4.0/apache-solr-3.4.0.tgz -O /opt/apache-solr-3.4.0.tgz 2012-02-16 16:24:28 URL:http://mirror.uoregon.edu/apache//lucene/solr/3.4.0/apache-solr-3.4.0.tgz [83290310/83290310] -> "/opt/apache-solr-3.4.0.tgz" [1] + cd /opt + tar -xzf apache-solr-3.4.0.tgz + mv apache-solr-3.4.0 solr + cp -r solr/example solr/panda + wget -nv https://raw.github.com/pandaproject/panda/master/setup_panda/solr.xml -O /opt/solr/panda/solr/solr.xml 2012-02-16 16:24:46 URL:https://raw.github.com/pandaproject/panda/master/setup_panda/solr.xml [378/378] -> "/opt/solr/panda/solr/solr.xml" [1] + mkdir /opt/solr/panda/solr/pandadata mkdir: cannot create directory `/opt/solr/panda/solr/pandadata': File exists + mkdir /opt/solr/panda/solr/pandadata/conf mkdir: cannot create directory `/opt/solr/panda/solr/pandadata/conf': File exists + mkdir /opt/solr/panda/solr/pandadata/lib mkdir: cannot create directory `/opt/solr/panda/solr/pandadata/lib': File exists + wget -nv https://raw.github.com/pandaproject/panda/master/setup_panda/data_schema.xml -O /opt/solr/panda/solr/pandadata/conf/schema.xml 2012-02-16 16:24:46 URL:https://raw.github.com/pandaproject/panda/master/setup_panda/data_schema.xml [3090/3090] -> "/opt/solr/panda/solr/pandadata/conf/schema.xml" [1] + wget -nv https://raw.github.com/pandaproject/panda/master/setup_panda/solrconfig.xml -O /opt/solr/panda/solr/pandadata/conf/solrconfig.xml 2012-02-16 16:24:47 URL:https://raw.github.com/pandaproject/panda/master/setup_panda/solrconfig.xml [4820/4820] -> "/opt/solr/panda/solr/pandadata/conf/solrconfig.xml" [1] + wget -nv https://raw.github.com/pandaproject/panda/master/setup_panda/panda.jar -O /opt/solr/panda/solr/pandadata/lib/panda.jar 2012-02-16 16:24:47 URL:https://raw.github.com/pandaproject/panda/master/setup_panda/panda.jar [1283/1283] -> "/opt/solr/panda/solr/pandadata/lib/panda.jar" [1] + mkdir /opt/solr/panda/solr/pandadata_test mkdir: cannot create directory `/opt/solr/panda/solr/pandadata_test': File exists + mkdir /opt/solr/panda/solr/pandadata_test/conf mkdir: cannot create directory `/opt/solr/panda/solr/pandadata_test/conf': File exists + mkdir /opt/solr/panda/solr/pandadata_test/lib mkdir: cannot create directory `/opt/solr/panda/solr/pandadata_test/lib': File exists + wget -nv https://raw.github.com/pandaproject/panda/master/setup_panda/data_schema.xml -O /opt/solr/panda/solr/pandadata_test/conf/schema.xml 2012-02-16 16:24:47 URL:https://raw.github.com/pandaproject/panda/master/setup_panda/data_schema.xml [3090/3090] -> "/opt/solr/panda/solr/pandadata_test/conf/schema.xml" [1] + wget -nv https://raw.github.com/pandaproject/panda/master/setup_panda/solrconfig.xml -O /opt/solr/panda/solr/pandadata_test/conf/solrconfig.xml 2012-02-16 16:24:47 URL:https://raw.github.com/pandaproject/panda/master/setup_panda/solrconfig.xml [4820/4820] -> "/opt/solr/panda/solr/pandadata_test/conf/solrconfig.xml" [1] + wget -nv https://raw.github.com/pandaproject/panda/master/setup_panda/panda.jar -O /opt/solr/panda/solr/pandadata_test/lib/panda.jar 2012-02-16 16:24:48 
URL:https://raw.github.com/pandaproject/panda/master/setup_panda/panda.jar [1283/1283] -> "/opt/solr/panda/solr/pandadata_test/lib/panda.jar" [1] + mkdir /opt/solr/panda/solr/pandadatasets mkdir: cannot create directory `/opt/solr/panda/solr/pandadatasets': File exists + mkdir /opt/solr/panda/solr/pandadatasets/conf mkdir: cannot create directory `/opt/solr/panda/solr/pandadatasets/conf': File exists + wget -nv https://raw.github.com/pandaproject/panda/master/setup_panda/datasets_schema.xml -O /opt/solr/panda/solr/pandadatasets/conf/schema.xml 2012-02-16 16:24:48 URL:https://raw.github.com/pandaproject/panda/master/setup_panda/datasets_schema.xml [2388/2388] -> "/opt/solr/panda/solr/pandadatasets/conf/schema.xml" [1] + wget -nv https://raw.github.com/pandaproject/panda/master/setup_panda/solrconfig.xml -O /opt/solr/panda/solr/pandadatasets/conf/solrconfig.xml 2012-02-16 16:24:48 URL:https://raw.github.com/pandaproject/panda/master/setup_panda/solrconfig.xml [4820/4820] -> "/opt/solr/panda/solr/pandadatasets/conf/solrconfig.xml" [1] + mkdir /opt/solr/panda/solr/pandadatasets_test mkdir: cannot create directory `/opt/solr/panda/solr/pandadatasets_test': File exists + mkdir /opt/solr/panda/solr/pandadatasets_test/conf mkdir: cannot create directory `/opt/solr/panda/solr/pandadatasets_test/conf': File exists + wget -nv https://raw.github.com/pandaproject/panda/master/setup_panda/datasets_schema.xml -O /opt/solr/panda/solr/pandadatasets_test/conf/schema.xml 2012-02-16 16:24:48 URL:https://raw.github.com/pandaproject/panda/master/setup_panda/datasets_schema.xml [2388/2388] -> "/opt/solr/panda/solr/pandadatasets_test/conf/schema.xml" [1] + wget -nv https://raw.github.com/pandaproject/panda/master/setup_panda/solrconfig.xml -O /opt/solr/panda/solr/pandadatasets_test/conf/solrconfig.xml 2012-02-16 16:24:48 URL:https://raw.github.com/pandaproject/panda/master/setup_panda/solrconfig.xml [4820/4820] -> "/opt/solr/panda/solr/pandadatasets_test/conf/solrconfig.xml" [1] + adduser --system --no-create-home --disabled-login --disabled-password --group solr The system user `solr' already exists. Exiting. + chown -R solr:solr /opt/solr + touch /var/log/solr.log + chown solr:solr /var/log/solr.log + wget -nv https://raw.github.com/pandaproject/panda/master/setup_panda/solr.conf -O /etc/init/solr.conf 2012-02-16 16:24:50 URL:https://raw.github.com/pandaproject/panda/master/setup_panda/solr.conf [285/285] -> "/etc/init/solr.conf" [1] + initctl reload-configuration + service solr start start: Job is already running: solr + adduser --system --no-create-home --disabled-login --disabled-password --group panda The system user `panda' already exists. Exiting. 
+ mkdir /var/run/uwsgi mkdir: cannot create directory `/var/run/uwsgi': File exists + chown panda:panda /var/run/uwsgi + wget -nv https://raw.github.com/pandaproject/panda/master/setup_panda/uwsgi.conf -O /etc/init/uwsgi.conf 2012-02-16 16:24:51 URL:https://raw.github.com/pandaproject/panda/master/setup_panda/uwsgi.conf [385/385] -> "/etc/init/uwsgi.conf" [1] + initctl reload-configuration + wget -nv https://raw.github.com/pandaproject/panda/master/setup_panda/nginx -O /etc/nginx/sites-available/panda 2012-02-16 16:24:51 URL:https://raw.github.com/pandaproject/panda/master/setup_panda/nginx [239/239] -> "/etc/nginx/sites-available/panda" [1] + ln -s /etc/nginx/sites-available/panda /etc/nginx/sites-enabled/panda ln: creating symbolic link `/etc/nginx/sites-enabled/panda': File exists + rm /etc/nginx/sites-enabled/default rm: cannot remove `/etc/nginx/sites-enabled/default': No such file or directory + service nginx restart Restarting nginx: nginx. + wget -nv https://raw.github.com/pandaproject/panda/master/setup_panda/pg_hba.conf -O /etc/postgresql/9.1/main/pg_hba.conf 2012-02-16 16:24:53 URL:https://raw.github.com/pandaproject/panda/master/setup_panda/pg_hba.conf [234/234] -> "/etc/postgresql/9.1/main/pg_hba.conf" [1] + service postgresql restart * Restarting PostgreSQL 9.1 database server ...done. + sudo -u postgres psql postgres + echo 'CREATE USER panda WITH PASSWORD '\''panda'\'';' ERROR: role "panda" already exists + sudo -u postgres createdb -O panda panda createdb: database creation failed: ERROR: database "panda" already exists + cd /opt + git clone git://github.com/pandaproject/panda.git panda fatal: destination path 'panda' already exists and is not an empty directory. + cd /opt/panda + pip install -r requirements.txt Downloading/unpacking git+https://github.com/onyxfish/django-ajax-uploader.git#egg=ajaxuploader (from -r requirements.txt (line 9)) Cloning https://github.com/onyxfish/django-ajax-uploader.git to /tmp/pip-2cpisv-build Running setup.py egg_info for package from git+https://github.com/onyxfish/django-ajax-uploader.git#egg=ajaxuploader Downloading/unpacking git+https://github.com/onyxfish/csvkit.git#egg=csvkit (from -r requirements.txt (line 10)) Cloning https://github.com/onyxfish/csvkit.git to /tmp/pip-Y06O5d-build Running setup.py egg_info for package from git+https://github.com/onyxfish/csvkit.git#egg=csvkit Downloading/unpacking git+https://github.com/toastdriven/django-tastypie.git@f51b94025#egg=tastypie (from -r requirements.txt (line 11)) Cloning https://github.com/toastdriven/django-tastypie.git (to f51b94025) to /tmp/pip-fA2v7s-build Could not find a tag or branch 'f51b94025', assuming commit. Running setup.py egg_info for package from git+https://github.com/toastdriven/django-tastypie.git@f51b94025#egg=tastypie Downloading/unpacking git+https://github.com/GoodCloud/django-longer-username.git@cdf0375ec5#egg=longerusername (from -r requirements.txt (line 17)) Cloning https://github.com/GoodCloud/django-longer-username.git (to cdf0375ec5) to /tmp/pip-MdHW3r-build Could not find a tag or branch 'cdf0375ec5', assuming commit. 
Running setup.py egg_info for package from git+https://github.com/GoodCloud/django-longer-username.git@cdf0375ec5#egg=longerusername Downloading/unpacking hg+https://bitbucket.org/ericgazoni/openpyxl/@134c257abd1e#egg=openpyxl (from -r requirements.txt (line 18)) Cloning hg https://bitbucket.org/ericgazoni/openpyxl/ (to revision 134c257abd1e) to /tmp/pip-LrGgvC-build Running setup.py egg_info for package from hg+https://bitbucket.org/ericgazoni/openpyxl/@134c257abd1e#egg=openpyxl warning: no files found matching '*.xml' under directory 'src/openpyxl/tests/test_data' warning: no files found matching '*.rels' under directory 'src/openpyxl/tests/test_data' warning: no files found matching '*.xlsx' under directory 'src/openpyxl/tests/test_data' warning: no files found matching 'CREDITS' Downloading/unpacking hg+https://bitbucket.org/bkroeze/django-keyedcache/@fa1f452a53f7#egg=django-keyedcache (from -r requirements.txt (line 20)) Cloning hg https://bitbucket.org/bkroeze/django-keyedcache/ (to revision fa1f452a53f7) to /tmp/pip-FPerem-build Running setup.py egg_info for package from hg+https://bitbucket.org/bkroeze/django-keyedcache/@fa1f452a53f7#egg=django-keyedcache Downloading/unpacking hg+https://bitbucket.org/bkroeze/django-livesettings/@a413c0205048#egg=django-livesettings (from -r requirements.txt (line 21)) Cloning hg https://bitbucket.org/bkroeze/django-livesettings/ (to revision a413c0205048) to /tmp/pip-R8c8m3-build Running setup.py egg_info for package from hg+https://bitbucket.org/bkroeze/django-livesettings/@a413c0205048#egg=django-livesettings zip_safe flag not set; analyzing archive contents... Installed /tmp/pip-R8c8m3-build/setuptools_hg-0.4-py2.7.egg Downloading/unpacking django==1.3 (from -r requirements.txt (line 1)) Running setup.py egg_info for package django Downloading/unpacking fabric==1.2.2 (from -r requirements.txt (line 2)) Running setup.py egg_info for package fabric warning: no previously-included files matching '*' found under directory 'docs/_build' warning: no files found matching 'fabfile.py' Downloading/unpacking psycopg2==2.4.1 (from -r requirements.txt (line 3)) Running setup.py egg_info for package psycopg2 warning: no files found matching '*.html' under directory 'doc' warning: no files found matching '*.js' under directory 'doc' warning: no files found matching '*' under directory 'doc/html' no previously-included directories found matching 'doc/src/_build' warning: no files found matching 'MANIFEST' Downloading/unpacking django-celery==2.3.3 (from -r requirements.txt (line 4)) Running setup.py egg_info for package django-celery no previously-included directories found matching 'bin/*.pyc' no previously-included directories found matching 'tests/*.pyc' no previously-included directories found matching 'docs/*.pyc' no previously-included directories found matching 'contrib/*.pyc' no previously-included directories found matching 'djcelery/*.pyc' no previously-included directories found matching 'docs/.build' no previously-included directories found matching 'examples/*.pyc' Downloading/unpacking kombu-sqlalchemy==1.1.0 (from -r requirements.txt (line 5)) Running setup.py egg_info for package kombu-sqlalchemy warning: no files found matching '*' under directory 'djkombu' Downloading/unpacking python-dateutil==1.5.0 (from -r requirements.txt (line 6)) Running setup.py egg_info for package python-dateutil Downloading/unpacking lxml==2.3.1 (from -r requirements.txt (line 7)) Running setup.py egg_info for package lxml Building lxml version 2.3.1. 
Building without Cython. Using build configuration of libxslt 1.1.26 Building against libxml2/libxslt in the following directory: /usr/lib Requirement already satisfied (use --upgrade to upgrade): httplib2==0.7.1 in /usr/lib/python2.7/dist-packages (from -r requirements.txt (line 8)) Downloading/unpacking django-compressor==1.1 (from -r requirements.txt (line 12)) Running setup.py egg_info for package django-compressor Downloading/unpacking BeautifulSoup==3.2.0 (from -r requirements.txt (line 13)) Running setup.py egg_info for package BeautifulSoup Downloading/unpacking sphinx==1.0.7 (from -r requirements.txt (line 14)) Running setup.py egg_info for package sphinx no previously-included directories found matching 'doc/_build' Downloading/unpacking requests==0.8.3 (from -r requirements.txt (line 15)) Running setup.py egg_info for package requests Downloading/unpacking south==0.7.3 (from -r requirements.txt (line 16)) Running setup.py egg_info for package south Downloading/unpacking xlrd==0.7.1 (from -r requirements.txt (line 19)) Running setup.py egg_info for package xlrd Downloading/unpacking boto==2.0 (from -r requirements.txt (line 22)) Running setup.py egg_info for package boto Downloading/unpacking argparse==1.2.1 (from csvkit==0.4.2->-r requirements.txt (line 10)) Running setup.py egg_info for package argparse warning: no previously-included files matching '*.pyc' found anywhere in distribution warning: no previously-included files matching '*.pyo' found anywhere in distribution warning: no previously-included files matching '*.orig' found anywhere in distribution warning: no previously-included files matching '*.rej' found anywhere in distribution no previously-included directories found matching 'doc/_build' no previously-included directories found matching 'env24' no previously-included directories found matching 'env25' no previously-included directories found matching 'env26' no previously-included directories found matching 'env27' Downloading/unpacking sqlalchemy==0.6.6 (from csvkit==0.4.2->-r requirements.txt (line 10)) Running setup.py egg_info for package sqlalchemy warning: no files found matching '*.jpg' under directory 'doc' no previously-included directories found matching 'doc/build/output' Downloading/unpacking mimeparse (from django-tastypie==0.9.11->-r requirements.txt (line 11)) Running setup.py egg_info for package mimeparse Requirement already satisfied (use --upgrade to upgrade): pycrypto>=1.9 in /usr/lib/python2.7/dist-packages (from fabric==1.2.2->-r requirements.txt (line 2)) Downloading/unpacking paramiko>=1.7.6 (from fabric==1.2.2->-r requirements.txt (line 2)) Running setup.py egg_info for package paramiko Downloading/unpacking django-picklefield (from django-celery==2.3.3->-r requirements.txt (line 4)) Downloading django-picklefield-0.1.9.tar.gz Running setup.py egg_info for package django-picklefield Downloading/unpacking celery>=2.3.1 (from django-celery==2.3.3->-r requirements.txt (line 4)) Running setup.py egg_info for package celery no previously-included directories found matching 'tests/*.pyc' no previously-included directories found matching 'docs/*.pyc' no previously-included directories found matching 'contrib/*.pyc' no previously-included directories found matching 'celery/*.pyc' no previously-included directories found matching 'examples/*.pyc' no previously-included directories found matching 'bin/*.pyc' no previously-included directories found matching 'docs/.build' no previously-included directories found matching 'docs/graffles' no 
previously-included directories found matching '.tox/*' Downloading/unpacking kombu (from kombu-sqlalchemy==1.1.0->-r requirements.txt (line 5)) Running setup.py egg_info for package kombu Downloading/unpacking django-appconf>=0.4 (from django-compressor==1.1->-r requirements.txt (line 12)) Downloading django-appconf-0.4.1.tar.gz Running setup.py egg_info for package django-appconf Installed /opt/panda/build/django-appconf/versiontools-1.8.3-py2.7.egg Downloading/unpacking Pygments>=0.8 (from sphinx==1.0.7->-r requirements.txt (line 14)) Running setup.py egg_info for package Pygments Downloading/unpacking Jinja2>=2.2 (from sphinx==1.0.7->-r requirements.txt (line 14)) Running setup.py egg_info for package Jinja2 warning: no previously-included files matching '*' found under directory 'docs/_build' warning: no previously-included files matching '*.pyc' found under directory 'jinja2' warning: no previously-included files matching '*.pyc' found under directory 'docs' warning: no previously-included files matching '*.pyo' found under directory 'jinja2' warning: no previously-included files matching '*.pyo' found under directory 'docs' Downloading/unpacking docutils>=0.5 (from sphinx==1.0.7->-r requirements.txt (line 14)) Running setup.py egg_info for package docutils warning: no files found matching 'MANIFEST' warning: no previously-included files matching '.cvsignore' found under directory '*' warning: no previously-included files matching '*.pyc' found under directory '*' warning: no previously-included files matching '*~' found under directory '*' warning: no previously-included files matching '.DS_Store' found under directory '*' Downloading/unpacking anyjson>=0.3.1 (from celery>=2.3.1->django-celery==2.3.3->-r requirements.txt (line 4)) Downloading anyjson-0.3.1.tar.gz Running setup.py egg_info for package anyjson Downloading/unpacking amqplib>=1.0 (from kombu->kombu-sqlalchemy==1.1.0->-r requirements.txt (line 5)) Running setup.py egg_info for package amqplib Installing collected packages: django, fabric, psycopg2, django-celery, kombu-sqlalchemy, python-dateutil, lxml, django-compressor, BeautifulSoup, sphinx, requests, south, xlrd, boto, ajaxuploader, argparse, sqlalchemy, csvkit, mimeparse, django-tastypie, longerusername, openpyxl, django-keyedcache, django-livesettings, paramiko, django-picklefield, celery, kombu, django-appconf, Pygments, Jinja2, docutils, anyjson, amqplib Running setup.py install for django changing mode of build/scripts-2.7/django-admin.py from 644 to 755 changing mode of /usr/local/bin/django-admin.py to 755 Running setup.py install for fabric warning: no previously-included files matching '*' found under directory 'docs/_build' warning: no files found matching 'fabfile.py' Installing fab script to /usr/local/bin Running setup.py install for psycopg2 building 'psycopg2._psycopg' extension gcc -pthread -fno-strict-aliasing -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -fPIC -DPSYCOPG_DEFAULT_PYDATETIME=1 -DPSYCOPG_VERSION="2.4.1 (dt dec pq3 ext)" -DPG_VERSION_HEX=0x090102 -DPSYCOPG_EXTENSIONS=1 -DPSYCOPG_NEW_BOOLEAN=1 -DHAVE_PQFREEMEM=1 -I/usr/include/python2.7 -I. 
-I/usr/include/postgresql -I/usr/include/postgresql/9.1/server -c psycopg/psycopgmodule.c -o build/temp.linux-i686-2.7/psycopg/psycopgmodule.o -Wdeclaration-after-statement gcc -pthread -fno-strict-aliasing -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -fPIC -DPSYCOPG_DEFAULT_PYDATETIME=1 -DPSYCOPG_VERSION="2.4.1 (dt dec pq3 ext)" -DPG_VERSION_HEX=0x090102 -DPSYCOPG_EXTENSIONS=1 -DPSYCOPG_NEW_BOOLEAN=1 -DHAVE_PQFREEMEM=1 -I/usr/include/python2.7 -I. -I/usr/include/postgresql -I/usr/include/postgresql/9.1/server -c psycopg/green.c -o build/temp.linux-i686-2.7/psycopg/green.o -Wdeclaration-after-statement gcc -pthread -fno-strict-aliasing -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -fPIC -DPSYCOPG_DEFAULT_PYDATETIME=1 -DPSYCOPG_VERSION="2.4.1 (dt dec pq3 ext)" -DPG_VERSION_HEX=0x090102 -DPSYCOPG_EXTENSIONS=1 -DPSYCOPG_NEW_BOOLEAN=1 -DHAVE_PQFREEMEM=1 -I/usr/include/python2.7 -I. -I/usr/include/postgresql -I/usr/include/postgresql/9.1/server -c psycopg/pqpath.c -o build/temp.linux-i686-2.7/psycopg/pqpath.o -Wdeclaration-after-statement gcc -pthread -fno-strict-aliasing -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -fPIC -DPSYCOPG_DEFAULT_PYDATETIME=1 -DPSYCOPG_VERSION="2.4.1 (dt dec pq3 ext)" -DPG_VERSION_HEX=0x090102 -DPSYCOPG_EXTENSIONS=1 -DPSYCOPG_NEW_BOOLEAN=1 -DHAVE_PQFREEMEM=1 -I/usr/include/python2.7 -I. -I/usr/include/postgresql -I/usr/include/postgresql/9.1/server -c psycopg/utils.c -o build/temp.linux-i686-2.7/psycopg/utils.o -Wdeclaration-after-statement gcc -pthread -fno-strict-aliasing -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -fPIC -DPSYCOPG_DEFAULT_PYDATETIME=1 -DPSYCOPG_VERSION="2.4.1 (dt dec pq3 ext)" -DPG_VERSION_HEX=0x090102 -DPSYCOPG_EXTENSIONS=1 -DPSYCOPG_NEW_BOOLEAN=1 -DHAVE_PQFREEMEM=1 -I/usr/include/python2.7 -I. -I/usr/include/postgresql -I/usr/include/postgresql/9.1/server -c psycopg/bytes_format.c -o build/temp.linux-i686-2.7/psycopg/bytes_format.o -Wdeclaration-after-statement psycopg/bytes_format.c: In function ‘Bytes_Format’: psycopg/bytes_format.c:114:24: warning: variable ‘orig_args’ set but not used [-Wunused-but-set-variable] gcc -pthread -fno-strict-aliasing -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -fPIC -DPSYCOPG_DEFAULT_PYDATETIME=1 -DPSYCOPG_VERSION="2.4.1 (dt dec pq3 ext)" -DPG_VERSION_HEX=0x090102 -DPSYCOPG_EXTENSIONS=1 -DPSYCOPG_NEW_BOOLEAN=1 -DHAVE_PQFREEMEM=1 -I/usr/include/python2.7 -I. -I/usr/include/postgresql -I/usr/include/postgresql/9.1/server -c psycopg/connection_int.c -o build/temp.linux-i686-2.7/psycopg/connection_int.o -Wdeclaration-after-statement gcc -pthread -fno-strict-aliasing -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -fPIC -DPSYCOPG_DEFAULT_PYDATETIME=1 -DPSYCOPG_VERSION="2.4.1 (dt dec pq3 ext)" -DPG_VERSION_HEX=0x090102 -DPSYCOPG_EXTENSIONS=1 -DPSYCOPG_NEW_BOOLEAN=1 -DHAVE_PQFREEMEM=1 -I/usr/include/python2.7 -I. -I/usr/include/postgresql -I/usr/include/postgresql/9.1/server -c psycopg/connection_type.c -o build/temp.linux-i686-2.7/psycopg/connection_type.o -Wdeclaration-after-statement gcc -pthread -fno-strict-aliasing -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -fPIC -DPSYCOPG_DEFAULT_PYDATETIME=1 -DPSYCOPG_VERSION="2.4.1 (dt dec pq3 ext)" -DPG_VERSION_HEX=0x090102 -DPSYCOPG_EXTENSIONS=1 -DPSYCOPG_NEW_BOOLEAN=1 -DHAVE_PQFREEMEM=1 -I/usr/include/python2.7 -I. 
-I/usr/include/postgresql -I/usr/include/postgresql/9.1/server -c psycopg/cursor_int.c -o build/temp.linux-i686-2.7/psycopg/cursor_int.o -Wdeclaration-after-statement gcc -pthread -fno-strict-aliasing -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -fPIC -DPSYCOPG_DEFAULT_PYDATETIME=1 -DPSYCOPG_VERSION="2.4.1 (dt dec pq3 ext)" -DPG_VERSION_HEX=0x090102 -DPSYCOPG_EXTENSIONS=1 -DPSYCOPG_NEW_BOOLEAN=1 -DHAVE_PQFREEMEM=1 -I/usr/include/python2.7 -I. -I/usr/include/postgresql -I/usr/include/postgresql/9.1/server -c psycopg/cursor_type.c -o build/temp.linux-i686-2.7/psycopg/cursor_type.o -Wdeclaration-after-statement gcc -pthread -fno-strict-aliasing -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -fPIC -DPSYCOPG_DEFAULT_PYDATETIME=1 -DPSYCOPG_VERSION="2.4.1 (dt dec pq3 ext)" -DPG_VERSION_HEX=0x090102 -DPSYCOPG_EXTENSIONS=1 -DPSYCOPG_NEW_BOOLEAN=1 -DHAVE_PQFREEMEM=1 -I/usr/include/python2.7 -I. -I/usr/include/postgresql -I/usr/include/postgresql/9.1/server -c psycopg/lobject_int.c -o build/temp.linux-i686-2.7/psycopg/lobject_int.o -Wdeclaration-after-statement gcc -pthread -fno-strict-aliasing -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -fPIC -DPSYCOPG_DEFAULT_PYDATETIME=1 -DPSYCOPG_VERSION="2.4.1 (dt dec pq3 ext)" -DPG_VERSION_HEX=0x090102 -DPSYCOPG_EXTENSIONS=1 -DPSYCOPG_NEW_BOOLEAN=1 -DHAVE_PQFREEMEM=1 -I/usr/include/python2.7 -I. -I/usr/include/postgresql -I/usr/include/postgresql/9.1/server -c psycopg/lobject_type.c -o build/temp.linux-i686-2.7/psycopg/lobject_type.o -Wdeclaration-after-statement gcc -pthread -fno-strict-aliasing -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -fPIC -DPSYCOPG_DEFAULT_PYDATETIME=1 -DPSYCOPG_VERSION="2.4.1 (dt dec pq3 ext)" -DPG_VERSION_HEX=0x090102 -DPSYCOPG_EXTENSIONS=1 -DPSYCOPG_NEW_BOOLEAN=1 -DHAVE_PQFREEMEM=1 -I/usr/include/python2.7 -I. -I/usr/include/postgresql -I/usr/include/postgresql/9.1/server -c psycopg/notify_type.c -o build/temp.linux-i686-2.7/psycopg/notify_type.o -Wdeclaration-after-statement gcc -pthread -fno-strict-aliasing -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -fPIC -DPSYCOPG_DEFAULT_PYDATETIME=1 -DPSYCOPG_VERSION="2.4.1 (dt dec pq3 ext)" -DPG_VERSION_HEX=0x090102 -DPSYCOPG_EXTENSIONS=1 -DPSYCOPG_NEW_BOOLEAN=1 -DHAVE_PQFREEMEM=1 -I/usr/include/python2.7 -I. -I/usr/include/postgresql -I/usr/include/postgresql/9.1/server -c psycopg/xid_type.c -o build/temp.linux-i686-2.7/psycopg/xid_type.o -Wdeclaration-after-statement gcc -pthread -fno-strict-aliasing -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -fPIC -DPSYCOPG_DEFAULT_PYDATETIME=1 -DPSYCOPG_VERSION="2.4.1 (dt dec pq3 ext)" -DPG_VERSION_HEX=0x090102 -DPSYCOPG_EXTENSIONS=1 -DPSYCOPG_NEW_BOOLEAN=1 -DHAVE_PQFREEMEM=1 -I/usr/include/python2.7 -I. -I/usr/include/postgresql -I/usr/include/postgresql/9.1/server -c psycopg/adapter_asis.c -o build/temp.linux-i686-2.7/psycopg/adapter_asis.o -Wdeclaration-after-statement gcc -pthread -fno-strict-aliasing -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -fPIC -DPSYCOPG_DEFAULT_PYDATETIME=1 -DPSYCOPG_VERSION="2.4.1 (dt dec pq3 ext)" -DPG_VERSION_HEX=0x090102 -DPSYCOPG_EXTENSIONS=1 -DPSYCOPG_NEW_BOOLEAN=1 -DHAVE_PQFREEMEM=1 -I/usr/include/python2.7 -I. 
-I/usr/include/postgresql -I/usr/include/postgresql/9.1/server -c psycopg/adapter_binary.c -o build/temp.linux-i686-2.7/psycopg/adapter_binary.o -Wdeclaration-after-statement gcc -pthread -fno-strict-aliasing -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -fPIC -DPSYCOPG_DEFAULT_PYDATETIME=1 -DPSYCOPG_VERSION="2.4.1 (dt dec pq3 ext)" -DPG_VERSION_HEX=0x090102 -DPSYCOPG_EXTENSIONS=1 -DPSYCOPG_NEW_BOOLEAN=1 -DHAVE_PQFREEMEM=1 -I/usr/include/python2.7 -I. -I/usr/include/postgresql -I/usr/include/postgresql/9.1/server -c psycopg/adapter_datetime.c -o build/temp.linux-i686-2.7/psycopg/adapter_datetime.o -Wdeclaration-after-statement gcc -pthread -fno-strict-aliasing -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -fPIC -DPSYCOPG_DEFAULT_PYDATETIME=1 -DPSYCOPG_VERSION="2.4.1 (dt dec pq3 ext)" -DPG_VERSION_HEX=0x090102 -DPSYCOPG_EXTENSIONS=1 -DPSYCOPG_NEW_BOOLEAN=1 -DHAVE_PQFREEMEM=1 -I/usr/include/python2.7 -I. -I/usr/include/postgresql -I/usr/include/postgresql/9.1/server -c psycopg/adapter_list.c -o build/temp.linux-i686-2.7/psycopg/adapter_list.o -Wdeclaration-after-statement gcc -pthread -fno-strict-aliasing -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -fPIC -DPSYCOPG_DEFAULT_PYDATETIME=1 -DPSYCOPG_VERSION="2.4.1 (dt dec pq3 ext)" -DPG_VERSION_HEX=0x090102 -DPSYCOPG_EXTENSIONS=1 -DPSYCOPG_NEW_BOOLEAN=1 -DHAVE_PQFREEMEM=1 -I/usr/include/python2.7 -I. -I/usr/include/postgresql -I/usr/include/postgresql/9.1/server -c psycopg/adapter_pboolean.c -o build/temp.linux-i686-2.7/psycopg/adapter_pboolean.o -Wdeclaration-after-statement gcc -pthread -fno-strict-aliasing -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -fPIC -DPSYCOPG_DEFAULT_PYDATETIME=1 -DPSYCOPG_VERSION="2.4.1 (dt dec pq3 ext)" -DPG_VERSION_HEX=0x090102 -DPSYCOPG_EXTENSIONS=1 -DPSYCOPG_NEW_BOOLEAN=1 -DHAVE_PQFREEMEM=1 -I/usr/include/python2.7 -I. -I/usr/include/postgresql -I/usr/include/postgresql/9.1/server -c psycopg/adapter_pdecimal.c -o build/temp.linux-i686-2.7/psycopg/adapter_pdecimal.o -Wdeclaration-after-statement gcc -pthread -fno-strict-aliasing -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -fPIC -DPSYCOPG_DEFAULT_PYDATETIME=1 -DPSYCOPG_VERSION="2.4.1 (dt dec pq3 ext)" -DPG_VERSION_HEX=0x090102 -DPSYCOPG_EXTENSIONS=1 -DPSYCOPG_NEW_BOOLEAN=1 -DHAVE_PQFREEMEM=1 -I/usr/include/python2.7 -I. -I/usr/include/postgresql -I/usr/include/postgresql/9.1/server -c psycopg/adapter_pfloat.c -o build/temp.linux-i686-2.7/psycopg/adapter_pfloat.o -Wdeclaration-after-statement gcc -pthread -fno-strict-aliasing -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -fPIC -DPSYCOPG_DEFAULT_PYDATETIME=1 -DPSYCOPG_VERSION="2.4.1 (dt dec pq3 ext)" -DPG_VERSION_HEX=0x090102 -DPSYCOPG_EXTENSIONS=1 -DPSYCOPG_NEW_BOOLEAN=1 -DHAVE_PQFREEMEM=1 -I/usr/include/python2.7 -I. -I/usr/include/postgresql -I/usr/include/postgresql/9.1/server -c psycopg/adapter_qstring.c -o build/temp.linux-i686-2.7/psycopg/adapter_qstring.o -Wdeclaration-after-statement gcc -pthread -fno-strict-aliasing -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -fPIC -DPSYCOPG_DEFAULT_PYDATETIME=1 -DPSYCOPG_VERSION="2.4.1 (dt dec pq3 ext)" -DPG_VERSION_HEX=0x090102 -DPSYCOPG_EXTENSIONS=1 -DPSYCOPG_NEW_BOOLEAN=1 -DHAVE_PQFREEMEM=1 -I/usr/include/python2.7 -I. 
-I/usr/include/postgresql -I/usr/include/postgresql/9.1/server -c psycopg/microprotocols.c -o build/temp.linux-i686-2.7/psycopg/microprotocols.o -Wdeclaration-after-statement gcc -pthread -fno-strict-aliasing -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -fPIC -DPSYCOPG_DEFAULT_PYDATETIME=1 -DPSYCOPG_VERSION="2.4.1 (dt dec pq3 ext)" -DPG_VERSION_HEX=0x090102 -DPSYCOPG_EXTENSIONS=1 -DPSYCOPG_NEW_BOOLEAN=1 -DHAVE_PQFREEMEM=1 -I/usr/include/python2.7 -I. -I/usr/include/postgresql -I/usr/include/postgresql/9.1/server -c psycopg/microprotocols_proto.c -o build/temp.linux-i686-2.7/psycopg/microprotocols_proto.o -Wdeclaration-after-statement gcc -pthread -fno-strict-aliasing -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -fPIC -DPSYCOPG_DEFAULT_PYDATETIME=1 -DPSYCOPG_VERSION="2.4.1 (dt dec pq3 ext)" -DPG_VERSION_HEX=0x090102 -DPSYCOPG_EXTENSIONS=1 -DPSYCOPG_NEW_BOOLEAN=1 -DHAVE_PQFREEMEM=1 -I/usr/include/python2.7 -I. -I/usr/include/postgresql -I/usr/include/postgresql/9.1/server -c psycopg/typecast.c -o build/temp.linux-i686-2.7/psycopg/typecast.o -Wdeclaration-after-statement gcc -pthread -shared -Wl,-O1 -Wl,-Bsymbolic-functions -Wl,-Bsymbolic-functions build/temp.linux-i686-2.7/psycopg/psycopgmodule.o build/temp.linux-i686-2.7/psycopg/green.o build/temp.linux-i686-2.7/psycopg/pqpath.o build/temp.linux-i686-2.7/psycopg/utils.o build/temp.linux-i686-2.7/psycopg/bytes_format.o build/temp.linux-i686-2.7/psycopg/connection_int.o build/temp.linux-i686-2.7/psycopg/connection_type.o build/temp.linux-i686-2.7/psycopg/cursor_int.o build/temp.linux-i686-2.7/psycopg/cursor_type.o build/temp.linux-i686-2.7/psycopg/lobject_int.o build/temp.linux-i686-2.7/psycopg/lobject_type.o build/temp.linux-i686-2.7/psycopg/notify_type.o build/temp.linux-i686-2.7/psycopg/xid_type.o build/temp.linux-i686-2.7/psycopg/adapter_asis.o build/temp.linux-i686-2.7/psycopg/adapter_binary.o build/temp.linux-i686-2.7/psycopg/adapter_datetime.o build/temp.linux-i686-2.7/psycopg/adapter_list.o build/temp.linux-i686-2.7/psycopg/adapter_pboolean.o build/temp.linux-i686-2.7/psycopg/adapter_pdecimal.o build/temp.linux-i686-2.7/psycopg/adapter_pfloat.o build/temp.linux-i686-2.7/psycopg/adapter_qstring.o build/temp.linux-i686-2.7/psycopg/microprotocols.o build/temp.linux-i686-2.7/psycopg/microprotocols_proto.o build/temp.linux-i686-2.7/psycopg/typecast.o -lpq -o build/lib.linux-i686-2.7/psycopg2/_psycopg.so warning: no files found matching '*.html' under directory 'doc' warning: no files found matching '*.js' under directory 'doc' warning: no files found matching '*' under directory 'doc/html' no previously-included directories found matching 'doc/src/_build' warning: no files found matching 'MANIFEST' Running setup.py install for django-celery changing mode of build/scripts-2.7/djcelerymon from 644 to 755 no previously-included directories found matching 'bin/*.pyc' no previously-included directories found matching 'tests/*.pyc' no previously-included directories found matching 'docs/*.pyc' no previously-included directories found matching 'contrib/*.pyc' no previously-included directories found matching 'djcelery/*.pyc' no previously-included directories found matching 'docs/.build' no previously-included directories found matching 'examples/*.pyc' changing mode of /usr/local/bin/djcelerymon to 755 Installing djcelerymon script to /usr/local/bin Running setup.py install for kombu-sqlalchemy warning: no files found matching '*' under directory 'djkombu' Found existing installation: python-dateutil 1.4.1 Uninstalling python-dateutil: 
Successfully uninstalled python-dateutil Running setup.py install for python-dateutil Running setup.py install for lxml Building lxml version 2.3.1. Building without Cython. Using build configuration of libxslt 1.1.26 Building against libxml2/libxslt in the following directory: /usr/lib building 'lxml.etree' extension gcc -pthread -fno-strict-aliasing -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -fPIC -I/usr/include/libxml2 -I/usr/include/python2.7 -c src/lxml/lxml.etree.c -o build/temp.linux-i686-2.7/src/lxml/lxml.etree.o -w gcc -pthread -shared -Wl,-O1 -Wl,-Bsymbolic-functions -Wl,-Bsymbolic-functions build/temp.linux-i686-2.7/src/lxml/lxml.etree.o -lxslt -lexslt -lxml2 -lz -lm -o build/lib.linux-i686-2.7/lxml/etree.so building 'lxml.objectify' extension gcc -pthread -fno-strict-aliasing -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -fPIC -I/usr/include/libxml2 -I/usr/include/python2.7 -c src/lxml/lxml.objectify.c -o build/temp.linux-i686-2.7/src/lxml/lxml.objectify.o -w gcc -pthread -shared -Wl,-O1 -Wl,-Bsymbolic-functions -Wl,-Bsymbolic-functions build/temp.linux-i686-2.7/src/lxml/lxml.objectify.o -lxslt -lexslt -lxml2 -lz -lm -o build/lib.linux-i686-2.7/lxml/objectify.so Running setup.py install for django-compressor Running setup.py install for BeautifulSoup Running setup.py install for sphinx no previously-included directories found matching 'doc/_build' Installing sphinx-build script to /usr/local/bin Installing sphinx-quickstart script to /usr/local/bin Installing sphinx-autogen script to /usr/local/bin Running setup.py install for requests Running setup.py install for south Running setup.py install for xlrd changing mode of build/scripts-2.7/runxlrd.py from 644 to 755 changing mode of /usr/local/bin/runxlrd.py to 755 Running setup.py install for boto changing mode of build/scripts-2.7/sdbadmin from 644 to 755 changing mode of build/scripts-2.7/elbadmin from 644 to 755 changing mode of build/scripts-2.7/cfadmin from 644 to 755 changing mode of build/scripts-2.7/s3put from 644 to 755 changing mode of build/scripts-2.7/fetch_file from 644 to 755 changing mode of build/scripts-2.7/launch_instance from 644 to 755 changing mode of build/scripts-2.7/list_instances from 644 to 755 changing mode of build/scripts-2.7/taskadmin from 644 to 755 changing mode of build/scripts-2.7/kill_instance from 644 to 755 changing mode of build/scripts-2.7/bundle_image from 644 to 755 changing mode of build/scripts-2.7/pyami_sendmail from 644 to 755 changing mode of build/scripts-2.7/lss3 from 644 to 755 changing mode of build/scripts-2.7/cq from 644 to 755 changing mode of build/scripts-2.7/route53 from 644 to 755 changing mode of /usr/local/bin/bundle_image to 755 changing mode of /usr/local/bin/cq to 755 changing mode of /usr/local/bin/list_instances to 755 changing mode of /usr/local/bin/cfadmin to 755 changing mode of /usr/local/bin/s3put to 755 changing mode of /usr/local/bin/route53 to 755 changing mode of /usr/local/bin/taskadmin to 755 changing mode of /usr/local/bin/lss3 to 755 changing mode of /usr/local/bin/fetch_file to 755 changing mode of /usr/local/bin/elbadmin to 755 changing mode of /usr/local/bin/kill_instance to 755 changing mode of /usr/local/bin/sdbadmin to 755 changing mode of /usr/local/bin/launch_instance to 755 changing mode of /usr/local/bin/pyami_sendmail to 755 Running setup.py install for ajaxuploader Running setup.py install for argparse warning: no previously-included files matching '*.pyc' found anywhere in distribution warning: no previously-included files 
matching '*.pyo' found anywhere in distribution warning: no previously-included files matching '*.orig' found anywhere in distribution warning: no previously-included files matching '*.rej' found anywhere in distribution no previously-included directories found matching 'doc/_build' no previously-included directories found matching 'env24' no previously-included directories found matching 'env25' no previously-included directories found matching 'env26' no previously-included directories found matching 'env27' Running setup.py install for sqlalchemy warning: no files found matching '*.jpg' under directory 'doc' no previously-included directories found matching 'doc/build/output' Running setup.py install for csvkit changing mode of build/scripts-2.7/in2csv from 644 to 755 changing mode of build/scripts-2.7/csvcut from 644 to 755 changing mode of build/scripts-2.7/csvsql from 644 to 755 changing mode of build/scripts-2.7/csvclean from 644 to 755 changing mode of build/scripts-2.7/csvstat from 644 to 755 changing mode of build/scripts-2.7/csvlook from 644 to 755 changing mode of build/scripts-2.7/csvjoin from 644 to 755 changing mode of build/scripts-2.7/csvstack from 644 to 755 changing mode of build/scripts-2.7/csvsort from 644 to 755 changing mode of build/scripts-2.7/csvgrep from 644 to 755 changing mode of build/scripts-2.7/csvjson from 644 to 755 changing mode of /usr/local/bin/in2csv to 755 changing mode of /usr/local/bin/csvclean to 755 changing mode of /usr/local/bin/csvgrep to 755 changing mode of /usr/local/bin/csvsort to 755 changing mode of /usr/local/bin/csvjoin to 755 changing mode of /usr/local/bin/csvstat to 755 changing mode of /usr/local/bin/csvlook to 755 changing mode of /usr/local/bin/csvstack to 755 changing mode of /usr/local/bin/csvcut to 755 changing mode of /usr/local/bin/csvjson to 755 changing mode of /usr/local/bin/csvsql to 755 Running setup.py install for mimeparse Running setup.py install for django-tastypie Running setup.py install for longerusername Running setup.py install for openpyxl warning: no files found matching '*.xml' under directory 'src/openpyxl/tests/test_data' warning: no files found matching '*.rels' under directory 'src/openpyxl/tests/test_data' warning: no files found matching '*.xlsx' under directory 'src/openpyxl/tests/test_data' warning: no files found matching 'CREDITS' Running setup.py install for django-keyedcache Running setup.py install for django-livesettings Running setup.py install for paramiko Running setup.py install for django-picklefield Running setup.py install for celery no previously-included directories found matching 'tests/*.pyc' no previously-included directories found matching 'docs/*.pyc' no previously-included directories found matching 'contrib/*.pyc' no previously-included directories found matching 'celery/*.pyc' no previously-included directories found matching 'examples/*.pyc' no previously-included directories found matching 'bin/*.pyc' no previously-included directories found matching 'docs/.build' no previously-included directories found matching 'docs/graffles' no previously-included directories found matching '.tox/*' Installing celeryctl script to /usr/local/bin Installing celeryd script to /usr/local/bin Installing camqadm script to /usr/local/bin Installing celeryev script to /usr/local/bin Installing celeryd-multi script to /usr/local/bin Installing celerybeat script to /usr/local/bin Running setup.py install for kombu Running setup.py install for django-appconf Running setup.py install for Pygments 
Installing pygmentize script to /usr/local/bin Running setup.py install for Jinja2 warning: no previously-included files matching '*' found under directory 'docs/_build' warning: no previously-included files matching '*.pyc' found under directory 'jinja2' warning: no previously-included files matching '*.pyc' found under directory 'docs' warning: no previously-included files matching '*.pyo' found under directory 'jinja2' warning: no previously-included files matching '*.pyo' found under directory 'docs' Running setup.py install for docutils changing mode of build/scripts-2.7/rst2html.py from 644 to 755 changing mode of build/scripts-2.7/rst2s5.py from 644 to 755 changing mode of build/scripts-2.7/rst2latex.py from 644 to 755 changing mode of build/scripts-2.7/rst2xetex.py from 644 to 755 changing mode of build/scripts-2.7/rst2man.py from 644 to 755 changing mode of build/scripts-2.7/rst2xml.py from 644 to 755 changing mode of build/scripts-2.7/rst2pseudoxml.py from 644 to 755 changing mode of build/scripts-2.7/rstpep2html.py from 644 to 755 changing mode of build/scripts-2.7/rst2odt.py from 644 to 755 changing mode of build/scripts-2.7/rst2odt_prepstyles.py from 644 to 755 warning: no files found matching 'MANIFEST' warning: no previously-included files matching '.cvsignore' found under directory '*' warning: no previously-included files matching '*.pyc' found under directory '*' warning: no previously-included files matching '*~' found under directory '*' warning: no previously-included files matching '.DS_Store' found under directory '*' changing mode of /usr/local/bin/rst2pseudoxml.py to 755 changing mode of /usr/local/bin/rst2man.py to 755 changing mode of /usr/local/bin/rst2xetex.py to 755 changing mode of /usr/local/bin/rst2html.py to 755 changing mode of /usr/local/bin/rst2s5.py to 755 changing mode of /usr/local/bin/rstpep2html.py to 755 changing mode of /usr/local/bin/rst2xml.py to 755 changing mode of /usr/local/bin/rst2odt.py to 755 changing mode of /usr/local/bin/rst2odt_prepstyles.py to 755 changing mode of /usr/local/bin/rst2latex.py to 755 Running setup.py install for anyjson Running setup.py install for amqplib Successfully installed django fabric psycopg2 django-celery kombu-sqlalchemy python-dateutil lxml django-compressor BeautifulSoup sphinx requests south xlrd boto ajaxuploader argparse sqlalchemy csvkit mimeparse django-tastypie longerusername openpyxl django-keyedcache django-livesettings paramiko django-picklefield celery kombu django-appconf Pygments Jinja2 docutils anyjson amqplib Cleaning up... + mkdir /var/log/panda mkdir: cannot create directory `/var/log/panda': File exists + touch /var/log/panda/panda.log + chown -R panda:panda /var/log/panda + mkdir /var/lib/panda mkdir: cannot create directory `/var/lib/panda': File exists + mkdir /var/lib/panda/uploads mkdir: cannot create directory `/var/lib/panda/uploads': File exists + mkdir /var/lib/panda/exports mkdir: cannot create directory `/var/lib/panda/exports': File exists + mkdir /var/lib/panda/media mkdir: cannot create directory `/var/lib/panda/media': File exists + chown -R panda:panda /var/lib/panda + sudo -u panda -E python manage.py syncdb --noinput Syncing... Creating tables ... 
Creating table auth_permission Creating table auth_group_permissions Creating table auth_group Creating table auth_user_user_permissions Creating table auth_user_groups Creating table auth_user Creating table auth_message Creating table django_content_type Creating table django_session Creating table django_admin_log Creating table django_site Creating table south_migrationhistory Creating table celery_taskmeta Creating table celery_tasksetmeta Creating table djcelery_intervalschedule Creating table djcelery_crontabschedule Creating table djcelery_periodictasks Creating table djcelery_periodictask Creating table djcelery_workerstate Creating table djcelery_taskstate Creating table livesettings_setting Creating table livesettings_longsetting Creating table panda_category Creating table panda_taskstatus Creating table panda_dataset_categories Creating table panda_dataset Creating table panda_dataupload Creating table panda_export Creating table panda_notification Creating table panda_relatedupload Creating table panda_userprofile Installing custom SQL ... Installing indexes ... No fixtures found. Synced: > django.contrib.auth > django.contrib.contenttypes > django.contrib.sessions > django.contrib.messages > django.contrib.admin > django.contrib.sites > django.contrib.staticfiles > south > djcelery > compressor > livesettings > panda Not synced (use migrations): - longerusername - tastypie (use ./manage.py migrate to migrate these) + sudo -u panda -E python manage.py migrate --noinput Running migrations for longerusername: - Migrating forwards to 0001_initial. > longerusername:0001_initial - Loading initial data for longerusername. No fixtures found. Running migrations for tastypie: - Migrating forwards to 0001_initial. > tastypie:0001_initial - Loading initial data for tastypie. No fixtures found. 
+ sudo -u panda -E python manage.py loaddata panda/fixtures/init_panda.json Installed 8 object(s) from 1 fixture(s) + sudo -u panda -E python manage.py collectstatic --noinput Copying '/usr/local/lib/python2.7/dist-packages/django/contrib/admin/media/img/gis/move_vertex_off.png' Copying '/usr/local/lib/python2.7/dist-packages/django/contrib/admin/media/img/gis/move_vertex_on.png' Copying '/usr/local/lib/python2.7/dist-packages/django/contrib/admin/media/img/admin/inline-restore-8bit.png' Copying '/usr/local/lib/python2.7/dist-packages/django/contrib/admin/media/img/admin/inline-restore.png' Copying '/usr/local/lib/python2.7/dist-packages/django/contrib/admin/media/img/admin/default-bg-reverse.gif' Copying '/usr/local/lib/python2.7/dist-packages/django/contrib/admin/media/img/admin/selector-search.gif' Copying '/usr/local/lib/python2.7/dist-packages/django/contrib/admin/media/img/admin/icon_success.gif' Copying '/usr/local/lib/python2.7/dist-packages/django/contrib/admin/media/img/admin/arrow-up.gif' Copying '/usr/local/lib/python2.7/dist-packages/django/contrib/admin/media/img/admin/selector-removeall.gif' Copying '/usr/local/lib/python2.7/dist-packages/django/contrib/admin/media/img/admin/inline-delete-8bit.png' Copying '/usr/local/lib/python2.7/dist-packages/django/contrib/admin/media/img/admin/selector-remove.gif' Copying '/usr/local/lib/python2.7/dist-packages/django/contrib/admin/media/img/admin/tool-right_over.gif' Copying '/usr/local/lib/python2.7/dist-packages/django/contrib/admin/media/img/admin/changelist-bg.gif' Copying '/usr/local/lib/python2.7/dist-packages/django/contrib/admin/media/img/admin/deleted-overlay.gif' Copying '/usr/local/lib/python2.7/dist-packages/django/contrib/admin/media/img/admin/icon-unknown.gif' Copying '/usr/local/lib/python2.7/dist-packages/django/contrib/admin/media/img/admin/chooser_stacked-bg.gif' Copying '/usr/local/lib/python2.7/dist-packages/django/contrib/admin/media/img/admin/tool-right.gif' Copying '/usr/local/lib/python2.7/dist-packages/django/contrib/admin/media/img/admin/icon_changelink.gif' Copying '/usr/local/lib/python2.7/dist-packages/django/contrib/admin/media/img/admin/icon-yes.gif' Copying '/usr/local/lib/python2.7/dist-packages/django/contrib/admin/media/img/admin/tooltag-arrowright_over.gif' Copying '/usr/local/lib/python2.7/dist-packages/django/contrib/admin/media/img/admin/icon_clock.gif' Copying '/usr/local/lib/python2.7/dist-packages/django/contrib/admin/media/img/admin/nav-bg-grabber.gif' Copying '/usr/local/lib/python2.7/dist-packages/django/contrib/admin/media/img/admin/changelist-bg_rtl.gif' Copying '/usr/local/lib/python2.7/dist-packages/django/contrib/admin/media/img/admin/icon_deletelink.gif' Copying '/usr/local/lib/python2.7/dist-packages/django/contrib/admin/media/img/admin/inline-delete.png' Copying '/usr/local/lib/python2.7/dist-packages/django/contrib/admin/media/img/admin/tooltag-add.gif' Copying '/usr/local/lib/python2.7/dist-packages/django/contrib/admin/media/img/admin/icon_error.gif' Copying '/usr/local/lib/python2.7/dist-packages/django/contrib/admin/media/img/admin/icon_addlink.gif' Copying '/usr/local/lib/python2.7/dist-packages/django/contrib/admin/media/img/admin/icon_alert.gif' Copying '/usr/local/lib/python2.7/dist-packages/django/contrib/admin/media/img/admin/icon_searchbox.png' Copying '/usr/local/lib/python2.7/dist-packages/django/contrib/admin/media/img/admin/tool-left.gif' Copying '/usr/local/lib/python2.7/dist-packages/django/contrib/admin/media/img/admin/tooltag-add_over.gif' Copying 
'/usr/local/lib/python2.7/dist-packages/django/contrib/admin/media/img/admin/default-bg.gif' Copying '/usr/local/lib/python2.7/dist-packages/django/contrib/admin/media/img/admin/selector-addall.gif' Copying '/usr/local/lib/python2.7/dist-packages/django/contrib/admin/media/img/admin/selector_stacked-remove.gif' Copying '/usr/local/lib/python2.7/dist-packages/django/contrib/admin/media/img/admin/tool-left_over.gif' Copying '/usr/local/lib/python2.7/dist-packages/django/contrib/admin/media/img/admin/nav-bg-reverse.gif' Copying '/usr/local/lib/python2.7/dist-packages/django/contrib/admin/media/img/admin/selector-add.gif' Copying '/usr/local/lib/python2.7/dist-packages/django/contrib/admin/media/img/admin/arrow-down.gif' Copying '/usr/local/lib/python2.7/dist-packages/django/contrib/admin/media/img/admin/chooser-bg.gif' Copying '/usr/local/lib/python2.7/dist-packages/django/contrib/admin/media/img/admin/selector_stacked-add.gif' Copying '/usr/local/lib/python2.7/dist-packages/django/contrib/admin/media/img/admin/tooltag-arrowright.gif' Copying '/usr/local/lib/python2.7/dist-packages/django/contrib/admin/media/img/admin/icon-no.gif' Copying '/usr/local/lib/python2.7/dist-packages/django/contrib/admin/media/img/admin/icon_calendar.gif' Copying '/usr/local/lib/python2.7/dist-packages/django/contrib/admin/media/img/admin/nav-bg.gif' Copying '/usr/local/lib/python2.7/dist-packages/django/contrib/admin/media/img/admin/inline-splitter-bg.gif' Copying '/usr/local/lib/python2.7/dist-packages/django/contrib/admin/media/css/dashboard.css' Copying '/usr/local/lib/python2.7/dist-packages/django/contrib/admin/media/css/rtl.css' Copying '/usr/local/lib/python2.7/dist-packages/django/contrib/admin/media/css/widgets.css' Copying '/usr/local/lib/python2.7/dist-packages/django/contrib/admin/media/css/login.css' Copying '/usr/local/lib/python2.7/dist-packages/django/contrib/admin/media/css/changelists.css' Copying '/usr/local/lib/python2.7/dist-packages/django/contrib/admin/media/css/forms.css' Copying '/usr/local/lib/python2.7/dist-packages/django/contrib/admin/media/css/base.css' Copying '/usr/local/lib/python2.7/dist-packages/django/contrib/admin/media/css/ie.css' Copying '/usr/local/lib/python2.7/dist-packages/django/contrib/admin/media/js/calendar.js' Copying '/usr/local/lib/python2.7/dist-packages/django/contrib/admin/media/js/prepopulate.min.js' Copying '/usr/local/lib/python2.7/dist-packages/django/contrib/admin/media/js/core.js' Copying '/usr/local/lib/python2.7/dist-packages/django/contrib/admin/media/js/SelectFilter2.js' Copying '/usr/local/lib/python2.7/dist-packages/django/contrib/admin/media/js/SelectBox.js' Copying '/usr/local/lib/python2.7/dist-packages/django/contrib/admin/media/js/LICENSE-JQUERY.txt' Copying '/usr/local/lib/python2.7/dist-packages/django/contrib/admin/media/js/timeparse.js' Copying '/usr/local/lib/python2.7/dist-packages/django/contrib/admin/media/js/inlines.js' Copying '/usr/local/lib/python2.7/dist-packages/django/contrib/admin/media/js/dateparse.js' Copying '/usr/local/lib/python2.7/dist-packages/django/contrib/admin/media/js/inlines.min.js' Copying '/usr/local/lib/python2.7/dist-packages/django/contrib/admin/media/js/urlify.js' Copying '/usr/local/lib/python2.7/dist-packages/django/contrib/admin/media/js/compress.py' Copying '/usr/local/lib/python2.7/dist-packages/django/contrib/admin/media/js/jquery.js' Copying '/usr/local/lib/python2.7/dist-packages/django/contrib/admin/media/js/actions.js' Copying 
'/usr/local/lib/python2.7/dist-packages/django/contrib/admin/media/js/getElementsBySelector.js' Copying '/usr/local/lib/python2.7/dist-packages/django/contrib/admin/media/js/collapse.js' Copying '/usr/local/lib/python2.7/dist-packages/django/contrib/admin/media/js/jquery.min.js' Copying '/usr/local/lib/python2.7/dist-packages/django/contrib/admin/media/js/collapse.min.js' Copying '/usr/local/lib/python2.7/dist-packages/django/contrib/admin/media/js/prepopulate.js' Copying '/usr/local/lib/python2.7/dist-packages/django/contrib/admin/media/js/jquery.init.js' Copying '/usr/local/lib/python2.7/dist-packages/django/contrib/admin/media/js/actions.min.js' Copying '/usr/local/lib/python2.7/dist-packages/django/contrib/admin/media/js/admin/RelatedObjectLookups.js' Copying '/usr/local/lib/python2.7/dist-packages/django/contrib/admin/media/js/admin/ordering.js' Copying '/usr/local/lib/python2.7/dist-packages/django/contrib/admin/media/js/admin/DateTimeShortcuts.js' Copying '/opt/panda/panda/static/panda_user_change_form.js' Copying '/opt/panda/client/static/img/desc.gif' Copying '/opt/panda/client/static/img/no-sort.gif' Copying '/opt/panda/client/static/img/progress.png' Copying '/opt/panda/client/static/img/ajax-loader.gif' Copying '/opt/panda/client/static/img/glyphicons-halflings.png' Copying '/opt/panda/client/static/img/asc.gif' Copying '/opt/panda/client/static/img/panda_and_ire.png' Copying '/opt/panda/client/static/img/glyphicons-halflings-white.png' Copying '/opt/panda/client/static/css/reset.css' Copying '/opt/panda/client/static/css/fileuploader.css' Copying '/opt/panda/client/static/css/bootstrap.css' Copying '/opt/panda/client/static/css/panda.css' Copying '/opt/panda/client/static/css/loading.gif' Copying '/opt/panda/client/static/js/application.js' Copying '/opt/panda/client/static/js/utils.js' Copying '/opt/panda/client/static/js/SpecRunner.html' Copying '/opt/panda/client/static/js/models/datasets.js' Copying '/opt/panda/client/static/js/models/data_uploads.js' Copying '/opt/panda/client/static/js/models/tasks.js' Copying '/opt/panda/client/static/js/models/users.js' Copying '/opt/panda/client/static/js/models/related_uploads.js' Copying '/opt/panda/client/static/js/models/categories.js' Copying '/opt/panda/client/static/js/models/notifications.js' Copying '/opt/panda/client/static/js/models/data.js' Copying '/opt/panda/client/static/js/spec/mock_xhr_responses.js' Copying '/opt/panda/client/static/js/spec/models/datasets.js' Copying '/opt/panda/client/static/js/spec/models/tasks.js' Copying '/opt/panda/client/static/js/spec/routers/index.js' Copying '/opt/panda/client/static/js/spec/views/not_found.js' Copying '/opt/panda/client/static/js/spec/views/root.js' Copying '/opt/panda/client/static/js/lib/fileuploader.js' Copying '/opt/panda/client/static/js/lib/jasmine-sinon.js' Copying '/opt/panda/client/static/js/lib/jquery.tablesorter.js' Copying '/opt/panda/client/static/js/lib/jasmine-jquery.js' Copying '/opt/panda/client/static/js/lib/json2.js' Copying '/opt/panda/client/static/js/lib/bootstrap.js' Copying '/opt/panda/client/static/js/lib/backbone.js' Copying '/opt/panda/client/static/js/lib/jquery.cookie.js' Copying '/opt/panda/client/static/js/lib/underscore.js' Copying '/opt/panda/client/static/js/lib/moment.js' Copying '/opt/panda/client/static/js/lib/sinon-1.2.0.js' Copying '/opt/panda/client/static/js/lib/backbone-tastypie.js' Copying '/opt/panda/client/static/js/lib/bootbox.js' Copying '/opt/panda/client/static/js/lib/jquery-1.7.1.js' Copying 
'/opt/panda/client/static/js/lib/jasmine-1.1.0/jasmine_favicon.png' Copying '/opt/panda/client/static/js/lib/jasmine-1.1.0/jasmine-html.js' Copying '/opt/panda/client/static/js/lib/jasmine- Durrr, that's not all of the log... however, the issue seems to be that uwsgi was already installed and needed to be restarted. + service uwsgi start start: Job is already running: uwsgi Closing as a duplicate of #440, which, if implemented, could account for restarting already running services.
2025-04-01T06:39:57.696153
2020-12-02T02:13:23
754871929
{ "authors": [ "smancill" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:9488", "repo": "pandoc/lua-filters", "url": "https://github.com/pandoc/lua-filters/pull/149" }
gharchive/pull-request
Fix table-short-captions Support for Pandoc 2.10 is not working, and all the tests are broken too: they do not check anything. This PR fixes Pandoc 2.10 support and also fixes the tests to ensure they do run (and updates them to the changes introduced by 7a49118). Commit 6e1b2cc introduced support for Pandoc 2.10, but it doesn't work because it treats the new Table.caption.long object as the pre-2.10 Table.caption object, even though they are completely different lists. Table.caption.long is a list of Blocks, where each block has a content key which is a List of inlines (or it should be; a test would be needed for more complex captions). The pre-2.10 Table.caption is a List of inlines, so the two cannot be manipulated by the same code (a short sketch of the two shapes is given below). This PR fixes it by splitting the code into two different functions. They do follow the same structure, but it is better to keep them separate; otherwise it would require too many ifs, or code that is not clear. For the pre-2.10 Table.caption, just keep the code as it is. For the new Table.caption.long, ensure the outer list of Blocks is taken into account when searching for the inline Span with the short caption. Now all tests do run and all pass, with pre- and post-2.10 Pandoc (nix-run-version is a helper that allows me to use nixpkgs releases to run different versions of a command). Before this PR (more precisely, before the last commit fixing 2.10 support, since this PR also fixes the tests and that is what it takes to see them failing): nix-run-version 19.09 pandoc make && echo "no errors" pandoc 2.7.3 Compiled with pandoc-types 1.17.6, texmath <IP_ADDRESS>, skylighting 0.8.2 no errors $ nix-run-version 20.09 pandoc make && echo "no errors" pandoc 2.10.1 Compiled with pandoc-types 1.21, texmath <IP_ADDRESS>, skylighting 0.8.5 Output does not contain `\def\pandoctableshortcapt{} % .unlisted`. make: *** [test] Error 1 With this PR: $ nix-run-version 19.09 pandoc make && echo "no errors" pandoc 2.7.3 Compiled with pandoc-types 1.17.6, texmath <IP_ADDRESS>, skylighting 0.8.2 no errors $ nix-run-version 20.09 pandoc make && echo "no errors" pandoc 2.10.1 Compiled with pandoc-types 1.21, texmath <IP_ADDRESS>, skylighting 0.8.5 no errors $ nix-run-version unstable pandoc make && echo "no errors" pandoc 2.11.2 Compiled with pandoc-types 1.22, texmath <IP_ADDRESS>, skylighting <IP_ADDRESS>, citeproc 0.2, ipynb <IP_ADDRESS> no errors @blake-riley please review. Agh, it works with every Pandoc version I tried on my Mac. I need to figure out why the CI is failing. Rebased to fix the CI ($(</dev/stdin) was not working so I went back to $(cat -)).
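To make the shape difference concrete, here is a minimal sketch in Python rather than the filter's actual Lua, working over dicts shaped like pandoc's JSON encoding of the AST (each node as {"t": type, "c": content}). The function names are hypothetical, the Span content is simplified, and the post-2.10 walk assumes simple Plain/Para caption blocks:

```python
def find_span_pre_2_10(caption):
    # Pre-2.10: Table.caption is a flat list of inlines.
    for inline in caption:
        if inline.get("t") == "Span":
            return inline
    return None

def find_span_post_2_10(caption_long):
    # Post-2.10: Table.caption.long is a list of Blocks; each block's
    # "c" holds its own list of inlines, so one extra level is needed.
    for block in caption_long:
        for inline in block.get("c", []):
            if inline.get("t") == "Span":
                return inline
    return None

# The same caption in the two layouts:
pre = [{"t": "Str", "c": "Long"}, {"t": "Span", "c": "short"}]
post = [{"t": "Plain", "c": pre}]
assert find_span_pre_2_10(pre) is find_span_post_2_10(post)
```

Feeding caption_long to the first function would silently find nothing, which is exactly the failure the PR describes.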
2025-04-01T06:39:57.713096
2016-03-26T03:00:15
143657635
{ "authors": [ "chiuan", "pangweiwei" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:9489", "repo": "pangweiwei/slua", "url": "https://github.com/pangweiwei/slua/issues/134" }
gharchive/issue
LuaState.checkRef > u.act will be NullReference. I don't know why, but this exception is thrown sometimes. I think it is a bug; maybe the delegate had been GCed.
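The guess in this report, that the delegate behind u.act was garbage-collected while the Lua side still held a handle to it, describes a failure mode that exists in any runtime where a callback is kept alive only by a weak or unmanaged reference. A minimal Python illustration of the same mechanism (purely an analogy: slua itself is C#, and Handler/act here are hypothetical names):

```python
import weakref

class Handler:
    def act(self):
        return "called"

h = Handler()
# h.act builds a fresh bound-method object on each access; keeping only
# a weak reference to it is like holding an unrooted handle to a delegate.
cb = weakref.ref(h.act)

print(cb())  # None: the callback object was already collected, so a later
             # call through this stale handle fails, much like u.act
             # turning into a null reference
```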
2025-04-01T06:39:57.734972
2017-10-19T15:11:29
266879663
{ "authors": [ "alexgig", "cscalfani" ], "license": "Unlicense", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:9490", "repo": "panosoft/elm-grove", "url": "https://github.com/panosoft/elm-grove/issues/4" }
gharchive/issue
Render documentation by default Unless it is configured otherwise, have Grove render documentation by default. This minimizes the need for configuration and gives the user the best experience right out of the box. I thought about doing this but, after surveying our 20 to 30 repos, I realized that documentation would be generated on far fewer than half of them, hence the default. I didn't want a bunch of repos with empty elm-docs directories. If you don't like this for your situation, you can configure Grove globally to generate documents by default and then turn them off explicitly on a case-by-case basis. Ya I have it globally configured to create documentation. It seems our case would be a good one for using the --local --docs=off option instead of having that be the default. In the general case, developers will use the Elm documentation format since the compiler requires placeholders at least. Since those will be there by default, it makes sense to render them by default. I'm sure now that we have this tool, moving forward, documenting in the code will be our default. Long term I think it makes sense. Having function-specific documentation is helpful for libraries. But as things get more complicated, API documentation is less and less useful. A great example can be seen at nodegit. At first the documentation seems really nice. But it's just API docs. And when I started using this library in Grove I found it sorely lacking. What I needed for this library was how to use these functions together to accomplish larger goals. The example code they provided and their own source code were more useful to me than these extensive function-by-function docs. Once I had that as a resource, I nearly stopped looking at the API docs. This is why I don't plan to automatically create docs for elm-slate (when I finally get around to documenting it). It's just too complicated to expect that API docs are going to be useful. I expect that the documentation will consist of conceptual documentation, examples and, most importantly, a few reference implementations.
2025-04-01T06:39:57.785379
2019-04-14T00:03:33
432914641
{ "authors": [ "ataylorme" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:9491", "repo": "pantheon-training-org/eseguinte-drupalcon-seattle-2019", "url": "https://github.com/pantheon-training-org/eseguinte-drupalcon-seattle-2019/pull/6" }
gharchive/pull-request
Update Composer dependencies (2019-04-14-00-03) Loading composer repositories with package information Updating dependencies Writing lock file Generating optimized autoload files > DrupalProject\composer\ScriptHandler::createRequiredFiles Visual regression test passed! View the visual regression test report
2025-04-01T06:39:57.796109
2021-07-25T09:24:48
952233737
{ "authors": [ "pantryfight" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:9492", "repo": "pantryfight/rangitoto-reporter", "url": "https://github.com/pantryfight/rangitoto-reporter/pull/11" }
gharchive/pull-request
Update Posts “technologyvshumantrafficking” Automatically generated by Netlify CMS 👷 Deploy Preview for rangitoto-reporter processing. 🔨 Explore the source changes: 14f4034e313ddec9f71b90fb1be1dd1467420fb1 🔍 Inspect the deploy log: https://app.netlify.com/sites/rangitoto-reporter/deploys/60fd2de1d46c3a00073a97db
2025-04-01T06:39:57.812635
2020-01-06T18:03:36
545859979
{ "authors": [ "huornlmj", "jsirois" ], "license": "apache-2.0", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:9493", "repo": "pantsbuild/pex", "url": "https://github.com/pantsbuild/pex/issues/848" }
gharchive/issue
Logically dead code in requests auth.py
https://github.com/pantsbuild/pex/blob/a2b6d0a645824e5ed20e422f34482b60e0e7cdd6/pex/vendor/_vendored/pip/pip/_vendor/requests/auth.py#L223

Line 175: entdig = None
Line 222: if entdig: can never be true
Line 223: never reached

Closing as won't fix, as explained in #844.
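For illustration, here is a minimal Python sketch of the dead-code pattern being reported. The function name and return values are stand-ins, not the actual vendored requests code; only the entdig flow mirrors the report:

```python
def build_digest_header():
    """Skeleton of the control flow reported above (illustrative only)."""
    entdig = None  # line 175: assigned None and never reassigned

    # ... header construction elided; nothing in between writes to entdig ...

    if entdig:  # line 222: always falsy, so this guard can never pass
        return "dead branch"  # line 223: unreachable
    return "reachable branch"


print(build_digest_header())  # always prints "reachable branch"
```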
2025-04-01T06:39:57.838276
2024-07-15T10:51:05
2408433534
{ "authors": [ "loocapro", "shekhirin" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:9494", "repo": "paradigmxyz/reth-exex-examples", "url": "https://github.com/paradigmxyz/reth-exex-examples/issues/3" }
gharchive/issue
ExEx that registers an RLPx subprotocol
Similar to https://github.com/paradigmxyz/reth/issues/7130, but integrated as an ExEx. This should include a custom RLPx protocol, for example broadcasting decoded logs or similar over p2p.

May I get this, @shekhirin?
2025-04-01T06:39:57.849718
2023-11-05T01:50:04
1977589562
{ "authors": [ "ChrisTorresLugo", "Rjected", "gakonst", "theforager" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:9495", "repo": "paradigmxyz/reth", "url": "https://github.com/paradigmxyz/reth/issues/5298" }
gharchive/issue
OOM and MerkleExecute Error While Syncing

Describe the bug
I've been trying to sync a new reth node (with Lighthouse) but am running into issues syncing the node and having it killed by OOM errors.

Issues:
Process is repeatedly killed when syncing due to OOM
When running MerkleExecute, a validation error is encountered at the end of the stage checkpoint, which doesn't kill the process but doesn't seem to advance beyond it
Upon restart, the node will randomly reset to a 98.3% checkpoint in AccountHashing even after it's shut down and restarted at a later checkpoint
The prior stages are synced far from the current head

Setup:
i5-1235U Mini PC, 32 GB memory, 4 TB SSD
Ubuntu 22.04.3 LTS
reth 0.1.0-alpha.10 + lighthouse 4.2.0

Troubleshooting Steps:
I've tried restarting this process several times, including gracefully exiting along the way to try and advance the checkpointing, but had the issues above. I increased swap space from 2 GB to 16 GB. I also tried shutting off Lighthouse to try and finish pipelining nodes. I've also monitored the resource usage and see only ~20% memory and swap usage when reth is running.

Logs:
[1] OOM Errors from grep -i oom /var/log/syslog

Nov 4 18:44:04 rethship kernel: [613255.141797] tokio-runtime-w invoked oom-killer: gfp_mask=0x140cca(GFP_HIGHUSER_MOVABLE|__GFP_COMP), order=0, oom_score_adj=0
Nov 4 18:44:04 rethship kernel: [613255.141824] oom_kill_process+0x108/0x1c0
Nov 4 18:44:04 rethship kernel: [613255.141829] __alloc_pages_may_oom+0x117/0x1e0
Nov 4 18:44:04 rethship kernel: [613255.141958] [ pid ] uid tgid total_vm rss pgtables_bytes swapents oom_score_adj name
Nov 4 18:44:04 rethship kernel: [613255.141976] [ 636] 108 636 3707 192 65536 128 -900 systemd-oomd
Nov 4 18:44:04 rethship kernel: [613255.142266] oom-kill:constraint=CONSTRAINT_NONE,nodemask=(null),cpuset=/,mems_allowed=0,global_oom,task_memcg=/user.slice/user-1000.slice/user@1000.service/app.slice/app-org.gnome.Terminal.slice/vte-spawn-a346d30d-cd63-4709-ac50-8f1a2cbcceeb.scope,task=reth,pid=981137,uid=1000
Nov 4 18:44:04 rethship kernel: [613255.142326] Out of memory: Killed process 981137 (reth) total-vm:4376905076kB, anon-rss:27667340kB, file-rss:128kB, shmem-rss:0kB, UID:1000 pgtables:2043212kB oom_score_adj:0
Nov 4 18:44:04 rethship systemd[1]<EMAIL_ADDRESS>A process of this unit has been killed by the OOM killer.
Nov 4 18:44:04 rethship systemd[1728]: vte-spawn-a346d30d-cd63-4709-ac50-8f1a2cbcceeb.scope: A process of this unit has been killed by the OOM killer.
Nov 4 18:44:06 rethship kernel: [613257.682761] oom_reaper: reaped process 981137 (reth), now anon-rss:204kB, file-rss:128kB, shmem-rss:0kB [2] See node logs below [3] Logs upon restart 2023-11-05T01:27:13.175346Z WARN consensus::engine: Pipeline sync progress is inconsistent first_stage_checkpoint=18200983 inconsistent_stage_id=AccountHashing inconsistent_stage_checkpoint=17933127 2023-11-05T01:27:13.181836Z INFO reth::node::events: Executing stage pipeline_stages=1/13 stage=Headers from=18200983 checkpoint=100.0% eta=unknown 2023-11-05T01:27:13.183994Z INFO execute{stage=Headers}: sync::stages::headers: Target block already reached checkpoint=100.0% target=Hash(0x7f0f336dafd02579116db16639213ad274de6802821e640e7a2ca951b4ac0e5e) 2023-11-05T01:27:13.184061Z INFO reth::node::events: Stage finished executing pipeline_stages=1/13 stage=Headers block=18200983 checkpoint=100.0% eta=unknown 2023-11-05T01:27:13.184069Z INFO reth::node::events: Executing stage pipeline_stages=2/13 stage=TotalDifficulty from=18200983 checkpoint=100.0% eta=unknown 2023-11-05T01:27:13.184073Z INFO reth::node::events: Stage finished executing pipeline_stages=2/13 stage=TotalDifficulty block=18200983 checkpoint=100.0% eta=unknown 2023-11-05T01:27:13.184106Z INFO reth::node::events: Executing stage pipeline_stages=3/13 stage=Bodies from=18200983 checkpoint=100.0% eta=unknown 2023-11-05T01:27:13.184735Z INFO reth::node::events: Stage finished executing pipeline_stages=3/13 stage=Bodies block=18200983 checkpoint=100.0% eta=unknown 2023-11-05T01:27:13.184748Z INFO reth::node::events: Executing stage pipeline_stages=4/13 stage=SenderRecovery from=18200983 checkpoint=100.0% eta=unknown 2023-11-05T01:27:13.184757Z INFO reth::node::events: Stage finished executing pipeline_stages=4/13 stage=SenderRecovery block=18200983 checkpoint=100.0% eta=unknown 2023-11-05T01:27:13.184763Z INFO reth::node::events: Executing stage pipeline_stages=5/13 stage=Execution from=18200983 checkpoint=100.0% eta=unknown 2023-11-05T01:27:13.184904Z INFO execute{stage=MerkleUnwind}: sync::stages::merkle::unwind: Stage is always skipped 2023-11-05T01:27:13.184914Z INFO reth::node::events: Stage finished executing pipeline_stages=5/13 stage=Execution block=18200983 checkpoint=100.0% eta=unknown 2023-11-05T01:27:13.184919Z INFO reth::node::events: Executing stage pipeline_stages=6/13 stage=MerkleUnwind from=18200983 checkpoint=18200983 eta=unknown 2023-11-05T01:27:13.184922Z INFO reth::node::events: Stage finished executing pipeline_stages=6/13 stage=MerkleUnwind block=18200983 checkpoint=18200983 eta=unknown 2023-11-05T01:27:13.184974Z INFO reth::node::events: Executing stage pipeline_stages=7/13 stage=AccountHashing from=17933127 checkpoint=98.3% eta=unknown 2023-11-05T01:27:16.176960Z INFO reth::cli: Status connected_peers=1 stage=AccountHashing checkpoint=98.3% eta=unknown 2023-11-05T01:27:41.177238Z INFO reth::cli: Status connected_peers=16 stage=AccountHashing checkpoint=98.3% eta=unknown Steps to reproduce Unsure Node logs 2023-11-04T22:21:41.943424Z INFO reth::node::events: Stage committed progress pipeline_stages=9/13 stage=MerkleExecute block=17933127 checkpoint=99.9% eta=4s 2023-11-04T22:21:43.101493Z INFO reth::node::events: Stage committed progress pipeline_stages=9/13 stage=MerkleExecute block=17933127 checkpoint=99.9% eta=3s 2023-11-04T22:21:45.066253Z INFO reth::cli: Status connected_peers=84 stage=MerkleExecute checkpoint=99.9% eta=1s 2023-11-04T22:21:45.408990Z INFO reth::node::events: Stage committed progress pipeline_stages=9/13 
stage=MerkleExecute block=17933127 checkpoint=100.0% eta=0s 2023-11-04T22:21:46.032948Z WARN execute{stage=MerkleExecute}: sync::stages::merkle: Failed to verify block state root target_block=18200983 got=0x823da0e726af6e6c62fd9d0a2f8cc6270a83019cbf59a6bce386f5ba9642c3f2 expected=SealedHeader { header: Header { parent_hash: 0xf2f6cb468c897676ebc423d3e71ec1765913222a72dce83ac0b6d85ddcced07c, ommers_hash: 0x1dcc4de8dec75d7aab85b567b6ccd41ad312451b948a7413f0a142fd40d49347, beneficiary: 0x4838b106fce9647bdf1e7877bf73ce8b0bad5f97, state_root: 0xbb09e5726aac4165aae76903a03f697650d4f86c9db65bfd6f8590fca9b1eb2a, transactions_root: 0xede23d510c6e60afb89e493dc41e012828c4604409f611ee2d3bbbe5b9bf9f2d, receipts_root: 0x7a32d6f19103688a4c176357e25dae09a29aef103c9a52d7f3a5bd40892500cd, withdrawals_root: Some(0x5549b704ab4127950b498748eab9700556a86712c262808ad13226db078de894), logs_bloom: 0x2a619118d0c498000b83060ac900122c6e92008118901c4538556025788140e400c614629018682106105748075237a31a421418aa070a091a3e042000740c03e6aa4019432288e64928acea4e2eb06a902606222cc68222bac02210862028b0a6e000926a9e10930faae186c04089545431821914210534a600459ca8d0090010c04023e88609b120c42f9002b011614080681129549058a4d480499a1001388aa6ed6840d82c84881e10959a589436a28404a26613561ab0a300ea08842005384c204305821404d8692155109f1254020e1caa8d000812592ac8425041a0c02452f2504c50921a2cd46608684593a398009030010ba066702c88309029c8f0, difficulty: 0x0_U256, number: 18200983, gas_limit: 29970705, gas_used: 29967592, timestamp:<PHONE_NUMBER>, mix_hash: 0x7c474df647faacac2251c724cc264942e3a0aab0e8d7657126ad3e1c75dfa3a1, nonce: 0, base_fee_per_gas: Some(6881759407), blob_gas_used: None, excess_blob_gas: None, parent_beacon_block_root: None, extra_data: Bytes(0x546974616e2028746974616e6275696c6465722e78797a29) }, hash: 0x7f0f336dafd02579116db16639213ad274de6802821e640e7a2ca951b4ac0e5e } 2023-11-04T22:21:46.035592Z ERROR execute{stage=MerkleExecute}: sync::pipeline: Stage encountered a validation error: Block state root (0x823da0e726af6e6c62fd9d0a2f8cc6270a83019cbf59a6bce386f5ba9642c3f2) is different from expected: (0xbb09e5726aac4165aae76903a03f697650d4f86c9db65bfd6f8590fca9b1eb2a) stage=MerkleExecute bad_block=18200983 2023-11-04T22:22:10.066552Z INFO reth::cli: Status connected_peers=85 stage=MerkleExecute checkpoint=100.0% eta=unknown 2023-11-04T22:22:35.066721Z INFO reth::cli: Status connected_peers=85 stage=MerkleExecute checkpoint=100.0% eta=unknown [...] 2023-11-04T23:43:25.067648Z INFO reth::cli: Status connected_peers=97 stage=MerkleExecute checkpoint=100.0% eta=unknown 2023-11-04T23:43:50.081770Z INFO reth::cli: Status connected_peers=96 stage=MerkleExecute checkpoint=100.0% eta=unknown ./run_reth.sh: line 4: 981137 Killed RUST_LOG=info /home/user/dev/reth/target/release/reth node --http --http.api debug,eth,net,trace,txpool,web3,rpc --http.addr <IP_ADDRESS> Platform(s) Linux (x86) What version/commit are you on? reth 0.1.0-alpha.10 (a60dbfdd) What database version are you on? Current database version: 1 Local database version: 1 What type of node are you running? Archive (default) What prune config do you use, if any? N/A If you've built Reth from source, provide the full command you used cargo build --release Code of Conduct [X] I agree to follow the Code of Conduct Hi folks - has this been encountered again? Hey @gakonst! Yeah, I keep hitting this OOM error. Happy to help triage. I ended up clearing my state and resyncing, and I haven't had the issue again. But it definitely seems like there's some bug lurking in here. 
I'll leave this open for others to add in.

Following @theforager's suggestion, I also cleared the state and resynced. My node is now tracking the tip. Thanks!

I'm considering this fixed with https://github.com/paradigmxyz/reth/pull/7364 - users who upgraded from beta.4 or earlier will have to change their config like we recommend in the release notes: https://github.com/paradigmxyz/reth/releases/tag/v0.2.0-beta.5

[stages.merkle]
-clean_threshold = 50000
+clean_threshold = 5000
2025-04-01T06:39:57.852205
2024-02-08T21:30:49
2126101903
{ "authors": [ "ThreeHrSleep", "i-m-aditya", "mattsse" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:9496", "repo": "paradigmxyz/reth", "url": "https://github.com/paradigmxyz/reth/pull/6496" }
gharchive/pull-request
reuse alloy eips constants
Resolves #6489

sorry for blocking the issue for so long 😅 I'm a rust-newbie trying to figure this out on the fly; it's taking me some time to get familiar with this code base (& rust itself). very sorry for the inconvenience

all good :) this is not critical, so no rush
happy to help
2025-04-01T06:39:57.866403
2024-05-21T14:07:30
2308401123
{ "authors": [ "roopa0222", "tjcouch-sil" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:9497", "repo": "paranext/paranext-core", "url": "https://github.com/paranext/paranext-core/issues/904" }
gharchive/issue
KeyNotFoundException is thrown when Project Settings property is used for the first time

Describe the bug
In my extension testing I observed that when using the useProjectSetting hook, the values displayed on the very first load are not coming from the projectSettings.json file, but instead from the placeholders passed at hook initialization.

To Reproduce
Steps to reproduce the behavior:

Set up projectSettings.json with the below information:

"paranextExtTesting.highlightColor_projectSetting_TC13": {
  "label": "%paranextExtTesting.highlightColor%",
  "description": "%paranextExtTesting.highlightColorDescription%",
  "default": "Aqua",
  "includeProjectTypes": ["ParatextStandard"],
  "excludeProjectTypes": null
},

On the webview, use the PAPI useProjectSetting hook:

const [projectColor, setProjectColorInternally] = useProjectSetting(
  'ParatextStandard',
  'b4c501ad2538989d6fb723518e92408406e232d3',
  'paranextExtTesting.highlightColor_projectSetting_TC13',
  'Green',
);

You will find the below exception in main.log and a warning in console.log:

[2024-05-20 17:21:00.390] [warn] Tried to retrieve data immediately for Setting with selector "paranextExtTesting.highlightColor_projectSetting_TC13", but it threw. Error: System.Collections.Generic.KeyNotFoundException: The given key 'paranextExtTesting.highlightColor_projectSetting_TC13' was not present in the dictionary.
at System.Collections.Generic.Dictionary`2.get_Item(TKey key)
at Paranext.DataProvider.Projects.ParatextProjectDataProvider.GetProjectSetting(String jsonKey) in C:\Repos\paranext-core\c-sharp\Projects\ParatextProjectDataProvider.cs:line 222
at Paranext.DataProvider.Projects.ProjectDataProvider.HandleRequest(String functionName, JsonArray args) in C:\Repos\paranext-core\c-sharp\Projects\ProjectDataProvider.cs:line 83

The UI displays the values from the hook instead of the defaults from projectSettings.json.

Expected behavior
Default values from projectSettings.json should be displayed on the UI.

Matt and I discussed and decided to fix a couple things and leave one other thing for the moment. Hopefully we'll get to the other thing before long:

(Fixed) Paratext project settings throw instead of getting the default
(Fixed) Paratext project settings updates don't go out properly
Paratext project settings don't support numbers or objects #906
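To make the failure mode concrete, here is a language-agnostic illustration in Python; the dictionary and default value are stand-ins for the C# provider's settings lookup, not Paranext code:

```python
# The settings store starts empty for a setting that has never been written.
settings = {}

key = "paranextExtTesting.highlightColor_projectSetting_TC13"

# Buggy pattern: direct indexing raises KeyError here, the analogue of the
# C# KeyNotFoundException thrown by Dictionary's get_Item.
# value = settings[key]

# Expected pattern: fall back to the default declared in projectSettings.json.
value = settings.get(key, "Aqua")
print(value)  # -> "Aqua"
```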
2025-04-01T06:39:57.936293
2021-09-27T15:16:20
1008279802
{ "authors": [ "Dolivent", "pareeohnos" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:9498", "repo": "pareeohnos/ktrade", "url": "https://github.com/pareeohnos/ktrade/issues/28" }
gharchive/issue
Improved notifications
Notifications in the app are a little messy at the moment. If, for instance, you place a huge order that is filled in multiple parts, you will get one notification for each part in quick succession. Instead, the app should attempt to replace existing notifications and be clever. For example, if you place a huge order you should get one notification saying something like "Order filling", then it would update to say "Order filling: 1000/3000", then eventually update to success saying it's filled. A sketch of this coalescing behavior follows below.

Also, when we trim, the status of the trade switches to 'canceled'. Should this field refer to the stop loss or something else?

hm yeah that might be the stop loss cancellation interfering. Clearly a log bug. I'll open a new bug for that
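A minimal Python sketch of the coalescing described above; the FillNotifier and notify names are hypothetical, not ktrade's actual API:

```python
class FillNotifier:
    """Coalesces partial-fill reports into one notification that updates in place."""

    def __init__(self, notify):
        # notify(key, message) should create the notification for `key`,
        # or replace it in place if one with that key is already showing.
        self._notify = notify

    def on_partial_fill(self, order_id, filled, total):
        # Re-using order_id as the notification key means each fill report
        # updates the existing notification instead of stacking a new one.
        if filled >= total:
            self._notify(order_id, f"Order filled: {total}/{total}")
        else:
            self._notify(order_id, f"Order filling: {filled}/{total}")


# Example: three partial fills produce one notification key updated three times.
notifier = FillNotifier(lambda key, msg: print(f"[{key}] {msg}"))
for filled in (1000, 2000, 3000):
    notifier.on_partial_fill("order-42", filled, 3000)
```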
2025-04-01T06:39:57.937833
2024-04-17T12:06:11
2248131083
{ "authors": [ "Bullrich" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:9499", "repo": "paritytech/cmd-action", "url": "https://github.com/paritytech/cmd-action/pull/15" }
gharchive/pull-request
added generation of command documentation
Generates a list of all the available commands in the GitHub summary, with some information on their usage and (soon to be added) parameters.

Resolves #12

Find a working example here
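The action itself is written in TypeScript; as a sketch of the mechanism only, here is how a job summary can be emitted in Python. The command list is invented, and GITHUB_STEP_SUMMARY is the standard GitHub Actions environment variable pointing at the file whose markdown is rendered as the summary:

```python
import os

# Invented command metadata; the real action derives this from its config.
commands = [
    {"name": "merge", "description": "Merges the pull request", "usage": "/cmd merge"},
    {"name": "bench", "description": "Runs the benchmarks", "usage": "/cmd bench <target>"},
]

# Appending markdown to the GITHUB_STEP_SUMMARY file publishes it as the
# job summary (so this must run inside a GitHub Actions job).
with open(os.environ["GITHUB_STEP_SUMMARY"], "a", encoding="utf-8") as summary:
    summary.write("## Available commands\n\n")
    for cmd in commands:
        summary.write(f"### `{cmd['name']}`\n\n{cmd['description']}\n\n")
        summary.write(f"Usage: `{cmd['usage']}`\n\n")
```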
2025-04-01T06:39:57.951551
2018-11-23T14:46:59
383849350
{ "authors": [ "amaurymartiny", "coveralls" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:9500", "repo": "paritytech/js-libs", "url": "https://github.com/paritytech/js-libs/pull/53" }
gharchive/pull-request
Clean up repo
Bump all packages to 3.0.0
All packages now require local packages, which means when we modify one, all the others immediately know of the changes.
Added back the old api folder, because all tests pass there. That's most of the file changes in the file diff.
To run those tests: yarn test:api in the root folder, or yarn test in the api folder
In api, only converted the utils/ and format/ folders to TS (related #21)

TODO:
[ ] lerna version

Pull Request Test Coverage Report for Build 250
229 of 305 (75.08%) changed or added relevant lines in 14 files are covered.
5 unchanged lines in 3 files lost coverage.
Overall coverage increased (+22.5%) to 74.99%

Changes Missing Coverage | Covered Lines | Changed/Added Lines | %
packages/api/src/util/encode.ts | 4 | 5 | 80.0%
packages/light.js/src/rpc/utils/createRpc.ts | 2 | 3 | 66.67%
packages/api/src/format/input.ts | 45 | 53 | 84.91%
packages/api/src/format/output.ts | 149 | 215 | 69.3%

Files with Coverage Reduction | New Missed Lines | %
packages/electron/src/getParityPath.ts | 1 | 0.0%
packages/electron/src/fetchParity.ts | 1 | 0.0%
packages/contracts/src/badgereg.ts | 3 | 0.0%

Totals
Change from base Build 247: 22.5%
Covered Lines: 1338
Relevant Lines: 1794

💛 - Coveralls
2025-04-01T06:39:57.981261
2023-07-12T14:39:51
1801126031
{ "authors": [ "mrcnski", "ordian" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:9501", "repo": "paritytech/pvf-checker", "url": "https://github.com/paritytech/pvf-checker/issues/4" }
gharchive/issue
Update to separate worker binaries
Once https://github.com/paritytech/polkadot/pull/7337 is merged. This will probably require a workspace setup.

Please let me know if you run into any issues. Any help with testing before https://github.com/paritytech/polkadot/pull/7337 is merged would be awesome.