1890624612
Summarize agent_scratchpad when it exceeds n tokens

**Feature request**

Similarly to `memory=ConversationSummaryBufferMemory(llm=llm, max_token_limit=n)` passed in `initialize_agent`, there should be a possibility to pass a `ConversationSummaryBufferMemory`-like object which would summarize the `intermediate_steps` in the agent if the `agent_scratchpad` created from the `intermediate_steps` exceeds n tokens.

**Motivation**

Agents can run out of the context window when solving a complex problem with tools.

**Your contribution**

I can't commit to anything for now.

---

Did you solve this? I am looking to solve the same issue when the agent loops and continues to pull extra documents from the vector store. I think what needs to happen is the scratchpad almost needs to be run through a document compressor? Thoughts?

Hi! I have not developed a solution. For now, I just moved on to other things. I saw that there is an `intermediate_steps` variable in `/langchain/agents/agent.py:1029` which is then converted into the text `agent_scratchpad`. IMHO the solution is either to compress `intermediate_steps` or the resulting `agent_scratchpad`. TBH I hoped for any input from dosu bot. I'd suggest that anyone who is interested in this feature should "up vote" the issue - maybe we will be able to gain some momentum. :)

I had the same thought. I will try to work on compressing the `agent_scratchpad` inside the loop and let you know if I have any success.

@dosu-bot thank you for the answer. However, my understanding of the problem is a little different. I would like to summarize the `agent_scratchpad` created from `intermediate_steps` within one prompt from the user, so that a complex problem can be solved. Your solution adds the content of the `agent_scratchpad` to the memory of the entire conversation, which is not the desired behavior. Could you adjust your response?

I have to be the one to ask dosu bot :)

At least we know where to start looking now. I wouldn't be using 'sumy' but an extra call to the existing given LLM. I haven't had time to progress this personally.

Did someone achieve this? I am trying to retrieve mails, but as soon as I request more than two, the agent stops because the token length is exceeded. If I could summarize the scratchpad this wouldn't happen. I tried to adapt the toolkit I used and add an additional chain for summarization after the retrieve step of one tool, but I fail to add the chain as an intermediate step, so summarizing the scratchpad would work as well, I think.

This should definitely not be closed. There is still no way to easily achieve this in the library and it is a major blocker.

Agreed. Major blocker. @mdziezyc did you find a proper solution to this issue?
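The behavior requested above can be sketched without LangChain at all. This is a minimal, hypothetical illustration of the pattern (fold the oldest intermediate steps into a summary once the rendered scratchpad exceeds a token budget); the function names, the crude token estimator, and the `summarize` callable, which stands in for an extra LLM call, are all illustrative and not part of the LangChain API.

```python
def estimate_tokens(text: str) -> int:
    # Crude stand-in for a real tokenizer (e.g. tiktoken): count words.
    return len(text.split())


def build_scratchpad(steps, max_tokens, summarize):
    """Render (thought, observation) pairs into a scratchpad string,
    folding the oldest entries into a summary while the total exceeds
    max_tokens. `summarize` is any callable str -> str (e.g. an LLM call)."""
    rendered = [f"Thought: {t}\nObservation: {o}" for t, o in steps]
    while len(rendered) > 1 and estimate_tokens("\n".join(rendered)) > max_tokens:
        # Fold the two oldest entries into a single summarized entry.
        merged = summarize(rendered[0] + "\n" + rendered[1])
        rendered[:2] = [f"Summary: {merged}"]
    return "\n".join(rendered)
```

With a real LLM as `summarize`, this would be called just before the prompt is built, so older tool observations shrink while the most recent steps stay verbatim.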
gharchive/issue
2023-09-11T14:16:50
2025-04-01T06:44:45.779567
{ "authors": [ "Jasonthefirst", "SpBills", "mdziezyc", "obujacz", "rhamnett", "zunairazaman" ], "repo": "langchain-ai/langchain", "url": "https://github.com/langchain-ai/langchain/issues/10446", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2271619900
Installation of langchain on miniconda with the `conda install langchain -c conda-forge` fails

**Checked other resources**

- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).

**Example Code**

```shell
conda install langchain -c conda-forge
```

**Error Message and Stack Trace (if applicable)**

```
Collecting package metadata: done
Solving environment: failed

PackagesNotFoundError: The following packages are not available from current channels:

  - langchain

Current channels:

  - https://conda.anaconda.org/conda-forge/linux-64
  - https://conda.anaconda.org/conda-forge/noarch
  - https://repo.anaconda.com/pkgs/main/linux-64
  - https://repo.anaconda.com/pkgs/main/noarch
  - https://repo.anaconda.com/pkgs/free/linux-64
  - https://repo.anaconda.com/pkgs/free/noarch
  - https://repo.anaconda.com/pkgs/r/linux-64
  - https://repo.anaconda.com/pkgs/r/noarch

To search for alternate channels that may provide the conda package you're
looking for, navigate to https://anaconda.org and use the search bar at the
top of the page.
```

**Description**

Installation problem.

**System Info**

`pip freeze | grep langchain`: as expected, none. Python 3.7.3, miniconda on Linux.

---

Python 3.7.3 is no longer supported. Please upgrade to a more recent version of Python.

Closing as likely not an issue. Feel free to re-open if unable to install with a newer version of Python.
gharchive/issue
2024-04-30T14:25:21
2025-04-01T06:44:45.787701
{ "authors": [ "eyurtsev", "vmahdev" ], "repo": "langchain-ai/langchain", "url": "https://github.com/langchain-ai/langchain/issues/21084", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2409950747
cannot import name 'cast' from 'typing_extensions'

**Checked other resources**

- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).

**Example Code**

```python
from langchain.agents import AgentType, initialize_agent
```

**Error Message and Stack Trace (if applicable)**

```
tests/langchain/test_langchain_model_export.py:19: in <module>
    from langchain.agents import AgentType, initialize_agent
/opt/hostedtoolcache/Python/3.8.18/x64/lib/python3.8/site-packages/langchain/agents/__init__.py:36: in <module>
    from langchain_core.tools import Tool, tool
/opt/hostedtoolcache/Python/3.8.18/x64/lib/python3.8/site-packages/langchain_core/tools.py:48: in <module>
    from typing_extensions import Annotated, cast, get_args, get_origin
E   ImportError: cannot import name 'cast' from 'typing_extensions' (/opt/hostedtoolcache/Python/3.8.18/x64/lib/python3.8/site-packages/typing_extensions.py)
```

**Description**

Langchain should pin `typing_extensions>=4.7.0` (instead of 4.2.0) in the current dev version; otherwise we'll get a `cannot import name 'cast' from 'typing_extensions'` error.

**System Info**

Using the langchain master branch. `typing_extensions==4.5.0` fails.

---

@baskaryan Will we have a patch release soon to include this fix?
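The pinning logic described in the issue can be reduced to a small version comparison. This is a hypothetical helper, not anything from langchain's build tooling; the `4.7.0` lower bound comes from the report above, and real projects should use `packaging.version` instead of this naive parser.

```python
def parse_version(v: str) -> tuple:
    # Naive parser: take the first three numeric components.
    return tuple(int(part) for part in v.split(".")[:3])


def satisfies_pin(installed: str, minimum: str = "4.7.0") -> bool:
    """True if the installed version string meets the >= minimum bound
    that restores `typing_extensions.cast`."""
    return parse_version(installed) >= parse_version(minimum)
```

One could run this against `importlib.metadata.version("typing_extensions")` in the failing environment to confirm whether the installed version predates the bound.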
gharchive/issue
2024-07-16T01:14:16
2025-04-01T06:44:45.793218
{ "authors": [ "serena-ruan" ], "repo": "langchain-ai/langchain", "url": "https://github.com/langchain-ai/langchain/issues/24287", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1940394668
Add real support for RetryParsers?

For the retry parser to work, the prompt needs to be available. I don't see how this is hooked up in the current implementation of `LLMChain`. The `LLMChain` must provide the prompt and the corresponding generation to the appropriate output parser (i.e. the LLM code must invoke `parse_with_prompt`). The real output parser could either use the base implementation of `parse_with_prompt` (i.e. simply call `parse`) or override it.

WIP: needs tests; unsure if the schema changes affect consumers (I don't think a lot of people are subclassing `output_parser`). If my assessment is wrong, I will drop the work.

@baskaryan I am not sure if RetryOutputParsers are properly supported. I had to make these changes to do retries with the boolean parser used by the LLM filter chain (please see https://github.com/langchain-ai/langchain/issues/11408). Please let me know if the changes make sense.

Any update on this PR? @sudranga - perhaps you need to tag someone to review it?

@baskaryan Let me know if these changes look appropriate.

Any update @sudranga @baskaryan?

@timxieICN I don't have anything to add to this PR. @baskaryan What are your thoughts?

Currently we cannot really use RetryParser. It expects to use `parse_with_prompt` instead of `parse`. Not easy to add it to a given chain. Any news on this PR?

@louisoutin I don't know. For some reason, there's no interest in merging this PR. If one of the repo owners agrees that this PR is useful, I will resolve the branch conflicts appropriately.

@sudranga one naive question, new to LangChain here. I was going over the documentation of retry parsers and how to collect the `bad_response` in the code. When I add the parser to an LLMChain and the output from the LLM is not structured, I get a RuntimeError from the parser. To implement the RetryOutputParser you need the `bad_response`/`partial_completion` from the previous chain to pass, right? How do I get that? In the documentation they initialised `bad_response` instead of collecting it from the chain directly.

@hwchase17 @baskaryan I'm not sure what you want to do with this PR. If you think the PR is appropriate, I can resolve the conflicts.

Apologies for the slow review! The PR has some merge conflicts; happy to re-review if you'd like to resolve them.

@baskaryan Please check.

@baskaryan Do you have any concerns?

@baskaryan I redid the pull request (to address merge conflicts) because I assumed you would take a look. Now there is a merge conflict again. This particular topic of retry parsers seems to be of interest to folks. It takes time and effort to produce a PR. I'm disappointed that this effort has been of no value towards improving langchain. @hwchase17 ^^

Any updates on this?

@sudranga thanks for putting in the effort here, and sorry for the delay in reviewing. At this point in time, I am going to close this PR. We are considering how to best support retries for output parsers, and it will likely involve a larger refactor. I very much apologize for the delay in review and the poor communication here.
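The `parse_with_prompt` hookup the PR argues for can be illustrated in isolation. This is a minimal sketch of the retry-parsing pattern, not LangChain code: `BooleanParser`, `RetryParser`, and the `retry_llm` callable (a stand-in for an extra call to the existing LLM, as suggested above) are all hypothetical names.

```python
class ParseError(Exception):
    pass


class BooleanParser:
    def parse(self, text: str) -> bool:
        cleaned = text.strip().upper()
        if cleaned not in ("YES", "NO"):
            raise ParseError(f"expected YES/NO, got {text!r}")
        return cleaned == "YES"


class RetryParser:
    """Wraps a parser; on failure, re-asks a model with BOTH the original
    prompt and the bad completion - which is exactly why the chain must
    call parse_with_prompt rather than plain parse."""

    def __init__(self, parser, retry_llm, max_retries=2):
        self.parser = parser
        self.retry_llm = retry_llm
        self.max_retries = max_retries

    def parse_with_prompt(self, completion: str, prompt: str):
        for _ in range(self.max_retries):
            try:
                return self.parser.parse(completion)
            except ParseError:
                # Show the model its own bad answer alongside the prompt.
                completion = self.retry_llm(
                    f"Prompt:\n{prompt}\nBad response:\n{completion}\n"
                    "Answer YES or NO only:"
                )
        return self.parser.parse(completion)
```

The key design point is that `bad_response` is not something the caller initialises by hand; it is simply the failed completion that the chain already holds when parsing throws.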
gharchive/pull-request
2023-10-12T16:48:05
2025-04-01T06:44:45.800825
{ "authors": [ "Kirushikesh", "austinmw", "baskaryan", "hwchase17", "louisoutin", "sudranga", "timxieICN" ], "repo": "langchain-ai/langchain", "url": "https://github.com/langchain-ai/langchain/pull/11719", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1863814847
docs: DeepLake example Updated the Deep Lake example. Added a link to an example provided by Activeloop. cc @adolkhan @baskaryan No feedback from author
gharchive/pull-request
2023-08-23T18:25:22
2025-04-01T06:44:45.802446
{ "authors": [ "baskaryan", "leo-gan" ], "repo": "langchain-ai/langchain", "url": "https://github.com/langchain-ai/langchain/pull/9663", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2018409765
Prisma vectorstore filter cannot handle uuid's

Our Prisma schema defines a column of a table as `someId String @db.Uuid`. We generate a uuid, which is a string, and while adding data to the database using `addModels` the id is properly saved. However, using the column as a filter when querying the vectorstore, we receive the following Postgres error: `ERROR: operator does not exist: uuid = text`.

```ts
return PrismaVectorStore.withModel<Document>(prismaClient).create(
  new OpenAIEmbeddings({ ... }),
  {
    prisma: Prisma,
    tableName: 'Document',
    vectorColumnName: 'vector',
    columns: {
      id: PrismaVectorStore.IdColumn,
      content: PrismaVectorStore.ContentColumn,
      someId: true,
    },
    filter: {
      someId: {
        equals: `${id}`,
      },
    },
  },
);
```

How can we use uuid columns as a filter? Currently, as a workaround, we remove the `@db.Uuid` from the schema.

good bot, this allowed us to modify the filter to pass custom SQL WHERE statements into our queries since our metadata is a JSON field.
gharchive/issue
2023-11-30T11:09:44
2025-04-01T06:44:45.804557
{ "authors": [ "Knordy", "kpotter-m2" ], "repo": "langchain-ai/langchainjs", "url": "https://github.com/langchain-ai/langchainjs/issues/3457", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2436216201
[ BUG ] Image not being sent to the model using langgraph and gpt-4o

**Checked other resources**

- [X] I added a very descriptive title to this issue.
- [X] I searched the LangGraph/LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [ ] I am sure that this is a bug in LangGraph/LangChain rather than my code.
- [x] I am sure this is better as an issue rather than a GitHub discussion, since this is a LangGraph bug and not a design question.

**Example Code**

(This is not all the code, only the main parts.)

```python
self.llm = ChatOpenAI(temperature=0, model="gpt-4o").bind_tools(self.tools)

def should_continue(self, state: MessagesState) -> Literal["tools", '__end__']:
    messages = state['messages']
    last_message = messages[-1]
    if last_message.tool_calls:
        return "tools"
    return END

def call_model(self, state: MessagesState):
    messages = state['messages']
    agent_scratchpad = state.get("agent_scratchpad", [])
    prompt = self.prompt.format(
        chat_history=messages,
        input=messages[-1].content,
        agent_scratchpad=agent_scratchpad
    )
    response = self.llm.invoke(prompt)
    return {"messages": [response]}

workflow = StateGraph(MessagesState)
tool_node = ToolNode(tools)
workflow.add_node("agent", self.call_model)
workflow.add_node("tools", tool_node)
workflow.set_entry_point("agent")
workflow.add_conditional_edges(
    "agent",
    self.should_continue,
)
workflow.add_edge("tools", 'agent')

pool = ConnectionPool(
    conninfo=f"{os.getenv('DATABASE_URL')}",
    max_size=50,
)
checkpointer = PostgresSaver(sync_connection=pool)
checkpointer.create_tables(pool)
graph = workflow.compile(checkpointer=checkpointer)

graph.invoke({
    "messages": [HumanMessage(
        content=[
            {"type": "text", "text": "Describe the image below."},
            {"type": "image_url", "image_url": {"url": "https://s3-prod.cogmo.com.br/shared/cat.jpg"}},
        ]
    )]
}, config={
    "configurable": {
        "thread_id": session_id,
        "recursion_limit": 50,
    }
})
```

**Error Message and Stack Trace (if applicable)**

No response

**Description**

When using langgraph to send an image to the model, the image is not being sent correctly. The model responds with the message: "I currently don't have the capability to view or describe images. However, if you provide me with some details about the image, I'd be happy to help you with any information or tasks related to it!" Through langsmith, it is possible to see that the image is being sent, but the model does not respond based on it.

**System Info**

Platform: Windows 11; langchain==0.2.11; langgraph==0.1.14; langsmith==0.1.93

---

is it possible to share a public langsmith trace?

> is it possible to share a public langsmith trace?

https://smith.langchain.com/public/73fa25b1-4221-4993-8a1a-b2a6f5f9fb63/r

I tried passing both the URL and the base64 of the image. Using base64, I get the error `RateLimitError: Error code: 429 - {'error': {'message': 'Request too large for gpt-4'}}`.

got it. I think you are just formatting the prompt incorrectly. What is your prompt template? It looks like your `chat_history=messages` may be getting formatted into a string?

> got it. I think you are just formatting the prompt incorrectly. What is your prompt template?

This is my Template (screenshot)

so i think the issue is you are doing `input=messages[-1].content`. This grabs the content from the last message (a dictionary) and puts it into a single string human message: `("human", "{input}")`. I think you probably don't need the `input` parameter at all, since you have `messages[-1]` already in the `chat_history` variable. But if you do, you should insert the whole message, not just the content as a string (because if the content is not a string, then it will get messed up).

> so i think the issue is you are doing `input=messages[-1].content`

I removed the `.content`, but it didn't solve it. RUN: https://smith.langchain.com/public/64dfa1b3-882b-468c-8908-11932b5577ad/r

I modified it so that `input` is not needed (removed it from the prompt template and the call model). It didn't solve it, but it really wasn't necessary.

hmm does something like this work for you?

```python
from langchain_openai import ChatOpenAI
from langchain_core.messages import HumanMessage
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder

model = ChatOpenAI(model="gpt-4o")

messages = [HumanMessage(
    content=[
        {"type": "text", "text": "Describe the image below."},
        {"type": "image_url", "image_url": {"url": "https://s3-prod.cogmo.com.br/shared/cat.jpg"}},
    ]
)]

prompt = ChatPromptTemplate.from_messages([
    ("system", "be helpful"),
    ("placeholder", "{m}"),
])

chain = prompt | model
chain.invoke({"m": messages})
```

If I send it directly to the model, everything works fine, so much so that this is how I 'worked around' the problem temporarily: I built a vision tool for the agent. But I would like to get rid of this workaround lol

gpt_vision_tool:

```python
from typing import Any, Type

from langchain_core.pydantic_v1 import BaseModel, Field
from langchain_core.tools import BaseTool, ToolException
from langchain_core.messages.human import HumanMessage
from langchain_openai import ChatOpenAI

from app.services.bucket_storage import MinioManager


class VisionSchema(BaseModel):
    query: str = Field(
        description="Question asked by the user about an image to be analyzed and answered."
    )
    image_id: str = Field(
        description="Image ID to be analyzed."
    )


class Vision(BaseTool):
    name: str = "gpt_vision_tool"
    description: str = (
        "This tool uses GPT-4o to allow the agent to interact with images."
    )
    args_schema: Type[BaseModel] = VisionSchema
    llm: ChatOpenAI = None
    s3_bucket: MinioManager = None

    def __init__(self, **data: Any):
        super().__init__(**data)
        self.llm = ChatOpenAI(temperature=0, model="gpt-4o")
        self.s3_bucket = MinioManager()

    def get_message_schema(self, query: str, image_id: str) -> HumanMessage:
        image_url = self.s3_bucket.get_object_url(image_id)
        message = HumanMessage(
            content=[
                {"type": "text", "text": query},
                {"type": "image_url", "image_url": {"url": f"{image_url}"}},
            ],
        )
        return message

    def _run(self, query: str, image_id: str) -> str:
        try:
            return self.llm.invoke([self.get_message_schema(query, image_id)]).content
        except Exception as e:
            raise ToolException(f"Error: {e}")

    async def _arun(self, query: str, image_id: str) -> str:
        raise NotImplementedError("Async version of this tool is not implemented.")
```

I think the issue is that you have `("human", "{input}")` - this is saying to take the input and format it as a string inside a human message. What you want to do is pass it more directly as the entire message. You are passing all messages in already (in `chat_history`) - so why are you passing in `messages[-1]` in `input`? That should already be passed in. I would probably just remove the `("human", "{input}")` from the prompt template entirely.

Prompt Template / Agent Node / Test (screenshots)

Even completely simplifying the prompt. RUN: https://smith.langchain.com/public/3d0ce4ef-9316-405c-94b8-741a0a766970/r

When sending the image directly to the model, it receives the list of messages correctly. But when I send it with langgraph, everything is passed within the content.

ah - i see. when you call `prompt.format` it formats it to a string. you want to call `prompt.format_messages`

Ok, I am an idiot. It's working, that was it, thank you very much for the help and understanding <3

thanks for the patience, took me a while to spot 😅
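The root cause found above (`prompt.format` vs `prompt.format_messages`) can be demonstrated without LangChain. This sketch only imitates the effect: stringifying a structured multimodal message collapses its typed parts into one text blob, while keeping it as a list preserves the `image_url` part a multimodal endpoint needs. The URL is a placeholder.

```python
# A multimodal message body: a text part plus an image part.
message_content = [
    {"type": "text", "text": "Describe the image below."},
    {"type": "image_url", "image_url": {"url": "https://example.com/cat.jpg"}},
]

# What a format-to-string call effectively does: the whole structure becomes
# one string, so the API receives plain text and never sees an image part.
flattened = str(message_content)

# What a format-to-messages call preserves: a list of typed parts, which a
# multimodal endpoint can interpret as text + image.
structured = message_content
```

In the flattened form the image URL still appears *inside* the text, which matches the symptom reported above: the trace "shows" the image, but the model only ever received a string.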
gharchive/issue
2024-07-29T19:45:58
2025-04-01T06:44:45.825458
{ "authors": [ "HELIOPOTELICKI", "hwchase17" ], "repo": "langchain-ai/langgraph", "url": "https://github.com/langchain-ai/langgraph/issues/1161", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1889612811
Issue: langsmith doesn't work and can't trace any data

**Issue you'd like to raise.**

I tried langsmith in Google Colab:

```python
from langchain.chat_models import ChatOpenAI
import os

os.environ['OPENAI_API_KEY'] = 'My_OPENAI_API_KEY'
os.environ['LANGCHAIN_TRACING_V2'] = 'true'
os.environ['LANGCHAIN_ENDPOINT'] = 'https://api.smith.langchain.com'
os.environ['LANGCHAIN_API_KEY'] = 'My_LANGCHAIN_API_KEY'
os.environ['LANGCHAIN_PROJECT'] = 'default'

llm = ChatOpenAI()
llm.predict("Hello, world!")
```

Response: "Hello! How can I assist you today?"

But langsmith can't capture any data, and I didn't find errors anywhere. PS: all my API keys are valid.

**Suggestion:** No response

---

Thanks for raising this issue! Could you please confirm: What langchain / langsmith version are you using? Is your API key your personal API key or an organization API key? In your Colab notebook, try running the following:

```python
from langsmith import Client

client = Client()
url = next(client.list_runs(project_name="default")).url
print(url)
```

If that works, then the traces are available, just a bit hard to discover!

Hi, I have the same problem. I can see no traces in the langsmith app. Your code works, but the URL returns a 404 message when I open it in the browser.

Hi, I too am facing this error:

```python
openai.api_key = os.getenv("OPENAI_API_KEY")
os.environ['OPENAI_API_KEY'] = os.getenv("OPENAI_API_KEY")
os.environ["LANGCHAIN_API_KEY"] = str(os.getenv("LANGCHAIN_API_KEY"))
os.environ["LANGCHAIN_TRACING_V2"] = "true"
os.environ["LANGCHAIN_ENDPOINT"] = "https://api.smith.langchain.com"
os.environ["LANGCHAIN_PROJECT"] = str(os.getenv("LANGCHAIN_PROJECT"))
```

Here's the stack trace:

```
raise ls_utils.LangSmithConnectionError(
langsmith.utils.LangSmithConnectionError: Connection error caused failure to get http://localhost:1984/sessions in LangSmith API. Please confirm your LANGCHAIN_ENDPOINT.
ConnectionError(MaxRetryError("HTTPConnectionPool(host='localhost', port=1984): Max retries exceeded with url: /sessions?limit=1&name=Little-Guy (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7fc751dabb20>: Failed to establish a new connection: [Errno 61] Connection refused'))")
```

Hi @rakeshsarma it looks like you may be setting LANGCHAIN_ENDPOINT as http://localhost:1984 someplace before this code. Could you confirm?

Hi @hinthornw, thank you for the response. This issue was solved. I didn't do anything; langsmith started working after a few minutes.

Going to close as stale for now. Let me know if you still have issues.

@hinthornw Hi, I'm facing the same issue right now. After I deleted my test project, it stopped tracing my chains.

Same problem; yesterday it worked like a charm. Today it is not tracing anything :/

Same issue here. I still see older traces in the project, but no new traces are collected.

Same problem here; yesterday it was working like a charm. Today it is not tracing anything :/

Now it is tracing again. Problem solved for me.

Unfortunately for me, the same issue. Out of nowhere langsmith stopped tracing chain calls 😞

I have traces from 40 minutes ago and now I can't get tracing to work.

I resolved the problem by correcting the naming of my environment variables from LANGSMITH_ENDPOINT to LANGCHAIN_ENDPOINT, etc.

Hello, I encountered a similar issue. Do make sure you load the env before running the langsmith client:

```python
from dotenv import load_dotenv
from langsmith import Client

load_dotenv()
client = Client()
```

I'm having a similar issue using `list_runs` from the langsmith client.

For me, the issue was that my VSCode didn't pick up .env file changes without closing and reopening the terminal, for some reason.

For me it wasn't working when I used it with VSCode and an ipynb file - I had to reopen the file and run the cells again, and it's working fine.

If traces aren't showing and you also aren't seeing log errors like "failed to connect" or "failed to batch ingest", it is almost always because of environment misconfiguration. The environment must have tracing enabled (`LANGSMITH_TRACING=true` or `LANGCHAIN_TRACING_V2=true` - case sensitive) AND you must have your API key set. Environment lookup is cached, so if you're working in a notebook and set the environment after already running LangChain/LangSmith tracing code, you will likely have to restart your kernel.
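Putting the recurring fixes from this thread together, a working setup sets the correctly named variables before any tracing code runs (since the environment lookup is cached). The key values below are placeholders.

```python
import os

# Must run BEFORE importing/using any LangChain or LangSmith tracing code.
# Note the LANGCHAIN_ prefix: LANGSMITH_ENDPOINT was the wrong name for
# at least one commenter above.
os.environ["LANGCHAIN_TRACING_V2"] = "true"   # case-sensitive
os.environ["LANGCHAIN_ENDPOINT"] = "https://api.smith.langchain.com"
os.environ["LANGCHAIN_API_KEY"] = "My_LANGCHAIN_API_KEY"  # placeholder
os.environ["LANGCHAIN_PROJECT"] = "default"
```

In a notebook, restart the kernel after changing these; in a script that uses a .env file, call `load_dotenv()` before constructing the client.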
gharchive/issue
2023-09-11T04:05:29
2025-04-01T06:44:45.841435
{ "authors": [ "ChaoZhou2023", "Daethyra", "FiliRezGelly", "Sulomus", "andrujuanoo", "ariellasmo", "bufgix", "fahmidme", "hinthornw", "lekpeng", "leoschet", "loopdeloop76", "lury", "rakeshsarma" ], "repo": "langchain-ai/langsmith-sdk", "url": "https://github.com/langchain-ai/langsmith-sdk/issues/216", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2367721004
Better copilot introduction needed

**Description**

So right now, when you go to the chats in the web app and open one of them, you'll see a popup with information about the new copilot feature, but the problem is that it uses the same card style as you can find on the sign-in page, which is in my opinion not the greatest.

**Describe the solution**

I think a much better way to introduce the copilot would be to make some kind of tooltip that would appear right on the checkbox when you first load the chat (based on the localStorage `CapacitorStorage.copilotInstructionsSeen` property).

Related w/ #753
gharchive/issue
2024-06-22T09:51:48
2025-04-01T06:44:45.891943
{ "authors": [ "Honzoraptor31415", "xuelink" ], "repo": "langx/langx", "url": "https://github.com/langx/langx/issues/845", "license": "BSD-3-Clause", "license_type": "permissive", "license_source": "github-api" }
2327696933
Complete beginner here, I don't get it: how do I enable token polling mode?

```yaml
services:
  chat2api:
    image: lanqian528/chat2api:latest
    container_name: chat2api
    restart: unless-stopped
    ports:
      - '5005:5005'
    volumes:
      - ./data:/app/data  # mount data that needs to persist
    environment:
      - TZ=Asia/Shanghai  # set the time zone
      - ARKOSE_TOKEN_URL=http://arkose:5006/token  # built in, do not change
      - AUTHORIZATION="123456"
```

Is it like this? Is this 123456 supposed to be passed as the API key? I tested it and it doesn't work. Could someone clear up my confusion? Thanks!

Still doesn't work TAT

I can't help you with that; you didn't read the instructions carefully.

I hadn't checked my email in a long time. You're right; next time I'll read the instructions carefully. That problem has already been solved: my key was written in the wrong format. Thank you!
gharchive/issue
2024-05-31T12:08:38
2025-04-01T06:44:45.948374
{ "authors": [ "jingshaoxiang", "lanqian528" ], "repo": "lanqian528/chat2api", "url": "https://github.com/lanqian528/chat2api/issues/75", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
247439245
Windows IE and Edge Test

- [x] Nested `a` tags inside posts throw an error [Edge]
- [x] Add an nginx route for the source files referenced by the sourcemaps
- [x] The video poster introduced by Video.js does not fill the player [Edge & IE]
- [ ] The back-to-top icon disappears [IE <= 11]
- [ ] MathJax fails to load [IE <= 11]
- [x] Nested `a` tag warning; use `ref01` anchors for references [IE]
- [ ] Script error: unable to get property "add" of undefined or null reference [IE = 9]

If you are using HTTPS for your website and add CSP headers, you will want `upgrade-insecure-requests` too. IE / Edge don't support this feature, but I don't want any HTTP mixed content on my blog, so I choose to drop a bit of support for these two browsers.

- [x] IE 11: MathJax completely broken
- [x] IE 9-10: only inline MathJax broken
- [x] Edge: MathJax works
gharchive/issue
2017-08-02T15:58:34
2025-04-01T06:44:45.958777
{ "authors": [ "laozhu" ], "repo": "laozhu/hugo-nuo", "url": "https://github.com/laozhu/hugo-nuo/issues/1", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
1514647799
Parse into file, line, column

- [ ] Added an entry to CHANGELOG.md if this change could be valuable to users

Addresses https://github.com/lapce/lapce/issues/1872

You could have the lines as a separate optional argument, let clap parse it into an `Option`, and then only do the manipulation on paths if that `Option` is `Some`, in order to preserve the existing CLI behaviour, i.e.

```rust
// This allows `lapce +10 foo.txt`
#[clap(short = '+', action)]
maybe_line: Option<u32>,
paths: Vec<PathBuf>,
```

and then

```rust
if let Some(line) = cli.maybe_line {
    // your logic goes here
    let (paths, _lines, _columns) = cli.paths;
    // ...
}
```

I gotta start somewhere. I think a good place to start is to stop interpreting `:` and everything after it as part of the filename in CLI arguments. If it looks good so far, then I can start figuring out how to handle `:` and what follows it in the CLI argument.

**Codecov Report**

Merging #1886 (14f4232) into master (97d626d) will increase coverage by 0.03%. The diff coverage is 0.00%.

```
@@            Coverage Diff            @@
##           master    #1886     +/-  ##
========================================
+ Coverage    8.49%    8.52%   +0.03%
========================================
  Files         130      130
  Lines       56586    56348     -238
========================================
- Hits         4805     4804       -1
+ Misses      51781    51544     -237
```

| Impacted Files | Coverage Δ |
| --- | --- |
| lapce-data/src/data.rs | 0.00% (ø) |
| lapce-data/src/keypress/keypress.rs | 73.02% (-0.21%) :arrow_down: |
| lapce-ui/src/app.rs | 0.00% (ø) |
| lapce-rpc/src/file.rs | 0.00% (ø) |
| lapce-ui/src/title.rs | 0.00% (ø) |
| lapce-ui/src/window.rs | 0.00% (ø) |
| lapce-data/src/proxy.rs | 0.00% (ø) |
| lapce-data/src/editor.rs | 0.00% (ø) |
| lapce-data/src/update.rs | 0.00% (ø) |
| lapce-ui/src/settings.rs | 0.00% (ø) |

... and 4 more

is it good now? I applied the feedback. Looking forward to the next feedback round!

Tip: Look at https://doc.rust-lang.org/stable/std/path/struct.Path.html#method.canonicalize, and you'll find where another bug is.

Is there any way to make it more concise? PJ pointed out `clap::Arg::parse_values()`, but it's not obvious how or whether this could simplify the code.

Blocked on https://github.com/lapce/lapce/pull/1964#discussion_r1067651722
gharchive/pull-request
2022-12-30T17:00:48
2025-04-01T06:44:45.975509
{ "authors": [ "JustForFun88", "amab8901", "codecov-commenter", "joshuagawley" ], "repo": "lapce/lapce", "url": "https://github.com/lapce/lapce/pull/1886", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
136582774
MySQL on update current_timestamp There is a way to create column CURRENT_TIMESTAMP using useCurrent() method but not the CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP. It's available to set default value as raw DB value: $table->timestamp('created_at')->default(DB::raw('CURRENT_TIMESTAMP')); $table->timestamp('updated_at')->default(DB::raw('CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP')); But in this case there would be an issue with using SQLite database in unit tests, because there is no support of ON UPDATE in it. The only way I found recently is what @terdelyi described in issue #12060 If the first TIMESTAMP column in a table is not specified as NULLable and doesn't have explicit DEFAULT or ON UPDATE value specified, it automatically gets DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP as attributes. So you can swap created_at and updated_at columns and receive the hacky solution. $table->timestamp('updated_at'); $table->timestamp('created_at')->useCurrent(); Will produce: `updated_at` TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP, `created_at` TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP, I understand that Eloquent is responsive to manage timestamp columns correct values and there were said enough to use nullable() and nullableTimestamps() but what do you think about adding one more method to set default CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP to timestamp columns? There could be situations when there is a need to create such column, but there would be problems with testing :( And rely on hacky way to solve such task isn't so obvious and could bring mistakes in future. I would very much like to have this feature. Or a fix for $table->timestamps that produces the correct created_at and updated_at behaviour. 
I also ran into the same problem; I hope a useUpdate() method can be added for timestamp columns. Since this is an old issue and there don't seem to be any steps to take regarding it, I'm going to close it, but please feel free to propose any changes by opening a Pull Request or use the laravel/internals repository to post a proposal. Thanks.
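Pending such a method, the raw-expression approach from the issue can be made test-friendly by branching on the database driver, so that SQLite test runs skip the unsupported ON UPDATE clause. This is a sketch only; it assumes Eloquent keeps maintaining updated_at on the non-MySQL path:

```php
use Illuminate\Database\Schema\Blueprint;
use Illuminate\Support\Facades\DB;
use Illuminate\Support\Facades\Schema;

Schema::create('posts', function (Blueprint $table) {
    $table->increments('id');
    $table->timestamp('created_at')->useCurrent();

    if (DB::getDriverName() === 'mysql') {
        // MySQL understands the ON UPDATE clause natively.
        $table->timestamp('updated_at')
            ->default(DB::raw('CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP'));
    } else {
        // SQLite (used in unit tests) has no ON UPDATE support,
        // so fall back and let Eloquent maintain updated_at.
        $table->timestamp('updated_at')->useCurrent();
    }
});
```

This keeps the MySQL-only SQL out of the SQLite test path while still producing the desired column definition in production.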
gharchive/issue
2016-02-26T03:08:59
2025-04-01T06:44:46.004485
{ "authors": [ "a-komarev", "jonahpatriarche", "reatang", "themsaid" ], "repo": "laravel/framework", "url": "https://github.com/laravel/framework/issues/12490", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
152744428
Problem with \Illuminate\Routing\UrlGenerator::replaceRouteParameters The $path = preg_replace_callback(...) expects all parameters to be present and in the correct order. That causes problems when a certain parameter isn't given. Moreover, when the first parameter happens to be an array, you will get an Array to string conversion error. Example case: // routes.php Route::get('/route/to/{something}/{optional?}', ['as' => 'some.named.route', 'uses' => 'Controller@action']); // generating url route('some.named.route', ['q' => ['x', 'y']]); // Note that I'm not passing the "optional?" parameter. This will fail with an Array to string conversion error in \Illuminate\Routing\UrlGenerator::replaceRouteParameters. Here's my solution: Replace array_shift($parameters) with, for example, self::matchAndRemove($match, $parameters). protected static function matchAndRemove($match, array &$parameters) { preg_match('/\{([\w\d_]+)\?\}/i', $match[0], $keys); if (!isset($keys[1])) { return ''; } $key = $keys[1]; $value = ''; if (isset($parameters[$key]) && !is_array($parameters[$key])) { $value = $parameters[$key]; unset($parameters[$key]); } return $value; } This fixed the problem for me, and I would expect it to be fixed in the Laravel core. Maybe related: #12630, #12959.
gharchive/issue
2016-05-03T10:36:23
2025-04-01T06:44:46.007311
{ "authors": [ "tormit", "vlakoff" ], "repo": "laravel/framework", "url": "https://github.com/laravel/framework/issues/13416", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
252396754
[5.5] Releasing Lock correctly Laravel Version: 5.5.0-dev Description: If process A acquires a lock and takes longer than usual (longer than Lock::$seconds), the acquired lock will have expired and process B can acquire another lock. When process A finishes, it will remove the lock which has already been acquired by process B. More info & Solutions: Redis: https://redis.io/topics/distlock (Section: Correct implementation with a single instance) https://github.com/symfony/lock/blob/master/Store/RedisStore.php Memcached: http://php.net/manual/en/memcached.cas.php https://github.com/symfony/lock/blob/master/Store/MemcachedStore.php Steps To Reproduce: // process A $lock = new RedisLock($redis, 'lock', 1); $lock->block(); // A heavy process/request timeout > 1 second $lock->release(); // process B $lock = new RedisLock($redis, 'lock', 1); $lock->block(); // unsafe Changes The abstract class Lock should be changed so that the acquire method returns a unique value and the release method accepts an argument for this unique value. Or we can use another class, like Symfony\Component\Lock\Key, to keep the token. So WDYT? For suggestions and feature requests please use the https://github.com/laravel/internals repo. Basically the point here is that you should pick your lock timeout based on the process's estimated maximum execution time.
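The "correct implementation with a single instance" from the linked Redis distlock article can be sketched in plain PHP: acquire with a random token via SET ... NX EX, and release through a Lua script that deletes the key only while it still holds that token. This is an illustrative sketch against a Predis-style client, not the framework's actual RedisLock class; the class and method names are hypothetical:

```php
class SafeRedisLock
{
    // Lua script: delete the key only if it still holds our token.
    const RELEASE_SCRIPT = <<<'LUA'
if redis.call("get", KEYS[1]) == ARGV[1] then
    return redis.call("del", KEYS[1])
end
return 0
LUA;

    protected $redis;
    protected $name;
    protected $seconds;

    public function __construct($redis, $name, $seconds)
    {
        $this->redis = $redis;
        $this->name = $name;
        $this->seconds = $seconds;
    }

    // Returns a unique token on success, or null if the lock is held.
    public function acquire()
    {
        $token = bin2hex(random_bytes(16));

        // SET key token EX seconds NX: atomic "create with expiry if absent".
        $acquired = $this->redis->set($this->name, $token, 'EX', $this->seconds, 'NX');

        return $acquired ? $token : null;
    }

    // Releases the lock only if we still own it, so process A can no
    // longer remove a lock that process B has since acquired.
    public function release($token)
    {
        return (bool) $this->redis->eval(self::RELEASE_SCRIPT, 1, $this->name, $token);
    }
}
```

With this shape, acquire() returning a token maps directly onto the proposed change of having Lock::acquire() return a unique value that release() must receive.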
gharchive/issue
2017-08-23T19:51:35
2025-04-01T06:44:46.013366
{ "authors": [ "alibo", "themsaid" ], "repo": "laravel/framework", "url": "https://github.com/laravel/framework/issues/20709", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
279579365
When page expires, a misleading message is being displayed Laravel Version: 5.5.# PHP Version: 7.1 Database Driver & Version: Description: The page has expired due to inactivity. Please refresh and try again. I just feel that this is very misleading, because refreshing never helps - you just get the same message - you are stuck in an infinite loop for eternity. Steps To Reproduce: Stay logged in on an auth page for a long time and then try to submit a form (the token expires). "Refresh and try again" is definitely the wrong suggestion in this case. What do you think is a better message? Technically though if you reload the page, you no longer have a session - so your app should redirect them to the auth page? Not in this situation. For example, if you are on the login page and you submit the form after the session expires, you get this message. Refreshing the page simply resends the POST login attempt, giving the same error ad infinitum. You have to actually submit a new GET request to the login page before you'll be able to login again. I've always just handled this at the app level so the message is never displayed, but can see how it can be confusing out-of-the-box. - Please refresh and try again. + Please revisit the page and try again. ? Maybe Laravel could set a Cookie or store some session data, and then when the user refreshes, it redirects to the correct GET route? The whole problem is this page could be triggered by an expired session (and therefore not CSRF) - so that won't work. For suggestions and feature requests please use the https://github.com/laravel/internals repo. I feel like it qualifies as a bug to intentionally show a message to a user that directs them to do something that can't possibly work. Seems like you could provide a "click here" link to the previous url and tweak the wording to say ... "try" refreshing the page or click here. Or simply remove the misleading instruction altogether. You can set this in your app though.
The page is designed to be overridden by creating a file at resources/views/errors/419.blade.php, so just make it whatever you want. Absolutely, and that's why I've never run across this issue. I handle this in a completely custom way. It's just strange that the default suggestion doesn't work and may as well say "blink 3 times and cough"; it'll accomplish the same thing. However, no reason to continue the conversation. It definitely looks better than the previous default. @laurencei Yeah, this 419.blade solves my issue; I can work from here easily. Of course it could still be a little more user friendly out of the box, but maybe this issue will have some impact on future changes regarding this "misleading feedback". Anyway, thanks for the great suggestion.
gharchive/issue
2017-12-06T00:20:38
2025-04-01T06:44:46.021623
{ "authors": [ "CupOfTea696", "devcircus", "laurencei", "neorganic", "themsaid" ], "repo": "laravel/framework", "url": "https://github.com/laravel/framework/issues/22315", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
314719298
[5.5] Transactions do not commit if a sub-transaction has started Laravel Version: 5.5.40 PHP Version: 7.1.16 Database Driver & Version: MySQL 5.6.39 Description: Laravel does not commit transactions if a sub-transaction has begun, nor does it give the ability to commit only a specific sub-transaction Steps To Reproduce: I have code that essentially looks like this: DB::beginTransaction(); $this->doAThing(); DB::commit(); public function doAThing() { // Do something here Event::fire(new MyEvent($this->thing, $something)); } class MyEventListener { public function handle($event) { DB::beginTransaction(); // Do something here DB::commit(); } } When all of my code finishes, a ROLLBACK is performed by the server because no COMMIT was actually run. Because of the nature of DB::commit, commits only happen if we're at the top-level transaction. This is specifically a problem when using the sync QUEUE_DRIVER, but could cause issues elsewhere. Looking at the code, I'm fairly certain that 5.6 would be affected by this as well. If transactions are not allowed to overlap, shouldn't a second call to DB::beginTransaction() trigger an error of some kind? Currently, Laravel creates a SAVEPOINT if they're supported. But there's no way in code to say that I want to commit a savepoint, only roll one back. Transactions are allowed to be nested, but not overlap. Perhaps I'm using the wrong word to explain it. Think of xml; you can have <a><b></b></a> but not <a><b></a></b>. The logic is that all nested transactions must complete successfully to have the outer one succeed. Based on that, the nested transactions don't really do anything on their own; if any of them fails, the outer one will fail. They are, however, very nice to have, letting any code act as if it does things transactionally without having to check whether there's an active transaction or not.
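The nesting semantics described here, where inner "transactions" become savepoints and only the outermost COMMIT persists anything, are easiest to see with the closure API. A sketch:

```php
use Illuminate\Support\Facades\DB;

DB::transaction(function () {
    DB::table('orders')->insert(['status' => 'pending']);

    // Nested call: Laravel issues a SAVEPOINT here, not a new BEGIN.
    DB::transaction(function () {
        DB::table('order_items')->insert(['qty' => 1]);
    });
    // Leaving the nested closure does not COMMIT anything; the single
    // real COMMIT runs only when this outer closure finishes. If the
    // inner closure throws, the whole outer transaction rolls back.
});
```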
gharchive/issue
2018-04-16T16:00:01
2025-04-01T06:44:46.026987
{ "authors": [ "gms8994", "sisve" ], "repo": "laravel/framework", "url": "https://github.com/laravel/framework/issues/23902", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
534417854
Blade directive using quoted parameter value Laravel Version: 6.6.2 PHP Version: 7.2.25 Description: When I declare a custom blade directive, the argument's value is wrapped in quotes (''), which is not what I want. For example, if I pass the string 'gdpr-cookie-notice/script.js' as the argument, I'm expecting $script to be "gdpr-cookie-notice/script.js" and not "'gdpr-cookie-notice/script.js'" when using dd(); Steps To Reproduce: I have made a BladeServiceProvider to declare some directives like this one: public function boot() { Blade::directive('script', function ($script) { dd($script); }); } and I use it in my views like this: @script('gdpr-cookie-notice/script.js') This is the way custom directives work. You don't pass in arguments which you can immediately use; you pass in an expression which you compile into PHP code for execution. See the docs here: https://laravel.com/docs/6.x/blade#extending-blade That should do it. Thanks for the reply.
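To illustrate the reply: the directive callback receives the raw expression text, quotes included, and must return PHP code that gets written into the compiled view. A minimal sketch of the @script directive from the issue (using asset() here is just an assumed behavior, not something from the original report):

```php
use Illuminate\Support\Facades\Blade;

// In a service provider's boot() method.
Blade::directive('script', function ($expression) {
    // $expression is the literal source text "'gdpr-cookie-notice/script.js'",
    // quotes included. Return PHP code instead of using the value directly;
    // when the view renders, the quotes are just part of a string literal.
    return "<?php echo asset({$expression}); ?>";
});
```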
gharchive/issue
2019-12-07T15:52:50
2025-04-01T06:44:46.031219
{ "authors": [ "driesvints", "ionesculiviucristian" ], "repo": "laravel/framework", "url": "https://github.com/laravel/framework/issues/30785", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2484703079
Broadcasting via Pusher failing in production Laravel Version 11.21.0 PHP Version 8.3.0 Database Driver & Version 10.6.18-MariaDB-cll-lve - MariaDB Server Description Broadcasting works in development on a Windows machine but fails in production with the following errors in the console: app-wRZfXujE.js:13 WebSocket connection to 'wss://ws-.pusher.com/app/?protocol=7&client=js&version=8.4.0-rc2&flash=false' failed: dashboard:1 Access to XMLHttpRequest at 'https://sockjs.pusher.com/pusher/app//56/78aop8kj/xhr_streaming?protocol=7&client=js&version=8.4.0-rc2&t=1724514514797&n=1' from origin 'https://mydomain.com' has been blocked by CORS policy: Response to preflight request doesn't pass access control check: No 'Access-Control-Allow-Origin' header is present on the requested resource. app-wRZfXujE.js:12 POST https://sockjs.pusher.com/pusher/app//56/78aop8kj/xhr_streaming?protocol=7&client=js&version=8.4.0-rc2&t=1724514514797&n=1 net::ERR_FAILED dashboard:1 Access to XMLHttpRequest at 'https://sockjs.pusher.com/pusher/app//226/sqf2rv95/xhr?protocol=7&client=js&version=8.4.0-rc2&t=1724514518806&n=2' from origin 'https://mydomain.com' has been blocked by CORS policy: Response to preflight request doesn't pass access control check: No 'Access-Control-Allow-Origin' header is present on the requested resource.
app-wRZfXujE.js:12 POST https://sockjs.pusher.com/pusher/app//226/sqf2rv95/xhr?protocol=7&client=js&version=8.4.0-rc2&t=1724514518806&n=2 net::ERR_FAILED Here is what my setup looks like: .env BROADCAST_CONNECTION=pusher PUSHER_APP_ID="xxxx" PUSHER_APP_KEY="xxxxx" PUSHER_APP_SECRET="xxxx" PUSHER_HOST= PUSHER_PORT=443 PUSHER_SCHEME="https" PUSHER_APP_CLUSTER="eu" VITE_APP_NAME="${APP_NAME}" VITE_PUSHER_APP_KEY="${PUSHER_APP_KEY}" VITE_PUSHER_HOST="${PUSHER_HOST}" VITE_PUSHER_PORT="${PUSHER_PORT}" VITE_PUSHER_SCHEME="${PUSHER_SCHEME}" VITE_PUSHER_APP_CLUSTER="${PUSHER_APP_CLUSTER}" echo.js import Echo from 'laravel-echo'; import Pusher from 'pusher-js'; window.Pusher = Pusher; window.Echo = new Echo({ broadcaster: 'pusher', key: import.meta.env.VITE_PUSHER_APP_KEY, cluster: import.meta.env.VITE_PUSHER_APP_CLUSTER, forceTLS: true }); bootstrap/app.php <?php declare(strict_types=1); use Illuminate\Foundation\Application; use Illuminate\Foundation\Configuration\Exceptions; use Illuminate\Foundation\Configuration\Middleware; return Application::configure(basePath: dirname(__DIR__)) ->withRouting( web: __DIR__ . '/../routes/web.php', channels: __DIR__ . '/../routes/channels.php', commands: __DIR__ . '/../routes/console.php', health: '/up', ) ->withMiddleware(function (Middleware $middleware): void {}) ->withExceptions(function (Exceptions $exceptions): void {})->create(); Steps To Reproduce Not sure it applies, but my settings are in the description. From what I'm seeing, this doesn't look like a Laravel issue. It's just an attempt to POST data directly from your front-end app to sockjs.pusher.com (which doesn't seem to add the correct Access-Control-Allow-Origin header). I found this link talking about it (not directly about WebSockets). But I think in the current situation you can't do anything other than consume the service from your client side, i.e. by going through your own back-end server; see details: https://laravel.com/docs/11.x/broadcasting#client-pusher-channels
@noefleury I have updated the issue; I omitted the listening on the frontend, but it was there all along. I also added the following line enabledTransports: ['ws', 'wss'] and my CORS error seems to go away, but I still have this error: app-X48mdDuV.js:13 WebSocket connection to 'wss://ws-.pusher.com/app/?protocol=7&client=js&version=8.4.0-rc2&flash=false' failed: I guess it's similar to these issues: https://github.com/laravel/reverb/issues/153 and https://github.com/laravel/reverb/issues/78
gharchive/issue
2024-08-24T16:19:19
2025-04-01T06:44:46.039917
{ "authors": [ "noefleury", "thecyrilcril" ], "repo": "laravel/framework", "url": "https://github.com/laravel/framework/issues/52570", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
159919106
[5.1] BeanstalkdJob bury function deletes jobs unexpectedly The job will be deleted whenever the bury function is called. Because the release flag is not marked, every job going to bury will eventually be deleted. if (! $job->isDeletedOrReleased()) { $job->delete(); } I can find no indication that releasing is needed before burying. Please send a link to documentation that states this is necessary. Yes, no releasing is needed; it's just a workaround so that $release is set to true and the job won't be deleted. The bug is that the job is never actually buried; it just gets deleted in the end. I'm not sure releasing it is the answer then. We should solve the problem in another way. Yes. Thanks Taylor. It affects 5.2 as well. Should we add bury and isBuried functions to the Job class? But only Beanstalkd has this bury function.
gharchive/pull-request
2016-06-13T10:29:17
2025-04-01T06:44:46.042698
{ "authors": [ "taylorotwell", "yewjs" ], "repo": "laravel/framework", "url": "https://github.com/laravel/framework/pull/13963", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
230919244
Improved support for custom/changing directory structures The current ServiceProvider implementation assumes that Laravel is configured in its default state, and the way the publishes() method works makes it fairly dependent on the default application structure as described in Laravel's documentation. For example, because the documentation says to use: $this->publishes([ __DIR__.'/path/to/views' => resource_path('views/vendor/courier'), ]); If Laravel ever decides to move the default views/ directory to somewhere else—say base_path('views')—all existing packages will have to update and drop backwards compatibility. This PR adds the following new ServiceProvider methods: publishesViews($path, $namespace, $group = null) publishesConfig($path, $namespace = null, $group = null) publishesTranslations($path, $namespace, $group = null) publishesPublicAssets($path, $namespace, $group = null) publishesMigrations($path, $group = null) Because each of these methods determines where to publish the files, they can easily be updated as the framework grows/changes over time. The other upside is that we can use the currently configured paths instead of the framework defaults where appropriate. So, in the example of views, if your config('view.paths') is set to base_path('views'), calling publishesViews(__DIR__.'/views', 'courier') will automatically publish to views/vendor/courier instead of resources/views/vendor/courier. The PR also adds a $publish flag to: mergeConfigFrom($path, $key, $publish = false) loadViewsFrom($path, $namespace, $publish = false) Which will automatically call publishesConfig and publishesViews for you respectively. Finally, this PR updates the loadViewsFrom method to use the config('view.paths') value rather than hard coding $this->app->resourcePath().'/views/vendor/'.$namespace. 
This is important because currently there is no way to override vendor view files if you've put your views directory anywhere other than resources/views/ (unless you manually add a call to addNamespace() for each package you've installed). This PR still needs to have tests added and will probably need some code style changes. I know that @taylorotwell closed https://github.com/laravel/framework/pull/18755 saying that you should just call loadViewsFrom multiple times, but that's not possible with 3rd party packages. Right now, the only way to load custom vendor views if your view path isn't at the default location is to call addNamespace for each package you have installed: view()->addNamespace('foo', base_path('views/vendor/foo')); view()->addNamespace('bar', base_path('views/vendor/bar')); // ... Actually, upon further investigation, you can't call addNamespace() inside your AppServiceProvider because the vendor service provider's call to loadViewsFrom will register the vendor views above the app views. So right now, there's no way to load vendor views from anywhere but resources/views/vendor/* without some serious hacks. Regarding your last comment, that depends on the order of your service providers and is easily fixed by adjusting the order. @taylorotwell good point. I guess my underlying point is that it's not a particularly easy thing to handle, and the fix is relatively benign. I personally find that for smaller teams, where there's not a huge separation between designers and developers, having views as a top-level directory is a lot nicer to work with. I imagine that I'm not alone in this, so it seems like it'd make sense for the framework to support that use-case without too much fuss. OK, I dropped the $publish parameter from loadViewsFrom and mergeConfigFrom because it could cause issues with boot vs register. At this point, I think this PR is ready to merge if y'all are good with it. This doesn't look quite right. 
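Based on the behavior described above, a minimal sketch of what the proposed publishesViews() might look like internally, resolving the publish target from config('view.paths') rather than a hard-coded resources/views (the method body is illustrative, not the actual PR code):

```php
// Inside Illuminate\Support\ServiceProvider (proposed method; sketch only).
protected function publishesViews($path, $namespace, $group = null)
{
    // Prefer the first configured view path, so applications that have
    // moved their views directory still receive vendor overrides there.
    $paths = $this->app['config']->get('view.paths', []);
    $target = rtrim($paths[0] ?? $this->app->resourcePath('views'), '/');

    $this->publishes([$path => $target.'/vendor/'.$namespace], $group);
}
```

Because the target is computed here rather than in each package, the framework could later relocate the default views directory without breaking package service providers.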
You have that applicationNamespacedRegistered flag but it is only for one namespace, it will never be called again for any other namespaces. Seems wrong. Feel like this whole thing could be a bit simpler.
gharchive/pull-request
2017-05-24T04:45:15
2025-04-01T06:44:46.052915
{ "authors": [ "inxilpro", "taylorotwell" ], "repo": "laravel/framework", "url": "https://github.com/laravel/framework/pull/19321", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
303160429
[5.6] Fix dot notation in JSON translations @taylorotwell @sisve @tillkruss I previously used translations in the usual way, through php arrays. Dot notation works well, but at one stage of development I needed to build a js client with prebuilt translations, so that I would not fetch them from the backend on every client boot and could store them in local storage. At this point, I decided it was better to export the php arrays to a json file, but I ran into the problem that the current method of getting the value from the key __('key') ... (new Translator)->getFromJson() only works at the single root level of the json object, and if the key is not found in json, it starts searching the php translations. My php source: // resources/lang/en/emails.php return [ 'hello' => 'Hello', 'account' => [ 'password_changed' => [ 'text' => 'You have succesfully changed your password.', 'title' => 'Your password has been changed.', ], ], ]; Its conversion to json: // resources/lang/en.json { "emails": { "hello": "Hello", "account": { "password_changed": { "text": "You have succesfully changed your password.", "title": "Your password has been changed." } } } } BUT: If we call __('emails.account.password_changed.text'), we get the php version of the translations, because this key is not found in json. OK, for example, create a new one: // resources/lang/en.json { "emails": { "account": { "password_changed": { "text": "1" } }, "account.password_changed.text": "2" }, "emails.account.password_changed.text": "3" } and call __('emails.account.password_changed.text'); the output will be 3. OK, removing the key with value 3: // resources/lang/en.json { "emails": { "account": { "password_changed": { "text": "1" } }, "account.password_changed.text": "2" } } and call __('emails.account.password_changed.text') again; the output will be Your password has been changed., neither 1 nor 2.
The key is not found in json, so the php value is fetched instead. To fix this, I just used the same Arr::get() method that the php implementation uses in the Translator class's get method. I also wrote tests showing that getting a value from the root level works the same as before. And I can already imagine what the .json files look like for those who currently use this feature: { "activity.columns.description": "Description", "attachments.upload_files": "Upload files", "emails.account.password_changed.text": "You have succesfully changed your password.", "emails.account.password_changed.title": "Your password has been changed." } Single-level json; to me it looks like hell. Conclusion: This feature was introduced in 5.4, but it was not covered with enough tests and was never completed. It's too late to fix this in 5.4, but we can still fix the situation; 5.5 LTS can still be edited. I suggest this is not a new feature, but a fix for something previously introduced in a stable release and never finished #23405, #23392 Dot notation in JSON translations doesn't make any sense.
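The proposed fix amounts to running the decoded JSON lines through the same Arr::get() dot-notation lookup that the php loader already uses. A rough sketch, not the actual Translator patch; notably, Arr::get() checks for a literal key match before traversing dot segments, which matches the precedence shown in the examples above (a literal "emails.account.password_changed.text" key wins over the nested structure):

```php
use Illuminate\Support\Arr;

$lines = json_decode(file_get_contents(resource_path('lang/en.json')), true);

// Literal "emails.account.password_changed.text" key is checked first,
// then the nested structure is traversed segment by segment.
$text = Arr::get($lines, 'emails.account.password_changed.text');
```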
gharchive/pull-request
2018-03-07T16:11:06
2025-04-01T06:44:46.060293
{ "authors": [ "VRuzhentsov", "taylorotwell" ], "repo": "laravel/framework", "url": "https://github.com/laravel/framework/pull/23426", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
303639291
[5.6] Add orWhere builder methods for day, month and year Hello, This adds some small useful methods to the query builder: orWhereDay orWhereMonth orWhereYear @ottoszika - could you submit a PR to https://github.com/laravel/docs for this?
gharchive/pull-request
2018-03-08T21:31:15
2025-04-01T06:44:46.062926
{ "authors": [ "laurencei", "ottoszika" ], "repo": "laravel/framework", "url": "https://github.com/laravel/framework/pull/23449", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
455981515
[5.8] Assert that the session contains a given piece of data using a closure Just like $response->assertViewHas() accepts a closure as a second parameter which is called to evaluate the contents of the view data, $response->assertSessionHas() could implement a similar behavior. What are you trying to test in your application that requires this? Sessions generally only contain simple values? Good question. I am actually storing an eloquent model in the session, so I can later retrieve it in a controller that uses Socialite. This model will then be associated with the Socialite user. The reason I'm not storing only the id of the model is that there are multiple models that can be associated with Socialite users. E.g. App\User, App\Team. I feel this is not the best way to achieve this, but I haven't found an elegant way for this use case: Example A: As a User I want to connect my personal Github account for myself. Example B: As a User I want to connect my company's Github account for my Team. In my tests I want to assert that the model in the session is the one I expect, so I would like to do something like this: // Before $response->assertSessionHas('account'); $this->assertTrue($user->is(session('account'))); // After $response->assertSessionHas('account', function ($account) use ($user) { return $account->is($user); }); Thank you for merging this! Do you have any better suggestions than putting eloquent models in the session for such cases?
gharchive/pull-request
2019-06-13T22:34:48
2025-04-01T06:44:46.066552
{ "authors": [ "sebdesign", "taylorotwell" ], "repo": "laravel/framework", "url": "https://github.com/laravel/framework/pull/28837", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
793672899
[8.x] PHPUnit 10 Support Pending PHPUnit v10 release. Maybe we should do the same actions trick you did for browserkit, here @driesvints? @driesvints phpunit.xml.dist will be renamed to phpunit.dist.xml See https://github.com/sebastianbergmann/phpunit/commit/07a022ad0548823b04c8fab073a7bff2fbcf9c8c @nuernbergerA we can't rename it until we remove support for older PHPUnit versions. @crynobone yeah you are right, maybe it's a thing for 9.x 9.x still needs to support PHP 7.4, which PHPUnit 10 doesn't support. @crynobone @nuernbergerA even so: it doesn't seem to me that the old convention will be removed? @driesvints I think it will be removed in V12, so there is plenty of time Gonna close this until there's more news on PHPUnit 10's release date.
gharchive/pull-request
2021-01-25T19:43:29
2025-04-01T06:44:46.070602
{ "authors": [ "GrahamCampbell", "crynobone", "driesvints", "nuernbergerA" ], "repo": "laravel/framework", "url": "https://github.com/laravel/framework/pull/36043", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1076436221
[8.x] Enable defining an assumed role in Laravel's filesystem configuration file. Currently, in order to assume a role and interact with S3 storage through the Laravel filesystem S3 driver, using the AWS credentials file ~/.aws/credentials is necessary. In this way, the AWS SDK default credential provider chain is used. cf. Assume Role with Profile This PR enables defining an assumed role in Laravel's filesystem configuration file config/filesystems.php. The environment variables handled by this enhancement are also consistent with the naming conventions used by the AWS CLI. Variables (required marked with *): role_arn *, role_session_name *, source_profile, external_id I considered automatically completing 'role_session_name' if omitted, but decided to keep it required. Both of the following configuration styles are valid. AWS CLI compatible 'default' => [ 'key' => env('AWS_ACCESS_KEY_ID'), 'secret' => env('AWS_SECRET_ACCESS_KEY'), 'region' => env('AWS_DEFAULT_REGION'), ], 's3' => [ 'driver' => 's3', 'region' => env('AWS_DEFAULT_REGION'), 'bucket' => env('AWS_BUCKET'), 'source_profile' => 'default', 'role_arn' => env('AWS_ROLE_ARN'), 'role_session_name' => env('AWS_ROLE_SESSION_NAME'), ], All in one place In the AWS CLI, if 'role_arn' is specified, 'source_profile' or 'credential_source' is required, but this enhancement deliberately does not comply with this. 's3' => [ 'driver' => 's3', 'key' => env('AWS_ACCESS_KEY_ID'), 'secret' => env('AWS_SECRET_ACCESS_KEY'), 'region' => env('AWS_DEFAULT_REGION'), 'bucket' => env('AWS_BUCKET'), 'role_arn' => env('AWS_ROLE_ARN'), 'role_session_name' => env('AWS_ROLE_SESSION_NAME'), ], // I have tested this commit, but I didn't add test code, because an AWS environment is necessary for testing. I'm not sure I'm comfortable understanding this and maintaining it. You can still achieve this without a core change by defining your own S3 configuration extension: https://laravel.com/docs/8.x/filesystem#custom-filesystems
gharchive/pull-request
2021-12-10T05:12:02
2025-04-01T06:44:46.076478
{ "authors": [ "fuga", "taylorotwell" ], "repo": "laravel/framework", "url": "https://github.com/laravel/framework/pull/39971", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1100550984
Change default behavior of unvalidated array keys Makes this the default behavior; a new method is added to opt in to the old behavior for easy backwards compatibility. Off topic, but Model::preventLazyLoading(! app()->isProduction()); should be added to the skeleton as well
gharchive/pull-request
2022-01-12T16:35:46
2025-04-01T06:44:46.078172
{ "authors": [ "ankurk91", "taylorotwell" ], "repo": "laravel/framework", "url": "https://github.com/laravel/framework/pull/40368", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1843181385
[10.x] Introduce Custom Validation Rules for Model and Model Fields with OnCreateRules and OnUpdateRules Attributes Attributes allow adding structured, machine-readable metadata to declarations. Database fields have types to define the kind of data that can be stored in them and to ensure data integrity and consistency. We usually do these types of setups in migrations. Different types represent different kinds of data, such as numbers, strings, dates, and more. By specifying the type of a field, the database can perform validation and enforce rules to make sure that only valid and appropriate data is stored in that field. This helps prevent errors, maintain data accuracy, and improve query performance. Additionally, field types play a role in optimizing the storage and retrieval of data in the database. By indicating the field types within migrations, we are effectively ensuring adherence to data integrity standards as data is stored; however, migrations are only executed once. Enabling users to update and adapt validation rules within the model offers a valuable advantage in responding to evolving application requirements. This feature empowers developers to easily customize validation rules based on changing business needs, without the need to modify the database structure. By providing a dynamic and agile approach to validation, developers can ensure data integrity while maintaining the flexibility to accommodate new regulations and user demands. This capability enhances the efficiency and adaptability of the validation process, ultimately contributing to a more robust and responsive application. Added: Introduced a new validation improvement for Eloquent models. Added the ability to define validation rules using custom attributes: OnCreateRules and OnUpdateRules. Added the validateUsing method in the Model class to support custom validation rules. 
Added property protected array $rules; Changed: Updated the booted method in the Model class to automatically validate on creation and updating. Added handling for validateUsing inside the __callStatic method so it can be called both dynamically and statically Context: This enhancement improves data validation for Eloquent models by allowing developers to define specific validation rules using custom attributes. The new attributes, OnCreateRules and OnUpdateRules, enable you to tailor validation rules for model creation and updates, respectively. By using these attributes, developers can fine-tune validation for individual models, properties, or attributes, giving more control over data integrity. This enhancement helps streamline validation logic and promotes better code organization. Examples: Validation on Creation: use Illuminate\Database\Eloquent\Validations\OnCreateRules; #[OnCreateRules(['name' => 'required', 'email' => 'required|email'])] class User extends Model { // ... } Validation on Update: use Illuminate\Database\Eloquent\Validations\OnUpdateRules; #[OnUpdateRules(['email' => 'required|email'])] class User extends Model { // ... } Applying both attributes to the Model: use Illuminate\Database\Eloquent\Validations\OnCreateRules; use Illuminate\Database\Eloquent\Validations\OnUpdateRules; #[OnUpdateRules(['email' => 'required|email'])] #[OnCreateRules(['name' => 'required', 'email' => 'required|email'])] class User extends Model { // ... 
} Using on model fields: use Illuminate\Database\Eloquent\Validations\OnUpdateRules; #[OnUpdateRules(['email' => 'required|email'])] #[OnCreateRules(['name' => 'required', 'email' => 'required|email'])] class User extends Model { #[OnUpdateRules(['required'])] --> this is also valid protected string $name; #[OnUpdateRules(['email' => 'required|email'])] --> This is valid as long as the key matches the property #[OnCreateRules('required|email')] --> this is also valid protected string $email; } Custom Rules with Validation: $product = Product::validateUsing(function ($rules) { $rules['price'] = 'numeric|min:0'; return $rules; })->create($data); Automatic Validation on Creation and Updating: protected static function booted() { static::creating(function (self $model) { $model->getModelRules(useOnCreateRules: false); $model->getPropertyRules(useOnCreateRules: false); $model->validate(); }); static::updating(function (self $model) { $model->getModelRules(); $model->getPropertyRules(); $model->validate(); }); } Note: This pull request is a work in progress and is being submitted for initial review and feedback. Further improvements and adjustments will be made based on the feedback received. Please note that this does not force any developer to use it; validation only starts taking effect once the attributes are applied to the model or its properties. @taylorotwell Why are you closing PRs that add good features to the framework? This PR is surely better than removing a few files and adding a few variables to the env! ( Slim skeleton ) @taylorotwell I still feel that we need this feature in Laravel; this can be very useful for day-to-day use
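The attribute-plus-lifecycle-hook pattern the PR describes is easiest to see stripped of Laravel specifics. Below is a rough Python analogue, not the PR's actual PHP API: all names (on_create_rules, Model.create, the toy required/email checks) are illustrative. Create-time and update-time rules are attached declaratively and enforced by lifecycle hooks.

```python
# Hypothetical Python analogue of OnCreateRules/OnUpdateRules; the real
# feature targets PHP/Laravel Eloquent models, and the rule engine here is
# a deliberately tiny stand-in for Laravel's validator.

class ValidationError(Exception):
    pass

def on_create_rules(rules):
    """Attach create-time rules to a model class (analogue of #[OnCreateRules])."""
    def wrap(cls):
        cls._create_rules = dict(rules)
        return cls
    return wrap

def on_update_rules(rules):
    """Attach update-time rules to a model class (analogue of #[OnUpdateRules])."""
    def wrap(cls):
        cls._update_rules = dict(rules)
        return cls
    return wrap

class Model:
    _create_rules: dict = {}
    _update_rules: dict = {}

    def __init__(self, **attrs):
        self.attrs = attrs

    def _validate(self, rules):
        for field, rule in rules.items():
            value = self.attrs.get(field)
            if "required" in rule and not value:
                raise ValidationError(f"{field} is required")
            if "email" in rule and value and "@" not in value:
                raise ValidationError(f"{field} must be an email")

    def create(self):  # analogue of the static::creating() lifecycle hook
        self._validate(self._create_rules)
        return self

    def update(self, **changes):  # analogue of the static::updating() hook
        self.attrs.update(changes)
        self._validate(self._update_rules)
        return self

@on_create_rules({"name": "required", "email": "required|email"})
@on_update_rules({"email": "required|email"})
class User(Model):
    pass
```

With this sketch, `User(name="Ada", email="ada@example.com").create()` passes, while omitting `name` raises on create and a malformed email raises on update, mirroring the per-operation rule sets in the PR.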
gharchive/pull-request
2023-08-09T12:55:42
2025-04-01T06:44:46.089318
{ "authors": [ "ntiyiso-rikhotso", "sobhan-m94" ], "repo": "laravel/framework", "url": "https://github.com/laravel/framework/pull/48010", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
583843185
Handle install, upgrade and rollback failure The Slack Bot should send a message when an install, upgrade, or other operation fails. It should clearly indicate that the operation has failed and it should attempt to provide debugging information. There should be a repeatable mechanism provided for forcing an install or upgrade into a failure so that this feature can be tested. The full list of Helm statuses is here. Here's a reliable way to cause a failure: » helm upgrade kubewise -n kubewise larder/kubewise --set slack.token="<api-token>" » helm install flux fluxcd/flux --set git.url=git@github.com:fluxcd/flux-get-started --namespace flux --set image.tag="xxx" --wait --timeout 10s Error: timed out waiting for the condition » helm history flux -n flux REVISION UPDATED STATUS CHART APP VERSION DESCRIPTION 1 Sat Mar 28 17:53:09 2020 failed flux-1.2.0 1.18.0 Release "flux" failed: timed out waiting for the condition
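The detection side of this feature boils down to mapping a release's Helm status to a notify/ignore decision. A minimal Python sketch follows, assuming the bot polls release statuses; which statuses count as failures here is an assumption for illustration, not KubeWise's actual logic.

```python
# Sketch: decide whether a Helm release status should trigger a failure
# message. The status names come from Helm's documented set; the split into
# "alert" vs "ok" buckets is an assumption, not the bot's real behavior.

FAILURE_STATUSES = {"failed", "pending-rollback"}   # assumed to warrant an alert
SUCCESS_STATUSES = {"deployed", "superseded"}

def should_notify_failure(status: str) -> bool:
    """True when a release status should produce a failure notification."""
    return status.lower() in FAILURE_STATUSES

def format_message(release: str, namespace: str, status: str) -> str:
    """Build a Slack-style message, pointing at `helm history` for debugging."""
    if should_notify_failure(status):
        return (f":x: Release {release} in {namespace} {status} - "
                f"check `helm history {release} -n {namespace}`")
    return f":white_check_mark: Release {release} in {namespace} is {status}"
```

For the repro above, `format_message("flux", "flux", "failed")` would direct readers to the same `helm history flux -n flux` command shown in the issue.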
gharchive/issue
2020-03-18T16:13:19
2025-04-01T06:44:46.161954
{ "authors": [ "dtuite" ], "repo": "larderdev/kubewise", "url": "https://github.com/larderdev/kubewise/issues/10", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1503585539
Updated README.md This pull request fixes some errors in the tutorial and gives people a little more control over their lights. Changelog: [Line 41] Replaced 2020 with 2022 [Line 70] Replaced pip with pip3 for Python 3 [Line 95] Replaced python with python3 for Python 3 [Line 109] Added info about changing the default color @larochefoucald Thank you @larochefoucald ! I love contributing and helping with what I can, even though I am semi-new to coding and new to GitHub
gharchive/pull-request
2022-12-19T20:55:58
2025-04-01T06:44:46.168223
{ "authors": [ "AidanR8459" ], "repo": "larochefoucald/WizHook", "url": "https://github.com/larochefoucald/WizHook/pull/4", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1609871679
Update cloudwatch, dynamodb, kinesis, s3, sns, ... to 2.20.17 Updates software.amazon.awssdk:cloudwatch software.amazon.awssdk:dynamodb software.amazon.awssdk:kinesis software.amazon.awssdk:s3 software.amazon.awssdk:sns software.amazon.awssdk:sqs from 2.20.11 to 2.20.17. I'll automatically update this PR to resolve conflicts as long as you don't change it yourself. If you'd like to skip this version, you can just close this PR. If you have any feedback, just mention me in the comments below. Configure Scala Steward for your repository with a .scala-steward.conf file. Have a fantastic day writing Scala! Adjust future updates Add this to your .scala-steward.conf file to ignore future updates of this dependency: updates.ignore = [ { groupId = "software.amazon.awssdk" } ] Or, add this to slow down future updates of this dependency: dependencyOverrides = [{ pullRequests = { frequency = "30 days" }, dependency = { groupId = "software.amazon.awssdk" } }] labels: library-update, early-semver-patch, semver-spec-patch, commit-count:1 Superseded by #1001.
gharchive/pull-request
2023-03-04T17:42:10
2025-04-01T06:44:46.179214
{ "authors": [ "scala-steward" ], "repo": "laserdisc-io/fs2-aws", "url": "https://github.com/laserdisc-io/fs2-aws/pull/999", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1099668209
Update s3 to 2.17.109 Updates software.amazon.awssdk:s3 from 2.17.107 to 2.17.109. I'll automatically update this PR to resolve conflicts as long as you don't change it yourself. If you'd like to skip this version, you can just close this PR. If you have any feedback, just mention me in the comments below. Configure Scala Steward for your repository with a .scala-steward.conf file. Have a fantastic day writing Scala! Ignore future updates Add this to your .scala-steward.conf file to ignore future updates of this dependency: updates.ignore = [ { groupId = "software.amazon.awssdk", artifactId = "s3" } ] labels: library-update, early-semver-patch, semver-spec-patch, commit-count:1 Superseded by #681.
gharchive/pull-request
2022-01-11T22:18:24
2025-04-01T06:44:46.182170
{ "authors": [ "scala-steward" ], "repo": "laserdisc-io/tamer", "url": "https://github.com/laserdisc-io/tamer/pull/678", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2014782011
Undocumented use of the BoldFont option? Description The BoldFont = option seems to work, but the documentation does not mention it. Is this working as intended, or should it not work properly? Add info or delete as appropriate: I see that there are people who do use this option, e.g., here. Minimal example demonstrating the issue \documentclass{article} \usepackage{unicode-math} \setmathfont{XITSMath-Regular.otf}[ BoldFont = XITSMath-Bold.otf, ] \begin{document} \[ a = b \] \end{document} does this work the same as the following? \documentclass{article} \usepackage{unicode-math} \setmathfont{XITSMath-Regular.otf} \setmathfont{XITSMath-Bold.otf}[range={bfup->up,bfit->it}] \begin{document} \[ a = b \] \end{document} Further details Only the latter use-case is described in the documentation. BoldFont works with \mathversion{bold}, in cases where all of the math formula should be bold, not only the alphabetic symbols. @khaledhosny Thank you for the reply! Are you saying that the bold font specified by BoldFont is only used when \mathversion{bold} is invoked, and will not be used with commands like \symbf? Exactly (and \boldmath, which is a shortcut for \mathversion{bold}). \symbf switches to the Unicode bold math alphabets, so it works only on characters that have encoded bold versions. \documentclass{article} \usepackage{unicode-math} \setmathfont{XITSMath-Regular.otf} \setmathfont{XITSMath-Bold.otf}[version=bold] \begin{document} \[ a = b \] \[ \symbf{a = b} \] \[ \symbfit{a = b} \] \mathversion{bold} \[ a = b \] \end{document} I was going to open some new issue but will cuckoo here. (For reasons explained at the end, I may not be so eager for my issue to be solved.) The bold math version default appears to not be coherent.
\documentclass{article} \usepackage{fontspec} \setmainfont{TeX Gyre Termes} \usepackage{unicode-math} \setmathfont{XITS Math} \begin{document} \makeatletter \meaning\mv@normal \meaning\mv@bold \clearpage $a \mathrm{a} \mathbf{a}$ \mathversion{bold} $a \mathrm{a} \mathbf{a}$ \thispagestyle{empty} \showoutput \end{document} % Local variables: % TeX-engine: xetex % End: (my system has symlinks targeting the texmfdist/fonts to help xetex find fonts) We see in math version normal: \install@mathalphabet \mathrm {\select@group \mathrm \M@TU \TU/TeXGyreTermes(0)/m/n } \install@mathalphabet \mathbf {\select@group \mathbf \M@TU \TU/TeXGyreTermes(0)/b/n } versus in math version bold \install@mathalphabet \mathrm {\select@group \mathrm \M@OT1 \TU/XITSMath(1)/b/n } \install@mathalphabet \mathbf {\select@group \mathbf \M@OT1 \OT1/cmr/bx/n } This makes for very surprising behavior of \boldsymbol. Now discovering the BoldFont option in this issue I obtain in math version bold, using \setmathfont{XITSMath-Regular.otf} \setmathfont{XITSMath-Bold.otf}[version=bold] the following: \install@mathalphabet \mathrm {\select@group \mathrm \M@OT1 \TU/XITSMath-Bold.otf(1)/m/n } \install@mathalphabet \mathbf {\select@group \mathbf \M@OT1 \OT1/cmr/bx/n } and is \TU/XITSMath-Bold.otf(1)/m/n really different from \TU/XITSMath(1)/b/n ? Also I don't know if the \M@OT1 pieces are worrying. My pre-conception (maybe from having done mathastext for pdflatex ten years ago) is that any change to \mv@normal must be paired if possible with a change in \mv@bold, so this is why I feel the \mv@bold is wrong. On the other hand it gives me a trick to solve some problems with the hat accent. Try \[\hat{a}_r, \mathrm{\hat{a}_s}, \mathbf{\hat{a}_t}, \boldsymbol{\hat{a}_u}, \boldsymbol{\mathrm{\hat{a}_s}}\] (in normal math version) So here I am abusing the fact that in the bold math version the effect of \mathrm will be for input a to become MATHEMATICAL BOLD SMALL A, which is upright.
And the hat is correctly positioned. With lualatex things look better (which surprised me; I will explain why next): I was very surprised, but it turns out my real-life original had \usepackage{polyglossia} \setmainlanguage{english} which causes the following output (screenshots comparing the xelatex and lualatex results omitted).
gharchive/issue
2023-11-28T16:03:37
2025-04-01T06:44:46.214008
{ "authors": [ "Zeta611", "hvoss49", "jfbu", "khaledhosny" ], "repo": "latex3/unicode-math", "url": "https://github.com/latex3/unicode-math/issues/624", "license": "LPPL-1.3c", "license_type": "permissive", "license_source": "github-api" }
941245514
[Feature + bump] bump + update and utils which allows for Structure.extend() for custom methods. I've bumped all dependencies and also included a Structure class which allows users to extend existing features with custom methods. This needs a rebase This hasn't received any activity in a long time, so I'm closing this. If this is still something you or others are interested in, then please reopen the PR with an up-to-date version.
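For readers unfamiliar with the Structure.extend() idea, here is a rough Python sketch of the pattern: a registry of base classes that user code can subclass at runtime, so the library instantiates the extended version. All names are illustrative; this is not Lavacord's actual TypeScript API.

```python
# Sketch of an extendable-structures registry (the pattern behind
# Structure.extend(); names and semantics are assumptions for illustration).

class Structures:
    _registry = {}

    @classmethod
    def register(cls, name, base):
        cls._registry[name] = base

    @classmethod
    def extend(cls, name, extender):
        """Replace a registered structure with a user-provided subclass."""
        base = cls._registry[name]
        extended = extender(base)
        if not issubclass(extended, base):
            raise TypeError("extend() must return a subclass of the base structure")
        cls._registry[name] = extended
        return extended

    @classmethod
    def get(cls, name):
        return cls._registry[name]

class Player:
    def play(self, track):
        return f"playing {track}"

Structures.register("Player", Player)

# User code: add a custom method without touching library internals.
Structures.extend("Player", lambda Base: type("CustomPlayer", (Base,), {
    "play_loud": lambda self, track: self.play(track).upper(),
}))
```

Library code that always instantiates via `Structures.get("Player")` then picks up the user's custom methods automatically, which is the appeal of the pattern.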
gharchive/pull-request
2021-07-10T12:58:40
2025-04-01T06:44:46.276104
{ "authors": [ "ColdtQ", "PapiOphidian" ], "repo": "lavacord/Lavacord", "url": "https://github.com/lavacord/Lavacord/pull/104", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
312795032
Feat version Should have published a release first and then merged…… Coverage remained the same at 83.145% when pulling 79a1ffc0fda3b2ca2c4cae247052724fe1090f66 on feat-version into 89a2222d108ff2734caafa9866988d74c6462a5e on master.
gharchive/pull-request
2018-04-10T06:57:10
2025-04-01T06:44:46.277774
{ "authors": [ "coveralls", "easonyq" ], "repo": "lavas-project/lavas", "url": "https://github.com/lavas-project/lavas/pull/131", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
773059635
[Meshery] Implemented NATS subscriber Description NATS subscriber for MeshSync, receiving data in the format of the database schema. Notes for Reviewers What about the errors in the helper package? An issue should be created to solve that. Signed commits [x] Yes, I signed my commits. @kalradev Can you address the CI errors? @ramrodo as contributors run into initial e2e integration test failures, I wonder if you might offer some guidance (https://github.com/layer5io/meshery/pull/2130/checks?check_run_id=1599778169#step:10:318). // @vineethvanga18 @kalradev I downloaded your branch and the tests pass if you run the backend and frontend locally and separately. The issue is when you run the server via npm run dev or npm run ci-test-integration; doing that, there is an Internal Server Error. I recommend you execute the project the same way as in CI to debug and find the problem. @leecalcote Merging this for now as it does not break any functionality. Good with this decision?
gharchive/pull-request
2020-12-22T16:06:34
2025-04-01T06:44:46.315661
{ "authors": [ "kalradev", "kumarabd", "leecalcote", "ramrodo" ], "repo": "layer5io/meshery", "url": "https://github.com/layer5io/meshery/pull/2130", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
645053364
[Chore] Reconcile img and assets folders; Consolidate to img folder. Current Behavior The img/readme and the /assets folders are duplicative (for the same purpose). These two folders need to be reconciled by deduplicating their contents. Desired Behavior All of these assets are consolidated into the /img/readme folder. @itsapoorvj, would you like to tackle this one? Sure, I will do that.
gharchive/issue
2020-06-25T00:06:24
2025-04-01T06:44:46.317775
{ "authors": [ "itsapoorvj", "leecalcote" ], "repo": "layer5io/service-mesh-performance-specification", "url": "https://github.com/layer5io/service-mesh-performance-specification/issues/42", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
831647460
Update main/master README This is an issue as the project now has grown beyond a quick README. It does need a structure and some good write up. #46 is good enough for now
gharchive/issue
2021-03-15T10:06:20
2025-04-01T06:44:46.318682
{ "authors": [ "layik" ], "repo": "layik/eAtlas", "url": "https://github.com/layik/eAtlas/issues/41", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
647531819
Add transaction types [x] Add basic transaction types. [x] Add new namespace ID for tail padding (i.e. padding after the last real message). Rendered: https://github.com/lazyledger/lazyledger-specs/blob/adlerjohn-transaction_types/specs/consensus.md https://github.com/lazyledger/lazyledger-specs/blob/adlerjohn-transaction_types/specs/data_structures.md Left one minor comment. I'd like to compare the tx types to the particular messages (this word is heavily overloaded in the different contexts we use it)/ Tx in the relevant cosmos sdk modules but this does not block merging this PR. The transactions here are based on the staking messages of the Cosmos SDK: https://docs.cosmos.network/master/modules/staking/03_messages.html. Some features are removed (one-transaction re-delagation, for example, since it adds protocol complexity that can be abstracted away at the wallet level).
gharchive/pull-request
2020-06-29T17:04:15
2025-04-01T06:44:46.328596
{ "authors": [ "adlerjohn" ], "repo": "lazyledger/lazyledger-specs", "url": "https://github.com/lazyledger/lazyledger-specs/pull/42", "license": "CC0-1.0", "license_type": "permissive", "license_source": "github-api" }
328604077
Update Bug Reports narrative to include user feedback/suggestions The Issue Updating the Bug Report functionality to include a feedback/suggestions narrative will allow users to submit recommendations/feedback through the app. System Configuration LBRY Daemon version: LBRY App version: LBRY Installation ID: Operating system: Anything Else Screenshots Hey. I'd like to take a stab at this. I have a few questions. Am I adding a new function in the lbry-redux code and also making some changes to the '/report' component? If so, I cloned the lbry-redux repo and did not find the Lbry.report_bug in that code. Where is it in the code? @Bentley912 we plan to eventually replace the Lbry.report_bug call (this is actually in the lbrynet-daemon, not in redux) so it's okay to reuse the same form/call for this request at the moment. So it's mainly just wording that needs to be adjusted in a few places. Thanks for taking a shot at this! @tzarebczan if we're going to work on this I'd really like to kill lbry.report_bug as it shouldn't exist. Do we want these to go straight into the help desk? @kauffj we discussed using the helpscout api in the ticket here: https://github.com/lbryio/lbry-app/issues/1078 But regardless of how it's sent, I think these changes are separate. Ok. So, for now, I'll just change the language up a bit. When we get all that straightened out, I'll implement the HelpScout API. Cool? Sounds good, thanks and welcome @Bentley912! @tzarebczan does this need to be added to the Changelog? It is a very minor change. Yep, we should always have a change log. Check out the previous entries for proper format. Fixed in https://github.com/lbryio/lbry-app/commit/ddf1397a2816d9ccfe7a64f444465c7cee87c127
gharchive/issue
2018-06-01T17:40:40
2025-04-01T06:44:46.347128
{ "authors": [ "Bentley912", "kauffj", "tzarebczan" ], "repo": "lbryio/lbry-app", "url": "https://github.com/lbryio/lbry-app/issues/1537", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
615883418
reset media position when at the end of video If you get to the end of a video, it stores that video position on refresh and makes you click the rewind button. @jeffslofish wanna take this one? Also worse with autoplay. I'm looking into it now
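A minimal sketch of the intended fix: treat a position near the end of the media as "finished" and persist zero instead, so the next load starts from the beginning. The threshold and function names are assumptions for illustration, not the app's actual code.

```python
# Sketch: what to persist across refreshes given the current playback
# position. END_THRESHOLD_SECONDS is an assumed tolerance, not a value
# taken from the desktop app.

END_THRESHOLD_SECONDS = 5.0

def position_to_save(current: float, duration: float) -> float:
    """Return the playback position worth persisting across refreshes."""
    if duration <= 0:
        return 0.0  # unknown/invalid duration: don't trap the user mid-stream
    if duration - current <= END_THRESHOLD_SECONDS:
        return 0.0  # treat "at the end" as "watched" and reset to the start
    return max(current, 0.0)
```

This also covers the autoplay case mentioned above: a video that played through to the end stores 0, so autoplaying it again starts fresh instead of at the final frame.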
gharchive/issue
2020-05-11T13:25:40
2025-04-01T06:44:46.348566
{ "authors": [ "jeffslofish", "tzarebczan" ], "repo": "lbryio/lbry-desktop", "url": "https://github.com/lbryio/lbry-desktop/issues/4174", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
873585303
Audio player suggestions. I see that there was this commit merged, making it easier to upload audio, even though you could technically upload audio even before that. I have a few suggestions for the audio player: Slim Player. There is no video, so an entire 16:9 player frame for an audio file is unnecessary. Instead, something like a slim strip of audio with a tiny album image on the side would be a way better design. Or at least make it like this when I embed audio into text posts. Download link / button. I use LBRY Desktop to download videos and audios. But sometimes I want to give my songs for use to somebody else who uses only Odysee.com. With video it's easy: you press the right mouse button, then copy the location of the video. It works like this on this ogg audio file, but not on this mp3 audio file. Maybe bringing back the old download button from LBRY.TV would be nice. Thanks for opening the issue. You can add these suggestions to: https://github.com/lbryio/lbry-desktop/issues/2757 The download link was covered under a separate discussion and is probably not changing from its current position.
gharchive/issue
2021-05-01T07:09:00
2025-04-01T06:44:46.352806
{ "authors": [ "JYamihud", "tzarebczan" ], "repo": "lbryio/lbry-desktop", "url": "https://github.com/lbryio/lbry-desktop/issues/5987", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
409969325
Change/add success threshold for YouTube conversion Update https://github.com/lbryio/lbry.io/blob/master/view/template/acquisition/youtube_status.php so that the FB and GA success event only fires if the sync was above a certain quality threshold. @kauffj What's the quality threshold? @finer9 @robvsmith can you guys agree on what subscriber and/or video view count you want this to trigger on and comment on the ticket with the answer? @finer9 still need threshold values for this @finer9 I'm closing this, please re-open with values if you want this done.
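The requested gate is a simple predicate over channel stats. A Python sketch follows; since the thresholds were never agreed on in this thread, the values below are placeholders, and whether subscriber count and view count combine with "and" or "or" is also an open question (this sketch uses "or").

```python
# Sketch: only fire the FB/GA success events when the synced channel clears
# a quality bar. Both threshold values are hypothetical placeholders - the
# ticket explicitly never settled on real numbers.

MIN_SUBSCRIBERS = 100      # hypothetical
MIN_TOTAL_VIEWS = 1_000    # hypothetical

def should_fire_success_event(subscribers: int, total_views: int) -> bool:
    """Gate conversion-tracking events on channel quality (either metric qualifies)."""
    return subscribers >= MIN_SUBSCRIBERS or total_views >= MIN_TOTAL_VIEWS
```

In the PHP page, the analogous check would wrap the existing FB/GA event emission so low-quality syncs complete silently without being counted as conversions.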
gharchive/issue
2019-02-13T19:42:21
2025-04-01T06:44:46.355219
{ "authors": [ "NetOperatorWibby", "kauffj" ], "repo": "lbryio/lbry.com", "url": "https://github.com/lbryio/lbry.com/issues/958", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
215749425
Kittenlang http://kittenlang.org I feel useful XD As you should! Thanks for the contribution!
gharchive/pull-request
2017-03-21T14:01:31
2025-04-01T06:44:46.448029
{ "authors": [ "CubeFromSpace", "leachim6" ], "repo": "leachim6/hello-world", "url": "https://github.com/leachim6/hello-world/pull/396", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
443237782
[Feature] Versioned files Hi everyone, I'd like to version the output CSS files (example: style.3a65d1.css). Could the compiler implement this feature? Thank you all. Definitely not the job of the compiler, which returns the plain-text CSS content, not a filename. It could be in the Server class, for example; however, it's more the job of the caller to do this (be it the Server class or your own application). CSS (and JS) files shouldn't be saved with a version-id on disk; it only makes file handling complicated. One way is to call them like this: <link rel="stylesheet" href="/themes/default/css/cms-min.css?v=db261"> A better way (but much more difficult) is to create a hash and use this for versioning AND integrity: https://developer.mozilla.org/en-US/docs/Web/Security/Subresource_Integrity Out of scope for the library.
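The caller-side approach suggested in the thread — hash the compiled CSS and append a short digest as a query parameter, leaving the file name on disk unchanged — can be sketched in a few lines. Helper names are illustrative; scssphp itself is PHP, this just shows the pattern in Python.

```python
# Sketch: cache-busting outside the compiler. The digest is derived from the
# compiled CSS content, so the URL changes exactly when the stylesheet does.

import hashlib

def versioned_url(css_text: str, url: str, length: int = 6) -> str:
    """Append a short content-hash query parameter to a stylesheet URL."""
    digest = hashlib.sha256(css_text.encode("utf-8")).hexdigest()[:length]
    return f"{url}?v={digest}"

def link_tag(css_text: str, url: str) -> str:
    """Render a <link> element pointing at the versioned URL."""
    return f'<link rel="stylesheet" href="{versioned_url(css_text, url)}">'
```

Because the digest is deterministic, unchanged CSS keeps the same URL (so browser caches stay warm), while any recompile that alters the output produces a new `?v=` value.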
gharchive/issue
2019-05-13T07:08:16
2025-04-01T06:44:46.450890
{ "authors": [ "AdamPerkins", "Cerdic", "raffaelecarelle", "robocoder" ], "repo": "leafo/scssphp", "url": "https://github.com/leafo/scssphp/issues/685", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
565111876
Incompatibility with graphql-java 14.0 The constructor for Generator/TypeCache expects all objects of GraphQLType to have a getName() method, which was true in previous versions (13 and before) of graphql-java. In version 14, only the subclass GraphQLNamedType supports the getName() method. This is causing me dependency injection issues, now that another component is forcing Gradle to update graphql-java to version 14. https://github.com/leangen/graphql-spqr/blob/df15e5b702fa010b802ca35c5fddde84299c2505/src/main/java/io/leangen/graphql/generator/TypeCache.java#L19 See the change in version 14. https://github.com/graphql-java/graphql-java/blob/578985f4ed81d154f1f83bbef6903aa88f3f1a59/src/main/java/graphql/schema/GraphQLType.java Is it possible to support graphql-java version 14? With a little direction, I would be willing to get a PR together that addresses this issue. Unfortunately, graphql-java v14 has a ton of breaking changes... so quite a few places needed work. The new API requires so much explicit casting everywhere it's painful to use 😖 Anyway, I have already prepared for this ahead of the v14 release, and pushed the code just now. As a general rule, though: never ever expect you can just upgrade graphql-java and have things working with SPQR. All graphql-java releases have breaking changes, and even if everything seems to work, there are no guarantees. Just out of curiosity, what is forcing you to update to v14?
Just out of curiosity, what is forcing you to update to v14? I have updated to the latest version of spqr (0.10.1) and the issue does not seem to fixed. java.lang.NoSuchMethodError: graphql.schema.GraphQLType.getName()Ljava/lang/String; at io.leangen.graphql.generator.TypeCache.(TypeCache.java:19) at io.leangen.graphql.generator.BuildContext.(BuildContext.java:106) at io.leangen.graphql.GraphQLSchemaGenerator.generate(GraphQLSchemaGenerator.java:992) I need the 14 version of graphql-java for apollo-federation. Are there any plans on fixing this? I guess, changes went into 0.10.2-SNAPSHOT, when would be the release of 0.10.2? Is there a plan to release 0.10.2 with graphql-java v14 support?
gharchive/issue
2020-02-14T05:20:34
2025-04-01T06:44:46.469866
{ "authors": [ "EXPEvgowdaks", "biancaL", "kaqqao", "kristofarkas", "psyklopz" ], "repo": "leangen/graphql-spqr", "url": "https://github.com/leangen/graphql-spqr/issues/337", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2261128009
chore(Data/List/Perm): some nthLe -> get migration Extracted from #12350. Thanks! :tada: bors merge Thank you for the review!
gharchive/pull-request
2024-04-24T12:01:25
2025-04-01T06:44:46.528220
{ "authors": [ "grunweg", "urkud" ], "repo": "leanprover-community/mathlib4", "url": "https://github.com/leanprover-community/mathlib4/pull/12397", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2760052819
feat(MetricSpace): add Metric.biInter_gt_ball etc bors merge
gharchive/pull-request
2024-12-26T18:52:54
2025-04-01T06:44:46.529877
{ "authors": [ "PatrickMassot", "urkud" ], "repo": "leanprover-community/mathlib4", "url": "https://github.com/leanprover-community/mathlib4/pull/20262", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1437751622
feat: port Algebra.Group.Units mathlib hash: 7cca171008afb30576d2d4c51173700a780c23d0 Ported by explicitly setting to_additive tags on .match_1 lemmas for iff statements. [x] depends on: #600 I have had a look at this and made a bunch of changes, so someone else should now review. Ported by explicitly setting to_additive tags on .match_1 lemmas for iff statements. This issue should be fixed now in master. Could you verify here? bors d+ bors r+
gharchive/pull-request
2022-11-07T04:33:29
2025-04-01T06:44:46.532381
{ "authors": [ "j-loreaux", "pechersky", "semorrison" ], "repo": "leanprover-community/mathlib4", "url": "https://github.com/leanprover-community/mathlib4/pull/549", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1787340785
feat: port Archive.OxfordInvariants.2021summer.Week3P1 I had to change the name from "2021summer" to "Summer2021" because apparently lake doesn't handle file names starting with numbers well. Thanks :tada: bors merge bors d+ Sorry for the bors mess, I clicked the wrong buttons in the wrong place. Should be good to go now. bors r+
gharchive/pull-request
2023-07-04T07:33:55
2025-04-01T06:44:46.535116
{ "authors": [ "jcommelin", "mo271" ], "repo": "leanprover-community/mathlib4", "url": "https://github.com/leanprover-community/mathlib4/pull/5705", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1964789768
refactor: generalize Abs lemmas from rings to groups Four lemmas are moved from Algebra/Order/Monoid/Defs.lean to Algebra/Order/Group/Defs.lean and generalized Four lemmas are moved from Algebra/Order/Ring/Abs.lean to Algebra/Order/Group/Abs.lean and generalized Four lemmas are added in Algebra/Order/Monoid/Defs.lean. They're special cases of one_le_pow_iff, but I can't import the file without offending assert_not_exists. @YaelDillies said on Zulip: Anyway I have a full rewrite of the files in question coming up, so don't worry too much about it. Does it include what's changed in this PR? I know that lemmas here could probably be generalized to CovariantClass without using the bundled algebra+order classes, but I don't have time to do that. !bench bors merge
gharchive/pull-request
2023-10-27T05:23:53
2025-04-01T06:44:46.540362
{ "authors": [ "PatrickMassot", "alreadydone" ], "repo": "leanprover-community/mathlib4", "url": "https://github.com/leanprover-community/mathlib4/pull/7976", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1967401632
feat: add Set.eqOn_image The lemma is a duplicate (with LHS and RHS swapped).
gharchive/pull-request
2023-10-30T03:25:35
2025-04-01T06:44:46.541991
{ "authors": [ "urkud" ], "repo": "leanprover-community/mathlib4", "url": "https://github.com/leanprover-community/mathlib4/pull/8027", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2136880119
Split sym1 into smaller tactics Description: A macro-style tactic is chosen instead of the monad style so that developers can easily extend the tactics. However, this will change for a small set of tactics if they need more fine-grained manipulations. License: By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license. Comments addressed.
gharchive/pull-request
2024-02-15T15:52:18
2025-04-01T06:44:46.543407
{ "authors": [ "aqjune-aws" ], "repo": "leanprover/LNSym", "url": "https://github.com/leanprover/LNSym/pull/14", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2032412879
feat: per-function termination hints This change: moves termination_by and decreasing_by next to the function they apply to; simplifies the syntax of termination_by; applies the decreasing_by goal to all goals at once, for better interactive use. See the section in RELEASES.md for more details and migration advice. This is a hard breaking change, requiring developers to touch every termination_by in their code base. We decided to still do it as a hard-breaking change, because supporting both old and new syntax at the same time would be non-trivial, and would not save that much. Moreover, this requires changes to some metaprograms that developers might have written, and supporting both syntaxes at the same time would make their migration harder. Std and mathlib updates are prepared at https://github.com/leanprover/std4/pull/446 https://github.com/leanprover-community/mathlib4/pull/9019 Fixes #2921 and #3081. This is getting ready, and it's a good time for extra eyeballs. Two concrete questions for review: Are the changes to the syntax sensible?
Here are the definitions in concise form:

namespace Termination

def terminationBy := leading_parser
  ppDedent ppLine >> "termination_by " >>
  optional (atomic (many (ppSpace >> (ident <|> "_")) >> " => ")) >>
  termParser

def decreasingBy := leading_parser
  ppDedent ppLine >> "decreasing_by " >> Tactic.tacticSeqIndentGt

def suffix := leading_parser
  optional terminationBy >> optional decreasingBy

end Termination

used in three places:

def letRecDecl := leading_parser
-  optional Command.docComment >> optional «attributes» >> letDecl
+  optional Command.docComment >> optional «attributes» >> letDecl >> Termination.suffix

@[run_builtin_parser_attribute_hooks] def matchAltsWhereDecls := leading_parser
-  matchAlts >> optional whereDecls
+  matchAlts >> Termination.suffix >> optional whereDecls

def declValSimple := leading_parser
-  " :=" >> ppHardLineUnlessUngrouped >> termParser >> optional Term.whereDecls
+  " :=" >> ppHardLineUnlessUngrouped >> termParser >> Termination.suffix >> optional Term.whereDecls

Expected to hit .missing and .null?

When parsing this syntax I am using this code:

def elabTerminationHints {m} [Monad m] [MonadError m] (stx : TSyntax ``suffix) : m TerminationHints := do
  if let .missing := stx.raw then
    return { TerminationHints.none with ref := stx }
  if stx.raw.matchesNull 0 then
    return { TerminationHints.none with ref := stx }
  match stx with
  | `(suffix| $[$t?:terminationBy]? $[$d?:decreasingBy]? ) => do
    let termination_by? ← t?.mapM fun t => match t with
      | `(terminationBy|termination_by $vars* => $body) =>
        if vars.isEmpty
        then throwErrorAt t "no extra parameters bounds, please omit the `=>`"
        else pure {ref := t, vars, body}
      | `(terminationBy|termination_by $body:term) =>
        pure {ref := t, vars := #[], body}
      | _ => throwErrorAt t "unexpected `termination_by` syntax"
    let decreasing_by? ← d?.mapM fun d => match d with
      | `(decreasingBy|decreasing_by $tactic) => pure {ref := d, tactic}
      | _ => throwErrorAt d "unexpected `decreasing_by` syntax"
    return { ref := stx, termination_by?, decreasing_by?, extraParams := 0 }
  | _ => throwErrorAt stx s!"Unexpected Termination.suffix syntax: {stx} of kind {stx.raw.getKind}"

If I remove the check for .missing or null nodes it fails in some examples. Is that worrying and a sign that somewhere I am doing something wrong, or is that expected?

The grammar changes mostly look sensible. Do we need indentation awareness to separate e1 and e2 in the following case?

let rec f := ...
termination_by e1
e2

"If I remove the check for .missing or null nodes it fails in some examples. Is that worrying"

missing is produced by the parser iff there was a parse error, in which case we skip reporting most elaboration errors anyway, so one would think that the default branch in the match would be sufficient. And on well-formed syntax, the first pattern should subsume the matchesNull, yes. So it might be worth looking into, e.g. printing the full syntax in the matchesNull case. Do you think this syntax tree comes straight from the parser or from some transformation in between?

Re indentation awareness: Good question, I’ll write up some test cases later.

Re .missing and null: I do get test failures without both of these clauses (run 1, run 2):

343.lean:18:0-19:54: error: Unexpected Termination.suffix syntax: [] of kind null
1235.lean:1:0-1:26: error: Unexpected Termination.suffix syntax: <missing> of kind missing

I’ll dig deeper and try to understand what’s going on.

"missing is produced by the parser iff there was a parse error, in which case we skip reporting most elaboration errors anyway, so one would think that the default branch in the match would be sufficient."

Found one of them.
The code

opaque f (a b : Nat) : Nat

is transformed by this code:
https://github.com/leanprover/lean4/blob/d1a15dea03f935457c6b3cbe99af029652a2584f/src/Lean/Elab/DefView.lean#L120-L131

This manually constructed syntax of course has .missing when you try to get the termination suffix (stx[2] later) or the where-clauses. Manually constructing the expected syntax, i.e. (Command.declValSimple ":=" … (Termination.suffix [] []) []), is probably tedious.

How important is the mkAtomFrom around :=? If it is not, this would probably work more reliably:

--- a/src/Lean/Elab/DefView.lean
+++ b/src/Lean/Elab/DefView.lean
@@ -124,7 +124,7 @@ def mkDefViewOfOpaque (modifiers : Modifiers) (stx : Syntax) : CommandElabM DefV
     | some val => pure val
     | none =>
       let val ← if modifiers.isUnsafe then `(default_or_ofNonempty% unsafe) else `(default_or_ofNonempty%)
-      pure <| mkNode ``Parser.Command.declValSimple #[ mkAtomFrom stx ":=", val ]
+      `(Parser.Command.declValSimple| := $val)
   return {
     ref := stx, kind := DefKind.opaque, modifiers := modifiers,
     declId := stx[1], binders := binders, type? := some type, value := val

Or is there a strong reason to avoid syntax quotations in core code (bootstrapping issues, maybe)?
With 37ab645, most tests work again, but four still fail:

Test project /home/jojo/build/lean/lean4/build/release/stage1
    Start  270: leantest_doLetLoop.lean
    Start  319: leantest_have.lean
    Start 1718: leaninteractivetest_completion5.lean
    Start 1719: leaninteractivetest_completion6.lean
1/4 Test  #270: leantest_doLetLoop.lean ................***Failed    0.06 sec
--- doLetLoop.lean.expected.out	2023-12-02 12:17:16.269185699 +0100
+++ doLetLoop.lean.produced.out	2023-12-22 14:41:17.452879698 +0100
@@ -1,2 +1,2 @@
 doLetLoop.lean:4:0: error: unexpected end of input
-doLetLoop.lean:2:4-2:5: warning: declaration uses 'sorry'
+doLetLoop.lean:2:0-3:8: error: Unexpected Termination.suffix syntax: <missing> of kind missing
ERROR: file doLetLoop.lean.produced.out does not match doLetLoop.lean.expected.out
2/4 Test  #319: leantest_have.lean .....................***Failed    0.06 sec
--- have.lean.expected.out	2023-12-02 12:17:15.938183664 +0100
+++ have.lean.produced.out	2023-12-22 14:41:17.455879690 +0100
@@ -1,7 +1,4 @@
 have.lean:2:19-4:7: error: unexpected token 'example'; expected term
-have.lean:2:18-2:19: error: don't know how to synthesize placeholder
-context:
-⊢ False
 have.lean:7:2-7:3: error: type mismatch
   f
 has type
ERROR: file have.lean.produced.out does not match have.lean.expected.out
3/4 Test #1718: leaninteractivetest_completion5.lean ...***Failed    1.91 sec
--- completion5.lean.expected.out	2023-12-12 11:26:11.359674934 +0100
+++ completion5.lean.produced.out	2023-12-22 14:41:19.301875008 +0100
@@ -1,7 +1,3 @@
 {"textDocument": {"uri": "file:///completion5.lean"}, "position": {"line": 9, "character": 15}}
-{"items":
- [{"label": "b1", "kind": 5, "detail": "C → String"},
-  {"label": "f1", "kind": 5, "detail": "C → Nat"},
-  {"label": "f2", "kind": 5, "detail": "C → Bool"}],
- "isIncomplete": true}
+{"items": [], "isIncomplete": true}
ERROR: file completion5.lean.produced.out does not match completion5.lean.expected.out
4/4 Test #1719: leaninteractivetest_completion6.lean ...***Failed    1.94 sec
--- completion6.lean.expected.out	2023-12-12 11:26:11.359674934 +0100
+++ completion6.lean.produced.out	2023-12-22 14:41:19.331874935 +0100
@@ -1,12 +1,6 @@
 {"textDocument": {"uri": "file:///completion6.lean"}, "position": {"line": 12, "character": 15}}
-{"items":
- [{"label": "b1", "kind": 5, "detail": "C → String"},
-  {"label": "f1", "kind": 5, "detail": "C → Nat"},
-  {"label": "f2", "kind": 5, "detail": "C → Bool"},
-  {"label": "f3", "kind": 5, "detail": "D → Bool"},
-  {"label": "toC", "kind": 5, "detail": "D → C"}],
- "isIncomplete": true}
+{"items": [], "isIncomplete": true}
 {"textDocument": {"uri": "file:///completion6.lean"}, "position": {"line": 21, "character": 4}}
 {"items":
ERROR: file completion6.lean.produced.out does not match completion6.lean.expected.out

These all relate to partially parsed terms; for example, doLetLoop.lean is

set_option showPartialSyntaxErrors true
def f : IO Unit := do
  if let

and now elabTerminationHints complains. Probably too loudly, given that this is a partial syntax tree anyway? How should I proceed here? Make elabTerminationHints more liberal again and accept .missing? Or is there a more principled way?

For the cases where I hit

error: Unexpected Termination.suffix syntax: [] of kind null

I have the following theory: Affected are uses of the unif_hint command, which is defined in Init.NotationExtras:
https://github.com/leanprover/lean4/blob/d1a15dea03f935457c6b3cbe99af029652a2584f/src/Init/NotationExtra.lean#L70-L77

When compiling this code, we are still using stage0 with the old syntax (despite parseQuotWithCurrentStage := true), and the old syntax has the optional whereDecls in that slot, and that's the [] we see. The other failures are related to lakefile parsing, so probably similar.

So for this one the way forward would probably be to keep elabTerminationHints liberal at first, merge the PR, do a stage0 update, and then make it stricter?
"When compiling this code, we are still using stage0 with the old syntax (despite parseQuotWithCurrentStage := true)"

Right, the flag only affects code where the changed parser was imported. Your suggested approach makes sense.

"How should I proceed here? Make elabTerminationHints more liberal again and accept .missing? Or is there a more principled way?"

The test output suggests that the exception is preventing us from ever elaborating the body; in that case we need to be liberal and just accept .missing here, yes. The parse error makes sure that in the end we will consider this command as failed in any case.

"The grammar changes mostly look sensible. Do we need indentation awareness to separate e1 and e2 in the following case?

let rec f := ...
termination_by e1
e2"

It seems to work: the termination_by only picks up more expressions if they are indented with regard to the let rec; same as the body of f, it seems.

Here are some questions that arise for me when reading RELEASES.md.

"The termination_by clause must bind exactly those parameters that are not already bound by the function header. If there are none, the => can be omitted"

Just to double-check - this means that

def foo (x : Nat) : Nat -> Nat := ...
termination_by x - y

is not allowed, right? Even if the RHS of the definition is constructing a function by well-founded recursion on x?

All the examples omit => when they may do so. Why is it not mandatory to do so when no names are bound? Lean also doesn't accept fun => 5 as a natural number, after all.

The examples in the migration guide don't seem to include any let rec cases - this will be useful to have. Examples will also help seed the community with a default code style WRT things like indentation, as we haven't had termination_by in the middle of definitions before.

Finally, what are the indentation rules for the new termination_by and decreasing_by?
There are some hints in this conversation here, but users shouldn't have to read our PRs to discover this kind of thing. A written specification for the resulting syntax is important - even if it doesn't all belong in the release notes, it should be available somewhere in the doc directory.

"If there are none, the => can be omitted"

That's just wrong, should say "must be". Thanks for catching! Will add more examples as well.

I have a PR that mentions these termination hints in the docs in https://github.com/leanprover/lean4/pull/3016, not yet updated to the new rules. I'll update that and merge it after I merge this.

I improved the release notes, thanks for the suggestions. It didn't feel quite right to talk much about indentation there: in the final version, there isn't much to worry about regarding indentation, because we now place termination_by before the where, and the ambiguities (and indentation-based resolutions) that were discussed in this PR are obsolete.

My indentation worries are primarily about when the termination_by or decreasing_by occur in an indented context. My understanding is that there are no rules governing their indentation, but we hope that they'll typically be indented at the same level as either the name in the where block that they apply to, or the level of the let keyword that defines the function in question. Is that right?

If so, I think it's worth mentioning a sentence like this, because users will be faced with the task of migrating their code and needing to put the keywords somewhere.

Thanks for the example of the let rec, that really helps!

They can indent it as they like; the migration guide isn't really the place for giving style advice (besides implicitly in examples, if at all), is it?

Maybe that stance is heavily influenced by how I approach indentation, namely intuitively. I can't explain the rules around indentation in Haskell (let alone Lean), but I hardly ever have trouble with writing correctly indented code. And personally I also don't care if people write

foo where
  bar (x : Nat) := …
termination_by x

or

foo where
  bar (x : Nat) := …
  termination_by x

or

foo where
  bar (x : Nat) := …
    termination_by x

or whatever.

I think that many users may assume that e.g. the RHS of termination_by should be at least as indented as the token, and if this isn't required, it's a good idea to say something like "there are no additional indentation rules" and then show a reasonable default style in the examples. But I don't think this very strongly, the lack of rules may be reason enough to omit the statement :)

Rebased this PR (against all my strongly held convictions) because it needs a stage0 update in the middle. Needs to be rebase-merged.

This seems to be good to go, but I'll wait for mathlib CI to work again, to make sure that the std and mathlib PRs are up-to-date, and check back with Scott about how to merge this with the least release complications.
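To close the loop for readers, the new placement discussed in this thread can be sketched with a small Lean example. This is a hypothetical function written from the description above, not code from the PR; the authoritative surface syntax is in RELEASES.md:

```lean
-- Hypothetical sketch: with the new syntax, the termination hint follows
-- the definition it applies to (before any `where` block), binding exactly
-- the parameters (here `m` and `n`) not already bound by the header.
def ack : Nat → Nat → Nat
  | 0, n => n + 1
  | m + 1, 0 => ack m 1
  | m + 1, n + 1 => ack m (ack (m + 1) n)
termination_by m n => (m, n)
```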
gharchive/pull-request
2023-12-08T10:41:45
2025-04-01T06:44:46.572817
{ "authors": [ "Kha", "david-christiansen", "nomeata" ], "repo": "leanprover/lean4", "url": "https://github.com/leanprover/lean4/pull/3040", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2290962711
fix: hovers on binders with metavariables

This fixes #4078.

Mostly as a learning experience, I wanted to see if I could fix a bug in this part of the code base. I wonder if this is really the right fix, because I don’t see much withSaveInfoContext in this module, so happy to be educated! @kmill maybe?
gharchive/pull-request
2024-05-11T15:35:53
2025-04-01T06:44:46.575336
{ "authors": [ "nomeata" ], "repo": "leanprover/lean4", "url": "https://github.com/leanprover/lean4/pull/4137", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2557046885
chore: deprecate := variants of inductive and structure

Deprecates inductive ... :=, structure ... :=, and class ... := in favor of the ... where variant. Currently this syntax produces a warning, controlled by the linter.deprecated option.

Breaking change: modifies Lean.Linter.logLintIf to use Lean.Linter.getLinterValue to determine if a linter value is set. This means that the linter.all option is now taken into account when the linter option is not set.

Part of #5236

@adomani @grunweg I noticed that logLintIf didn't take into account linter.all, and I modified the logic here. It seems to me that it was meant to respond to linter.all, but please let me know if it's not supposed to.

Kyle, thanks for the heads up! I think that I never used logLintIf (at least not directly), but I'll keep an eye out for linters misbehaving! :smile:
gharchive/pull-request
2024-09-30T15:44:00
2025-04-01T06:44:46.578308
{ "authors": [ "adomani", "kmill" ], "repo": "leanprover/lean4", "url": "https://github.com/leanprover/lean4/pull/5542", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1955920239
feat: de-mathlib Nat.binaryRec

Some of the definitions have been modified to be faster.

[ ] depends on: #464
[ ] depends on: #465

awaiting-review

What's the status of this PR with respect to #366? I'm not sure where these diffs are. It might just be some mess from the merge.

"What's the status of this PR with respect to #366?"

In that PR we got:

/-- An induction principal that works on divison by two. -/
noncomputable def div2Induction {motive : Nat → Sort u} (n : Nat)
    (ind : ∀ (n : Nat), (n > 0 → motive (n/2)) → motive n) : motive n := by
  induction n using Nat.strongInductionOn with
  | ind n hyp =>
    apply ind
    intro n_pos
    if n_eq : n = 0 then
      simp [n_eq] at n_pos
    else
      apply hyp
      exact Nat.div_lt_self n_pos (Nat.le_refl _)

but these don't seem directly related. I replaced it with binaryRecs and did some golf.

Strange. Please ignore my comments then.

Well, what should I do now? Std.Data.Nat.Bitwise is not there now.

It seems that Std.Data.Nat.Bitwise is not there now. What should I do now? Should I create a PR for core?

Yes.
gharchive/pull-request
2023-10-22T14:19:11
2025-04-01T06:44:46.582750
{ "authors": [ "eric-wieser", "fgdorais", "negiizhao", "semorrison" ], "repo": "leanprover/std4", "url": "https://github.com/leanprover/std4/pull/314", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
306242869
Permit Unauthenticated Access to Actuator Health Check Endpoint

Currently, the Spring Boot Actuator health check endpoint requires authentication to return a status. This endpoint should be accessible without authentication. The native functionality of this endpoint will only return sensitive information if the request is authenticated. If accessed in an unauthenticated context, the endpoint simply returns an up or down status indicator.

Code complete.
gharchive/issue
2018-03-18T14:02:08
2025-04-01T06:44:46.584209
{ "authors": [ "mwarman" ], "repo": "leanstacks/skeleton-ws-spring-boot", "url": "https://github.com/leanstacks/skeleton-ws-spring-boot/issues/41", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
1964849174
Update README.md - Fix one grammar mistake.

I have noticed one grammar mistake in the README.md file, inside the section "Running locally": "your GitHub account and guide your through.." should be "your GitHub account and guide you through..". Below I have attached a screenshot of it. Please review the PR.

Please review my PR.
gharchive/pull-request
2023-10-27T06:24:51
2025-04-01T06:44:46.586862
{ "authors": [ "ayushrakesh" ], "repo": "leap-ai/headshots-starter", "url": "https://github.com/leap-ai/headshots-starter/pull/78", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1650872389
📝 Add Word Types Chapter

To build locally, please change chapter on line 2 of 03_word_types.mdx to 2. Blocked by #1 for styling. Please see the commit descriptions.

Henceforth I'll be merging without approvals. It's easier to read the finished website and then make a PR anyway.
gharchive/pull-request
2023-04-02T09:15:29
2025-04-01T06:44:46.610115
{ "authors": [ "Benjamin-Piper" ], "repo": "learn-eberban/learn-eberban.github.io", "url": "https://github.com/learn-eberban/learn-eberban.github.io/pull/3", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1327670235
Massive topic (3519 content items)

There's a topic in the Hsoub Academy that contains 3519 content items: https://studio.learningequality.org/en/channels/b431ba9f16a3588b89700f3eb8281af0/#/1ba6a6151c9b429892dfe9460b3f6c77 (don't bother clicking that, it just hangs as Studio tries to load the list of content items from the backend)

The source_id for the offending topic is https://academy.hsoub.com/questions/c3-programming/. The source_id of its parent is 'أسئلة وأجوبة' ("Questions and Answers").

It looks as if it's a list of programming Q&A entries, each one with a separate HTML5 app in this topic. I don't see tags or other logical ways to divide up the list, but maybe someone should think about how it would be used pedagogically, within Kolibri, first -- without fulltext search, I'm not sure how useful a bunch of uncategorized Q&A nodes would be.

I also note that in addition to an infinite spinner on Studio, this also crashes the browser for 0.13 (which is the version still used on the Arabic demo site): http://kolibridemo-ar.learningequality.org/en/learn/#/topics/t/1a3906dedc6d5df1bdd31922df4326ed (but I've been told it should maybe be fine with newer versions of Kolibri with infinite scroll pagination)
gharchive/issue
2022-08-03T19:10:56
2025-04-01T06:44:46.652598
{ "authors": [ "jamalex" ], "repo": "learningequality/sushi-chef-hsoub-academy", "url": "https://github.com/learningequality/sushi-chef-hsoub-academy/issues/2", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2246327128
Cannot connect Ledger in Testnet

There is a bug where if you:
- log in to a wallet without Ledger
- change the network to testnet
- log out

then you cannot log in with Ledger, as it has persisted the network as testnet. You will see this error:

Then if you try and sign in with a key, you can see it's on Testnet:

We used to let users change the network from the header, but we deprecated this. I think a good fix would be to reset the wallet to mainnet on sign out.

We should reset the network to mainnet when signing out.
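The proposed fix could be sketched roughly like this (hypothetical names; the extension's real state store and sign-out path will differ):

```typescript
// Hypothetical wallet state; the real extension persists more fields.
interface WalletState {
  network: 'mainnet' | 'testnet';
  signedIn: boolean;
}

// On sign-out, drop the session AND reset the persisted network to
// mainnet, so a later Ledger sign-in never starts on testnet.
function signOut(state: WalletState): WalletState {
  return { ...state, signedIn: false, network: 'mainnet' };
}
```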
gharchive/issue
2024-04-16T15:22:11
2025-04-01T06:44:46.658985
{ "authors": [ "kyranjamie", "pete-watters" ], "repo": "leather-wallet/extension", "url": "https://github.com/leather-wallet/extension/issues/5239", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1939409614
implementation("com.github.leavesCZY:Matisse:1.0.4") cannot be downloaded

I have already added maven { url 'https://jitpack.io' } and mavenCentral(). Other GitHub dependencies can be downloaded just fine.

If you are sure the way you reference it is correct, you can try adjusting the order of the maven repository addresses and see whether it downloads.

I tried; it still won't update. Alternatively, where can I download the aar or jar so I can reference it directly? Do you have one?

I don't. I suggest you first check on this page whether the various configuration items are correct: https://jitpack.io/#leavesCZY/Matisse

I see your build uses gradle.kt, while mine still uses the old .gradle build. Do I need to add any special configuration? I just got home and had my roommate try it; it can't be downloaded either.

The way the source code is built does not affect the way you reference it. Gradle versions differ quite a bit, and the way maven addresses are declared differs slightly between them. Take a screenshot of where you declare jitpack and where you reference the dependency, and I'll check whether there's a problem.

When the dependency fetch fails, does the error message point to Aliyun? I somewhat suspect that when fetching matisse, your project only pulls from the Aliyun repository and never tries the jitpack source.

I adjusted the order and even put maven { url 'https://jitpack.io' } first, but it still doesn't work. On your side, does a normal non-kt gradle build manage to update it?

I'm sure it works on my end. You can switch to the following approach and try again; if it still doesn't work, it can only be a network problem on your side.

Damn, it turned out I was just missing google(); with it added, the update works.

Hahaha, it seems Aliyun's Google mirror has some issue, which is why you could never pull the related transitive dependencies.
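The resolution in this thread (adding google()) suggests a repository block like the following. This is a Groovy DSL sketch; the exact file (settings.gradle vs. root build.gradle) depends on the Gradle version in use:

```groovy
// google() is needed because some of Matisse's transitive dependencies
// live in Google's Maven repository, which the Aliyun mirror was
// apparently failing to serve.
repositories {
    google()
    mavenCentral()
    maven { url 'https://jitpack.io' }
}

// module-level build.gradle
dependencies {
    implementation("com.github.leavesCZY:Matisse:1.0.4")
}
```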
gharchive/issue
2023-10-12T07:47:27
2025-04-01T06:44:46.666049
{ "authors": [ "iLucasLiu", "leavesCZY" ], "repo": "leavesCZY/Matisse", "url": "https://github.com/leavesCZY/Matisse/issues/37", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
35217572
Fix more python3 incompatibilities

Signed-off-by: Justin Lecher jlec@gentoo.org

Please accept this merge so it can work in Python 3 correctly. Thanks :)

Merged, thank you @jlec and @hjalves :)
gharchive/pull-request
2014-06-07T20:27:13
2025-04-01T06:44:46.667573
{ "authors": [ "hjalves", "jlec", "lebinh" ], "repo": "lebinh/ngxtop", "url": "https://github.com/lebinh/ngxtop/pull/41", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
1684255372
Build fails with Thrust 2.1: pinned_allocator.h removed

pinned_allocator.h was removed as part of pull request https://github.com/NVIDIA/thrust/pull/1611, and a commit referenced from there mentions "Remove thrust::system::cuda::experimental::pinned_allocator.h, which has been deprecated for a long time."

I have no idea what it has been deprecated in favour of. An old issue suggests universal_host_pinned_allocator but this doesn't seem to actually exist anywhere. What should it be replaced with?

Apparently this is the replacement:

#include <thrust/system/cuda/memory.h>

using pinned_allocator = thrust::mr::stateless_resource_allocator<
  T, thrust::system::cuda::universal_host_pinned_memory_resource>;

At your suggestion, I've made the following change:

diff --git a/src/fft.cu b/src/fft.cu
index eeace96..26cd458 100644
--- a/src/fft.cu
+++ b/src/fft.cu
@@ -44,7 +44,7 @@
 #include "ArrayIndexer.cuh"
 #include <thrust/device_vector.h>
 #include <thrust/host_vector.h>
-#include <thrust/system/cuda/experimental/pinned_allocator.h>
+#include <thrust/system/cuda/memory.h>
 #include <cufft.h>
 #include <cufftXt.h>
@@ -63,9 +63,9 @@ class BFfft_impl {
   bool _using_load_callback;
   thrust::device_vector<char> _dv_tmp_storage;
   thrust::device_vector<CallbackData> _dv_callback_data;
-  typedef thrust::cuda::experimental::pinned_allocator<CallbackData> pinned_allocator_type;
+  using pinned_allocator_type = thrust::mr::stateless_resource_allocator<CallbackData, thrust::universal_host_pinned_memory_resource>;
   thrust::host_vector<CallbackData, pinned_allocator_type> _hv_callback_data;

And that builds. However, all FFT-related tests currently fail, specifically those using fftshift, which seems to be exactly where this host_vector is used (?). I can't be certain that's the cause, since this is my first time trying to build bifrost, but seems likely.

A little more investigation, and it turns out the entire callback that performs that fftshift isn't running.
What's incredible is that if I add an empty print statement to post_fftshift the callback works and is called: diff --git a/src/fft_kernels.cu b/src/fft_kernels.cu index 9aefa89..7ec352c 100644 --- a/src/fft_kernels.cu +++ b/src/fft_kernels.cu @@ -28,6 +28,7 @@ #include "fft_kernels.h" #include "cuda.hpp" +#include "stdio.h" __device__ inline size_t pre_fftshift(size_t offset, @@ -56,6 +57,8 @@ inline Complex post_fftshift(size_t offset, // For forward transforms with apply_fftshift=true, we cyclically shift // the output data by phase-rotating the input data here. if( cb->do_fftshift && !cb->inverse ) { + if (offset == 0) printf(""); + for( int d=0; d<cb->ndim; ++d ) { // Compute the index of this element along dimension d // **TODO: 64-bit indexing support What's more incredible, is that if I add this print statement to the parent function only, in this case callback_load_cf32, it doesn't work and nothing is printed. Only if the print is added to post_fftshift do both print statements print anything at all. I have no idea what's going on here. I've updated the self-hosted runner to Ubuntu 20.04 and CUDA 12.0 and I'm now seeing this in the CI. I'm also getting a 'cuda/stream.hpp(85): error: namespace "cuda::std" has no member "runtime_error"' error there as well. Working through those locally, I get Bifrost to build, and I am seeing that all of the test_fft tests are failing with a lot of zero filled results. I played around with this a little bit and ended up with fewer errors if I changed the declaration of CallbackData in fft_kernels.h to be a struct __attribute__((packed)) CallbackData. I'm not sure why this would matter but I now only get errors on the complex-to-real transform tests. I think my complex-to-real errors are from an older version of the test suite (I've been testing on "ibverb-support"). As of https://github.com/ledatelescope/bifrost/commit/abee49a98094143d90cd146427822ac893ee3d2f CI looks to be ok. 
"I'm also getting a 'cuda/stream.hpp(85): error: namespace "cuda::std" has no member "runtime_error"' error there as well"

Yes, I got that too and had to make it an absolute import.

"I played around with this a little bit and ended up with fewer errors if I changed the declaration of CallbackData in fft_kernels.h to be a struct __attribute__((packed)) CallbackData. I'm not sure why this would matter but I now only get errors on the complex-to-real transform tests."

I can confirm this works for me too; however, the compiler complains:

fft_kernels.h:109:13: warning: ignoring packed attribute because of unpacked non-POD field ‘int_fastdiv CallbackData::istrides [3]’
  109 |     int_fastdiv istrides[3]; // Note: Elements, not bytes

...so I'm not sure why it works, especially since the compiler is telling me it's being ignored (!).
gharchive/issue
2023-04-26T04:38:03
2025-04-01T06:44:46.678484
{ "authors": [ "benbarsdell", "jaycedowell", "torrance" ], "repo": "ledatelescope/bifrost", "url": "https://github.com/ledatelescope/bifrost/issues/202", "license": "bsd-3-clause", "license_type": "permissive", "license_source": "bigquery" }
2083056568
Test Error

@leekwoon Hello. I have recently become interested in reinforcement learning and autonomous driving, and while searching around I noticed you are Korean; reading the paper sparked my interest, so I have been trying things out following your instructions. During the Test step, however, an error occurred.

python -m test --scenario atc --low_level_agent_params /tmp/hrl_nav_logs/NavGymRewardRandomize-v0/hrl/2024_01_12_23_31_02/seed_0/low_level/itr_120.pkl --high_level_agent_params /tmp/hrl_nav_logs/NavGymHierarchical-v0/hrl/2024_01_15_13_51_27/seed_0/high_level/itr_80.pkl --spec atc

1. With --scenario atc, this error occurs.
2. If I change --scenario atc to --scenario corridor map, as you can see, the test runs but the map is not generated properly.
3. Apart from that, with --scenario None the map changes and the test works fine, but --scenario building has the same problem as in 1.

While trying to resolve this, I found something: looking here, the problem seems to occur in the library that loads the map. In this situation I don't know how to modify it or how to resolve the overall problem. Sorry for the trouble, but I would appreciate an answer.

Happy New Year, and have a great day!

It's a segmentation fault, so it's hard to pinpoint the exact problem. Could you try testing the codebase at the link below?

Link: https://drive.google.com/file/d/1rHwJ404TlBZLyXeUw3Sn_NaWmH8Ez5KD/view?usp=sharing

I was able to get the corresponding results by running the commands below:

python -m examples.test --scenario atc
# or
python -m examples.test --scenario corridor

closing this issue due to inactivity.
gharchive/issue
2024-01-16T05:39:54
2025-04-01T06:44:46.747645
{ "authors": [ "gang-hyun", "leekwoon" ], "repo": "leekwoon/hrl-nav", "url": "https://github.com/leekwoon/hrl-nav/issues/4", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2338394744
Multi monitor support

Hi, I experience issues clicking on elements which are not on the primary screen. I do element.click(), the cursor starts to move but stops at the edge of the primary monitor.

I'm going to build a small reproducer and debug the issue further soon; I just wanted to let you know that there might be an issue and hear if you are aware of anything.

Paul

This crate has not been tested in a multi-screen scenario yet, and I will also try to verify this issue.

fixed on v0.11.3

Tested it and it works now, thanks for the fix :)
gharchive/issue
2024-06-06T14:22:38
2025-04-01T06:44:46.764727
{ "authors": [ "elsnerpaul", "leexgone" ], "repo": "leexgone/uiautomation-rs", "url": "https://github.com/leexgone/uiautomation-rs/issues/62", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
798946226
🛑 Miss Evangeline is down

In c556f8f, Miss Evangeline (https://www.miss-evangeline.de) was down:
HTTP code: 0
Response time: 0 ms

Resolved: Miss Evangeline is back up in 6bcf99d.
gharchive/issue
2021-02-02T05:35:23
2025-04-01T06:44:46.797116
{ "authors": [ "lefuex" ], "repo": "lefuex/upptime", "url": "https://github.com/lefuex/upptime/issues/9", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2592731769
feat: new event to send emails to winners when the ranking finishes and fix in the organization of events

Changes
- new event to send emails to winners when the ranking finishes
- fix in the organization of events

Docs
- docs are in the respective files

https://legendaryum.atlassian.net/jira/software/projects/LE/boards/1?assignee=712020%3A71750ee7-8648-475c-99c7-c39c60a8e3fc&selectedIssue=LE-2582
gharchive/pull-request
2024-10-16T18:38:57
2025-04-01T06:44:46.812489
{ "authors": [ "PatricioPoncini" ], "repo": "legendaryum-metaverse/rust-library", "url": "https://github.com/legendaryum-metaverse/rust-library/pull/1", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
427117679
Fixes for compiler warnings.

Fixed some warnings that would be triggered in downstream packages including header files from this repo when building with the '-Wall -Wextra -Wpedantic' compiler flags.

@goodfaiter Can you merge? I lack the privileges...
gharchive/pull-request
2019-03-29T18:14:32
2025-04-01T06:44:46.824985
{ "authors": [ "yvaind" ], "repo": "leggedrobotics/free_gait", "url": "https://github.com/leggedrobotics/free_gait/pull/71", "license": "BSD-3-Clause", "license_type": "permissive", "license_source": "github-api" }
1621962735
Should lis-paginated-search-mixin.ts be used without a form?

Should (can?) https://github.com/legumeinfo/web-components/blob/main/src/mixins/lis-paginated-search-mixin.ts be used without a <form> tag? If so, should we change the protected function renderForm() to something like renderContent()?

Can you give more details about your use case? Depending on what you're trying to achieve, it might make sense to not use the mixin or to make a new mixin.

Andrew's linkout microservice is wrapped in a js function that retrieves data and returns it formatted in a very similar way to the existing gene search. There are two attributes: one is a name that is converted into a hyperlink using the url attribute from the microservice, and the other is a description returned directly from the microservice. This is fed to an instance of <lis-simple-table-element> which is slotted in an <lis-modal-element>.

The use case still isn't too clear to me. If you want to get rid of the form that initiates a search, then how is a search going to be performed from the web component's perspective? Also, is Andrew's linkout service paginated? And how are you imagining the linkout web component will be used? I know you intend to put it in a modal; will a new modal and linkout component be created every time a link that activates the modal is clicked, or will you just hard-code them in the page and reuse them?

In general, I don't think using the paginated search mixin without an actual form element is going to be easy. The mixin is tightly coupled with the form, so bypassing it altogether would require a pretty hacky solution. Probably the easiest way to get this to work would be to actually use a form but make it hidden and submit it programmatically. This wouldn't require any modifications to the mixin or hacks in the implementing class. However, implementing and managing a hidden form is still a level of misdirection.
So unless the linkout use case is basically identical to the paginated search except without the form, I think it's probably just worth implementing the functionality via a new component. Reimplementing features from the paginated search is a good idea, though. Having all the elements automatically update when a search is performed and completed is definitely worth encapsulating in a component. I think that really the mixin shouldn't be used without a form after fully digesting your response and learning a bit more about this. Closing
gharchive/issue
2023-03-13T17:45:33
2025-04-01T06:44:46.840676
{ "authors": [ "alancleary", "ctcncgr" ], "repo": "legumeinfo/web-components", "url": "https://github.com/legumeinfo/web-components/issues/46", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1204000501
🛑 SDP-dev is down In 06cb44e, SDP-dev (https://dev-spd-cluster.splashshield.ai/login) was down: HTTP code: 502 Response time: 3129 ms Resolved: SDP-dev is back up in 0932321.
gharchive/issue
2022-04-14T03:37:40
2025-04-01T06:44:46.845767
{ "authors": [ "lei-splashtop" ], "repo": "lei-splashtop/sep-uptime", "url": "https://github.com/lei-splashtop/sep-uptime/issues/4", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
119790328
possible bug in date calculations? I have a $startDate variable which I enter in a dialog as Jan 11 2016 and then I have due dates for items set as: due: $dateOfStart+1w+3d 5pm and it makes them due on Friday, not Thursday. Since Jan 11 2016 is a Monday I would expect that +1w+3d to be Thursday. So it's off by one day or am I doing something wrong and I should be using +1w+2d 5PM to get Thursday? Yeah, the script doesn't support adding dates and setting a time in the same date. The reason it goes to Friday is that the addition above resolves to +1w 2d 17h (5PM = 17 hours from midnight). This obviously isn't a great way to handle it, but unfortunately, I don't have much time these days to work on the date math. Sorry for the inconvenience! No problem. I can easily work around it by doing one less day. If you're not going to fix it, I'll just do that but I didn't want to make all those changes if you were going to fix it soon. My AppleScript knowledge is minimal so I won't attempt to fix it either. I'll let you choose whether to leave the issue open or closed.
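As a quick sanity check of the arithmetic the reporter expects (plain Python, not the plugin's AppleScript):

```python
from datetime import datetime, timedelta

start = datetime(2016, 1, 11)               # Jan 11 2016 is a Monday
due = start + timedelta(weeks=1, days=3)    # the intended "+1w+3d"
print(due.strftime("%A %Y-%m-%d"))          # -> Thursday 2016-01-21
```

So the expected due date is indeed Thursday, Jan 21; the off-by-one comes purely from how the script folds the "5pm" into the day offsets.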
gharchive/issue
2015-12-01T19:34:05
2025-04-01T06:44:46.905719
{ "authors": [ "dave256", "lemonmade" ], "repo": "lemonmade/templates", "url": "https://github.com/lemonmade/templates/issues/23", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2511640521
🛑 k8slens.dev is down In 50bb529, k8slens.dev (https://k8slens.dev) was down: HTTP code: 500 Response time: 482 ms Resolved: k8slens.dev is back up in 4b1e44b after 11 minutes.
gharchive/issue
2024-09-07T13:10:37
2025-04-01T06:44:46.952901
{ "authors": [ "lens-cloud" ], "repo": "lensapp/k8slens-status", "url": "https://github.com/lensapp/k8slens-status/issues/306", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1561997587
Allow extension to specify storeName If an extension's package.json specifies a storeName property set to a truthy value, use it for persisting data (e.g. extension stores and extension data) instead of the extension name. This allows an extension to be renamed without losing data. Note that an extension can also set the storeName in an injectable. It is the opinion of the core team that using a field within the package.json would be preferable.
gharchive/issue
2023-01-30T08:52:35
2025-04-01T06:44:46.954435
{ "authors": [ "Nokel81", "panuhorsmalahti" ], "repo": "lensapp/lens", "url": "https://github.com/lensapp/lens/issues/7052", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
889789149
[Question] Running fvm flutter --version reports unknown channel I have set up FVM using the instructions in the documentation. The contents of the fvm_config.json file are: { "flutterSdkVersion": "2.0.6", "flavors": {} } Running fvm flutter --version gives the following output: ➜ fvm flutter --version Flutter 2.0.6 • channel unknown • unknown source Framework • revision 1d9032c7e1 (12 days ago) • 2021-04-29 17:37:58 -0700 Engine • revision 05e680e202 Tools • Dart 2.12.3 Notice that the channel/source are reported as unknown. If I install version 2.0.6 without using fvm, the output gives the channel and source correctly. @abhaysood this question comes up every so often; I believe there has been a change in behavior, but it has no impact. But if you don't mind me asking, how are you installing the standalone version 2.0.6? I simply ran fvm install 2.0.6: ~ fvm install 2.0.6 fvm global Flutter 2.0.6 • channel unknown • unknown source Framework • revision 1d9032c7e1 (2 weeks ago) • 2021-04-29 17:37:58 -0700 Engine • revision 05e680e202 Tools • Dart 2.12.3 @abhaysood you mentioned that it's different if you install 2.0.6 without fvm, how did you do that? flutter upgrade 2.0.6 Flutter is already up to date on channel stable Flutter 2.0.6 • channel stable • https://github.com/flutter/flutter.git Framework • revision 1d9032c7e1 (2 weeks ago) • 2021-04-29 17:37:58 -0700 Engine • revision 05e680e202 Tools • Dart 2.12.3 Ok, that is why… you are upgrading the channel. FVM pulls the tag version directly. Also, Flutter versions can repeat across channels, so the information is correct.
gharchive/issue
2021-05-12T07:55:13
2025-04-01T06:44:46.982234
{ "authors": [ "abhaysood", "leoafarias" ], "repo": "leoafarias/fvm", "url": "https://github.com/leoafarias/fvm/issues/291", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
172753069
Configure terminal to use English To avoid problems with decimal points and commas, the terminal has to be in English. Current configuration on my home machine; I assume it is the same on the GAPHL ones » locale LANG=en_US.UTF-8 LANGUAGE=en_US LC_CTYPE=pt_BR.UTF-8 LC_NUMERIC=pt_BR.UTF-8 LC_TIME=pt_BR.UTF-8 LC_COLLATE="en_US.UTF-8" LC_MONETARY=pt_BR.UTF-8 LC_MESSAGES="en_US.UTF-8" LC_PAPER=pt_BR.UTF-8 LC_NAME=pt_BR.UTF-8 LC_ADDRESS=pt_BR.UTF-8 LC_TELEPHONE=pt_BR.UTF-8 LC_MEASUREMENT=pt_BR.UTF-8 LC_IDENTIFICATION=pt_BR.UTF-8 LC_ALL= On machine gaphl12, LANGUAGE= was blank. It is possible to switch the terminal to English by doing: LC_NUMERIC=en_US.UTF-8 How can I make this the default in the image?
gharchive/issue
2016-08-23T17:01:17
2025-04-01T06:44:46.984904
{ "authors": [ "leoheck" ], "repo": "leoheck/gaph-host-config", "url": "https://github.com/leoheck/gaph-host-config/issues/25", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
1644904712
Could the navigation/bookmarks page be more flexible? Dashboards usually have a more visual display: using icons, especially custom icons, lets users spot at a glance what they want to click or find. I tried it out briefly and found I can only set a URL and a caption, but a tiny cap really can't fit the remark. I don't know how to change it to look nicer, or to be freer. I also don't want to be limited to a keyboard-mapped layout, because the links I keep on my desktop or in bookmarks sometimes include many similar sites under the same domain, and those don't map well onto keyboard keys. Try increasing this value first.
gharchive/issue
2023-03-29T01:52:17
2025-04-01T06:44:46.990306
{ "authors": [ "KamiyaMinoru", "leon-kfd" ], "repo": "leon-kfd/Dashboard", "url": "https://github.com/leon-kfd/Dashboard/issues/93", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1793753533
Crash with create 0.5.1c+ A mixin crash occurs with: mod versions: minecraft 1.19.2, forge 1.19.2 - 43.2.14, create 0.5.1c+, irisfw 0.2.0, oculus 1.6.4. Other Minecraft versions are untested, but a crash can be expected. The crash is caused by a failed mixin: the original method moved due to the Flywheel upgrade in Create 0.5.1c and was refactored to ModelUtil#getBufferedData(Bufferable), which causes the existing mixin to fail. Related stack trace: Caused by: org.spongepowered.asm.mixin.throwables.MixinApplyError: Mixin [irisflw.mixins.flw.json:MixinModelUtil] from phase [DEFAULT] in config [irisflw.mixins.flw.json] FAILED during APPLY at org.spongepowered.asm.mixin.transformer.MixinProcessor.handleMixinError(MixinProcessor.java:636) ~[mixin-0.8.5.jar:0.8.5+Jenkins-b310.git-155314e6e91465dad727e621a569906a410cd6f4] {} at org.spongepowered.asm.mixin.transformer.MixinProcessor.handleMixinApplyError(MixinProcessor.java:588) ~[mixin-0.8.5.jar:0.8.5+Jenkins-b310.git-155314e6e91465dad727e621a569906a410cd6f4] {} at org.spongepowered.asm.mixin.transformer.MixinProcessor.applyMixins(MixinProcessor.java:363) ~[mixin-0.8.5.jar:0.8.5+Jenkins-b310.git-155314e6e91465dad727e621a569906a410cd6f4] {} ... 
76 more Caused by: org.spongepowered.asm.mixin.injection.throwables.InvalidInjectionException: Critical injection failure: Callback group @Group(name=getBufferBuilderHead, min=1, max=2) in irisflw.mixins.flw.json:MixinModelUtil failed injection check: expected 1 invocation(s) but 0 succeeded [ -> PostApply Phase -> irisflw.mixins.flw.json:MixinModelUtil] at org.spongepowered.asm.mixin.transformer.MixinTargetContext.postApply(MixinTargetContext.java:1262) ~[mixin-0.8.5.jar:0.8.5+Jenkins-b310.git-155314e6e91465dad727e621a569906a410cd6f4] {} at org.spongepowered.asm.mixin.transformer.MixinApplicatorStandard.apply(MixinApplicatorStandard.java:344) ~[mixin-0.8.5.jar:0.8.5+Jenkins-b310.git-155314e6e91465dad727e621a569906a410cd6f4] {} at org.spongepowered.asm.mixin.transformer.TargetClassContext.apply(TargetClassContext.java:383) ~[mixin-0.8.5.jar:0.8.5+Jenkins-b310.git-155314e6e91465dad727e621a569906a410cd6f4] {} at org.spongepowered.asm.mixin.transformer.TargetClassContext.applyMixins(TargetClassContext.java:365) ~[mixin-0.8.5.jar:0.8.5+Jenkins-b310.git-155314e6e91465dad727e621a569906a410cd6f4] {} at org.spongepowered.asm.mixin.transformer.MixinProcessor.applyMixins(MixinProcessor.java:363) ~[mixin-0.8.5.jar:0.8.5+Jenkins-b310.git-155314e6e91465dad727e621a569906a410cd6f4] {} ... 
76 more Caused by: org.spongepowered.asm.mixin.injection.throwables.InjectionValidationException: expected 1 invocation(s) but 0 succeeded at org.spongepowered.asm.mixin.injection.struct.InjectorGroupInfo.validate(InjectorGroupInfo.java:268) ~[mixin-0.8.5.jar:0.8.5+Jenkins-b310.git-155314e6e91465dad727e621a569906a410cd6f4] {} at org.spongepowered.asm.mixin.injection.struct.InjectorGroupInfo$Map.validateAll(InjectorGroupInfo.java:126) ~[mixin-0.8.5.jar:0.8.5+Jenkins-b310.git-155314e6e91465dad727e621a569906a410cd6f4] {} at org.spongepowered.asm.mixin.transformer.MixinTargetContext.postApply(MixinTargetContext.java:1255) ~[mixin-0.8.5.jar:0.8.5+Jenkins-b310.git-155314e6e91465dad727e621a569906a410cd6f4] {} at org.spongepowered.asm.mixin.transformer.MixinApplicatorStandard.apply(MixinApplicatorStandard.java:344) ~[mixin-0.8.5.jar:0.8.5+Jenkins-b310.git-155314e6e91465dad727e621a569906a410cd6f4] {} at org.spongepowered.asm.mixin.transformer.TargetClassContext.apply(TargetClassContext.java:383) ~[mixin-0.8.5.jar:0.8.5+Jenkins-b310.git-155314e6e91465dad727e621a569906a410cd6f4] {} at org.spongepowered.asm.mixin.transformer.TargetClassContext.applyMixins(TargetClassContext.java:365) ~[mixin-0.8.5.jar:0.8.5+Jenkins-b310.git-155314e6e91465dad727e621a569906a410cd6f4] {} at org.spongepowered.asm.mixin.transformer.MixinProcessor.applyMixins(MixinProcessor.java:363) ~[mixin-0.8.5.jar:0.8.5+Jenkins-b310.git-155314e6e91465dad727e621a569906a410cd6f4] {} ... 76 more Thanks!
gharchive/issue
2023-07-07T15:38:08
2025-04-01T06:44:46.995433
{ "authors": [ "leon-o", "n-507" ], "repo": "leon-o/iris-flw-compat", "url": "https://github.com/leon-o/iris-flw-compat/issues/55", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
898077666
Unneeded calls to F1 TV API for future weekends The app calls "/2.0/R/%s/BIG_SCREEN_HLS/ALL/PAGE/SANDWICH/F1_TV_Pro_Monthly/$GROUP_ID?meetingId=%s&title=weekend-sessions" for each weekend, but all future weekends return a 404. This is taking time and adds undesired load on the API. This can easily be fixed to only get the sessions for weekends for which the start time is in the past. @leonardoxh this one is implemented with #67 and can be closed as well.
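The proposed fix is just a time filter applied before any API call; a rough sketch (the field names here are assumptions, not the actual race-control-tv model):

```python
from datetime import datetime, timezone

def weekends_to_fetch(weekends, now=None):
    """Keep only weekends whose start time is already in the past,
    so the app never requests sessions for future meetings that 404."""
    now = now or datetime.now(timezone.utc)
    return [w for w in weekends if w["start_time"] <= now]

past = datetime(2021, 5, 1, tzinfo=timezone.utc)
future = datetime(2099, 1, 1, tzinfo=timezone.utc)
fetchable = weekends_to_fetch(
    [{"id": 1, "start_time": past}, {"id": 2, "start_time": future}],
    now=datetime(2021, 6, 1, tzinfo=timezone.utc),
)
print(len(fetchable))  # -> 1
```

This keeps the session requests bounded to weekends that can actually have data.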
gharchive/issue
2021-05-21T14:06:22
2025-04-01T06:44:47.003330
{ "authors": [ "bashopman" ], "repo": "leonardoxh/race-control-tv", "url": "https://github.com/leonardoxh/race-control-tv/issues/72", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
639106496
add CLI progress indicators for deploy command added in tag 6.6.18 to UI, need to get into the CLI deploy command, too @crstauf do you want to have a go at this for one of the deploy methods? (besides zip or netlify, as those don't really progress, just 1 zip file is built) you'll see in the UI that it's just using a "files remaining to download". My brain's a bit fried from looking at the deployer code, but you may have a clearer mind. There's some more complexity in GitLab and I think Bitbucket, due to them doing a multi-stage process to batch deploys. I'm happy with that for this release, but in future, we can add a new DB table DeployLog similar to CrawlLog, in order to track the original total of deployable files to use a % indicator, along with tracking which ones are DeployCache hits, which is similar to what https://github.com/WP2Static/wp2static is doing. No pressure/obligation, I can pick this up later @crstauf Sure, I can take a look tonight and see if I can wrap my brain around them enough to implement the progress bar. @leonstafford I got started with the PR (with BitBucket), but (and I can't believe I didn't realize this earlier) I won't be able to actually perform the deployments to test, because I don't have accounts with all the deployment methods, so I'll need your assistance to actually run the command. @crstauf haha, no worries - it can be a pain to set them all up, I'm afraid to blow away my main local dev instance for that reason! Just gave BitBucket a shot - it jumped from 20% (bootstrapping) to 100% and then hung there until complete. BitBucket and I think GitLab will always be a bit jumpy, as they can mark the files complete only when the batch is successful, so it's good to test with small batch sizes to confirm behaviour is as expected. @crstauf I've not yet implemented progress for the batches, so hopefully that'll be completed soon. I definitely expected it to stall for a while, but it shouldn't have been on 100%. I'll work on that. 
@leonstafford FYI I'm not going to get back to this unfortunately.
gharchive/issue
2020-06-15T19:56:22
2025-04-01T06:44:47.037237
{ "authors": [ "crstauf", "leonstafford" ], "repo": "leonstafford/static-html-output", "url": "https://github.com/leonstafford/static-html-output/issues/101", "license": "Unlicense", "license_type": "permissive", "license_source": "github-api" }
218788424
add error reporting tools for easier communication of errors by users / shorten feedback loop error submission form on plugin page standalone script able to be triggered most users may not bother to sign up/log in to WP.org to report an issue or open an email client closing, duplicate of https://github.com/leonstafford/wordpress-static-html-plugin/issues/91
gharchive/issue
2017-04-02T19:46:56
2025-04-01T06:44:47.039465
{ "authors": [ "leonstafford" ], "repo": "leonstafford/wordpress-static-html-plugin", "url": "https://github.com/leonstafford/wordpress-static-html-plugin/issues/18", "license": "Unlicense", "license_type": "permissive", "license_source": "github-api" }
2125494239
Local deployment failed My steps: 1. git clone https://github.com/leptonai/search_with_lepton.git 2. cd search_with_lepton 3. pip install -U leptonai 4. cd web && npm install && npm run build 5. npm run start Then I got > search@0.1.0 start > next start ▲ Next.js 14.0.4 - Local: http://localhost:3000 [Error: ENOENT: no such file or directory, open '/home/search_with_lepton/ui/BUILD_ID'] { errno: -2, code: 'ENOENT', syscall: 'open', path: '/home/search_with_lepton/ui/BUILD_ID' } Alright, I tried running BACKEND=GOOGLE python search_with_lepton.py and it works!
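The BACKEND=GOOGLE workaround suggests the Python entrypoint selects its search backend from an environment variable; a hedged sketch of that pattern (the function and backend names are assumptions, not the actual search_with_lepton code):

```python
import os

SUPPORTED_BACKENDS = {"LEPTON", "BING", "GOOGLE", "SERPER"}  # assumed set

def pick_backend(env=None):
    """Read the search backend from the BACKEND env var, defaulting to LEPTON."""
    env = os.environ if env is None else env
    backend = env.get("BACKEND", "LEPTON").upper()
    if backend not in SUPPORTED_BACKENDS:
        raise ValueError(f"unsupported backend: {backend}")
    return backend

print(pick_backend({"BACKEND": "GOOGLE"}))  # -> GOOGLE
```

Running the server through the Python entrypoint (rather than `next start` alone) is what lets this selection happen and serves the already-built UI.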
gharchive/issue
2024-02-08T15:45:21
2025-04-01T06:44:47.051529
{ "authors": [ "seanxuu" ], "repo": "leptonai/search_with_lepton", "url": "https://github.com/leptonai/search_with_lepton/issues/55", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1975242934
possible missing #[allow(unused_mut)] in component macro https://github.com/leptos-rs/leptos/blob/3adfd334df6f4116f1a043d97e9bb33b636edf86/leptos_macro/src/component.rs#L501C33-L501C33 Issue Recently I have been getting compiler warnings above one of my components. I do not have steps to reproduce. Maybe solution I could very well be wrong about this as I am not intimately familiar with the proc-macro side of Leptos. I believe there are some very niche scenarios where this mut self could throw a warning. Opinions It likely does not matter at all. On the contrary, one #[allow()] is not going to kill compile time. Could just be a desync between the proc-macro server and rust-analyzer. Regardless, let me know if you would want one added there for good measure, or don't want to worry about it.
gharchive/issue
2023-11-03T00:22:09
2025-04-01T06:44:47.054502
{ "authors": [ "ChristopherPerry6060", "gbj" ], "repo": "leptos-rs/leptos", "url": "https://github.com/leptos-rs/leptos/issues/1977", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1853429836
Math Magicians: Testing Hi, Together with my coding partner @As1imwe-Mark we did the following tasks: [ ] We created the testing files for the project: operate.test.js and calculate.test.js. [ ] We wrote the code for the tests. [ ] We ran linters. [ ] The app is working as we expected. Hi @Team👋🏼👋🏼👋🏼👋🏼, Good job so far! There are some issues that you still need to work on to go to the next project but you are almost there! Highlights✅✅✅ [x] Test cases have been written for all logic in operate.js and calculate.js👍🏼. [x] Written tests are passing👍🏼. [x] Linters are passing👍🏼. Required Changes ♻️ [ ] Nice job so far👍🏼, It is a requirement in this milestone that unit tests are created for all components😉. Kindly do well to implement that as you have only created tests for the logic😉. _Check the comments under the review._ Optional suggestions Every comment with the [OPTIONAL] prefix is not crucial enough to stop the approval of this PR. However, I strongly recommend you to take them into account as they can make your code better. Cheers and Happy coding!👏👏👏 Feel free to leave any questions or comments in the PR thread if something is not 100% clear. Please, remember to tag me in your question so I can receive the notification. Please, do not open a new Pull Request for re-reviews. You should use the same Pull Request submitted for the first review, either valid or invalid unless it is requested otherwise. As described in the Code reviews limits policy you have a limited number of reviews per project (check the exact number in your Dashboard). If you think that the code review was not fair, you can request a second opinion using this form. Hi, Thank you for noticing that, my apologies for the confusion. I updated the code so the components have their test files. I also ran the linters.
gharchive/pull-request
2023-08-16T15:03:56
2025-04-01T06:44:47.061600
{ "authors": [ "lerfast" ], "repo": "lerfast/math-magicians", "url": "https://github.com/lerfast/math-magicians/pull/6", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
935657611
Can cargo run, can't cargo build Hi @lerouxrgd, When using cargo run and cargo run --release, I can use the ngt-rs crate without any issues. Search works and it's really fast. However, when I use cargo build --release and run the binary, I get the following error: error while loading shared libraries: libngt.so.1: cannot open shared object file: No such file or directory I'm not sure if this is an issue with ngt-rs, an issue with the underlying NGT, or an issue with my setup. Do you have any thoughts on this? Ubuntu 18.04, ngt = "0.4.0" The only instances of libngt.so.1 on my machine are build artifacts: project_root/target/release/build/ngt-sys-7e531d7ff98f0bd6/out/lib/libngt.so.1 project_root/target/release/build/ngt-sys-7e531d7ff98f0bd6/out/build/lib/NGT/libngt.so.1 So I think this is an issue of ngt-rs somehow not using the right libs when building? Yes this is expected, the binary built with cargo build does not contain the shared library libngt.so.1. However libngt.so.1 has been built along with your regular binary, you can find it with something like: find target/release -regex .*out/lib/libngt.* For the final binary to work properly this libngt.so.1 shared library has to be in LD_LIBRARY_PATH. So something like this should work: LD_LIBRARY_PATH=/path/to/libngt.so.1 your_binary Usually shared libraries should be installed in /usr/lib, but it depends on your machine setup. Another way to install it on your machine could be to use cargo install --path . as I assume it would copy libngt.so.1 to the appropriate location on your system (I haven't tried though). So in your case you can try: LD_LIBRARY_PATH=project_root/target/release/build/ngt-sys-7e531d7ff98f0bd6/out/lib/ project_root Assuming you are running from project_root and your binary is called project_root It does work this way, thanks for the prompt reply!
gharchive/issue
2021-07-02T10:54:24
2025-04-01T06:44:47.090049
{ "authors": [ "lerouxrgd", "paulbricman" ], "repo": "lerouxrgd/ngt-rs", "url": "https://github.com/lerouxrgd/ngt-rs/issues/2", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
238403855
Use pre-instantiated ByteBufProcessor in RedisStateMachine to find EOL ByteBuf.indexOf(…) allocates a processor instance to find the matching byte. Reusing an instance prevents object allocation. Benchmark: Before: RedisStateMachineBenchmark.measureDecode avgt 5 192,823 ± 9,801 ns/op After: RedisStateMachineBenchmark.measureDecode avgt 5 174,228 ± 21,088 ns/op
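Language aside, the pattern is to hoist a stateless matcher out of the hot decode loop so nothing is allocated per call; a Python-flavoured sketch of the idea (Lettuce itself does this with a shared ByteBufProcessor in Java, so the names below are only illustrative):

```python
EOL = b"\r\n"  # RESP protocol lines are CRLF-terminated

# The terminator constant is created once at module scope and reused on every
# decode, instead of being rebuilt inside each call.
def find_eol(buf: bytes, start: int = 0) -> int:
    """Return the index of the CRLF terminator at or after `start`, or -1."""
    return buf.find(EOL, start)

print(find_eol(b"+OK\r\n"))  # -> 3
```

The Java win is larger than this sketch suggests, since a fresh `ByteBufProcessor` object would otherwise be allocated on every `indexOf` call in the decode hot path.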
gharchive/issue
2017-06-25T20:24:27
2025-04-01T06:44:47.151577
{ "authors": [ "mp911de" ], "repo": "lettuce-io/lettuce-core", "url": "https://github.com/lettuce-io/lettuce-core/issues/557", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
132601053
'Address to use' is empty in Mist Using Mist 0.3.6 on OS X 10.10.2 I've shared two identities w/ etherid.org, but when I go to claim a name, none are available. Adding etherid.org to the list of dapps on the left and then re-adding an identity worked. Mist is not yet released. The available version is quite old. Once the program is released officially, I will take a look.
gharchive/issue
2016-02-10T03:57:19
2025-04-01T06:44:47.173309
{ "authors": [ "RevCBH", "lexansoft" ], "repo": "lexansoft/etherid.org", "url": "https://github.com/lexansoft/etherid.org/issues/6", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
1170961748
Build docs only on merge to master (not for every PR). Solves #374 Codecov Report Merging #426 (98642a1) into main (232ff3d) will not change coverage. The diff coverage is n/a. @@ Coverage Diff @@ ## main #426 +/- ## ======================================= Coverage 88.18% 88.18% ======================================= Files 15 15 Lines 3065 3065 ======================================= Hits 2703 2703 Misses 362 362 Last update 232ff3d...98642a1.
gharchive/pull-request
2022-03-16T12:32:26
2025-04-01T06:44:47.181943
{ "authors": [ "codecov-commenter", "wuhu" ], "repo": "lf1-io/padl", "url": "https://github.com/lf1-io/padl/pull/426", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
283348902
Text not updating when modifying en.default.json Hello, I have noticed that when modifying en.default.json, the modification doesn't apply to the UI when tns refreshes the app, even though the before-prepare and after-prepare hooks are firing correctly. To get the updated text I have to run tns platform clean android then run tns run android. Executing before-prepare hook from G:\Users\Adil\Documents\Projects\FCash\front-end\fcash-app\hooks\before-prepare\nativescript-dev-android-snapshot.js Executing before-prepare hook from G:\Users\Adil\Documents\Projects\FCash\front-end\fcash-app\hooks\before-prepare\nativescript-dev-typescript.js Executing before-prepare hook from G:\Users\Adil\Documents\Projects\FCash\front-end\fcash-app\hooks\before-prepare\nativescript-localize.js Preparing project... Project successfully prepared (Android) Executing after-prepare hook from G:\Users\Adil\Documents\Projects\FCash\front-end\fcash-app\hooks\after-prepare\nativescript-dev-android-snapshot.js Successfully transferred en.default.json. Refreshing application... Successfully synced application org.nativescript.Groceries on device 17a7d2ad. Skipping prepare. ActivityManager: Start proc 13664:org.nativescript.Groceries/u0a139 for activity org.nativescript.Groceries/com.tns.NativeScriptActivity JS: Angular is running in the development mode. Call enableProdMode() to enable the production mode. Successfully transferred strings.xml. Refreshing application... Successfully synced application org.nativescript.Groceries on device 17a7d2ad. Hi, I noticed this problem too. I created the issue NativeScript/nativescript-cli#3251 as I think it is linked to the way nativescript-cli handles changes. A change should trigger a new build as application resources change. For now what you do is a good workaround. Regards. I'll keep this issue open since this is still a problem
gharchive/issue
2017-12-19T19:49:21
2025-04-01T06:44:47.187383
{ "authors": [ "adil-boukdair", "lfabreges" ], "repo": "lfabreges/nativescript-localize", "url": "https://github.com/lfabreges/nativescript-localize/issues/15", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
1838720522
Timezone problem with the date format The date format has a timezone problem, and the code uses a global default configuration. Could you support globally configuring the timezone and the default date serialization format? Currently LocalDateTime supports these formats: timestamp, yyyy-MM-dd'T'HH:mm:ss, yyyy-MM-dd HH:mm:ss, yyyy-MM-dd. Only timestamps use the system default timezone; the other cases don't involve the concept of a timezone at all.
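The reply above amounts to trying a fixed list of patterns and applying a zone only to epoch timestamps; a Python sketch of that logic (not the actual any-door Java code):

```python
from datetime import datetime

# Same candidate formats as LocalDateTime supports, tried in order.
PATTERNS = ["%Y-%m-%dT%H:%M:%S", "%Y-%m-%d %H:%M:%S", "%Y-%m-%d"]

def parse_local_date_time(value):
    """Epoch timestamps get the system default zone; the string patterns
    carry no zone information at all."""
    if isinstance(value, (int, float)):
        return datetime.fromtimestamp(value)  # uses the local default zone
    for pattern in PATTERNS:
        try:
            return datetime.strptime(value, pattern)
        except ValueError:
            continue
    raise ValueError(f"unparseable date: {value!r}")
```

This makes clear why only the timestamp path would be affected by a global timezone setting.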
gharchive/issue
2023-08-07T05:41:42
2025-04-01T06:44:47.223561
{ "authors": [ "clickear", "lgp547" ], "repo": "lgp547/any-door", "url": "https://github.com/lgp547/any-door/issues/26", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2498792832
TOPページの実装 概要 現在、TOPページの各コンポーネント、編集用モーダルが、別イシューにて作業されています! 上記が完了したら、これらを連携させ、TOPページを完成させます。 タスク [ ] メインセクションの実装 [ ] 編集用モーダルの実装&連携 [ ] TOPページに、GitHubからコントリビュートできる「cta セクション(もしくは、contribute-offer セクション)」の作成 その他チェック: [ ] 不要なコメントアウト、ダミーデータの削除 TOPページの処理について https://github.com/lgtm-factory/lgtm-factory/discussions/99 #167 にて、マージ済みのPRにレビューしてしまったので、こちらのイシューでレビューしたCSSの部分もリファクタリングできれば、と思います! 以下、修正希望箇所です🫶 https://github.com/lgtm-factory/lgtm-factory/pull/167#pullrequestreview-2274227338 ちょっと変更点多すぎてレビューが大変になると思うので、イシュー分割します!(すでに大変かも……😭すみません💦) メモ:引き継ぎタスク [ ] EditAreaコンポーネント(モーダルの編集部分)の機能実装 [ ] CTAセクションの作成 [ ] ImageInfoModalコンポーネント(モーダル)の、ボタンの色や背景の色、オーバーレイの色などをサイトイメージに合わせて調整する [ ] baseUrlの環境による出し分け 出し分けをしてないため、現在開発環境ではDLボタンが効きません
gharchive/issue
2024-08-31T11:09:53
2025-04-01T06:44:47.230956
{ "authors": [ "kagomen", "kazzyfrog", "siso001" ], "repo": "lgtm-factory/lgtm-factory", "url": "https://github.com/lgtm-factory/lgtm-factory/issues/166", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
343494741
Pre-built docker image with all the dependencies installed Install docker on any 64-bit Linux host OS: wget -qO- https://get.docker.com/ | sh sudo usermod -aG docker $USER sudo systemctl enable docker sudo systemctl restart docker # Reboot to make sure the Unix group membership in /etc/groups is configured for new logins Pull the image: docker pull daocloud.io/liuqun1986/tensorflow-on-arm Start a container for cross-building: docker run -it -v /root/userconfigs:/root/userconfigs -v /tmp/tensorflow_pkg:/tmp/tensorflow_pkg \ daocloud.io/liuqun1986/tensorflow-on-arm:latest Inside the container: cd /root ./build_tensorflow.sh /root/configs/rpi.conf The docker image was built using an online cloud service provider from China called DaoCloud. I've set up a personal open source project at https://dashboard.daocloud.io/packages/5f8ba788-21e8-4308-90c5-3025e1fb0190 The Dockerfile is hosted on GitHub: https://github.com/liuqun/tensorflow-on-arm/blob/docker-without-travis/build_tensorflow/Dockerfile
gharchive/issue
2018-07-23T05:43:53
2025-04-01T06:44:47.243183
{ "authors": [ "liuqun" ], "repo": "lhelontra/tensorflow-on-arm", "url": "https://github.com/lhelontra/tensorflow-on-arm/issues/22", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
954658311
The image-viewer popup stutters when swiping after setting the onSrcViewUpdate method After setting the onSrcViewUpdate update listener, switching images in the viewer stutters; without it, it doesn't. But if it isn't set, the shared element looks a bit odd when returning while not on the starting photo. The official demo doesn't stutter — probably because its images are small and low-resolution; I read the demo's source and its implementation wouldn't run into this situation. It may not be obvious in the GIF, but it's very obvious on a phone. @9292922 Are all of your images uncompressed? What is your use case? They are uncompressed (reading local image URIs) — it's for full-size image browsing, since that's very convenient. If onSrcViewUpdate is not set, the image popup's speed is fine; it only stutters once it is set. If they're uncompressed, the stutter is expected — the time is mostly spent on decode.
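Since the cost is in decoding full-resolution bitmaps, the usual mitigation on Android is to decode at a power-of-two sample size bounded by the target view (BitmapFactory.Options.inSampleSize); the arithmetic, sketched in Python:

```python
def compute_in_sample_size(width, height, req_width, req_height):
    """Android's standard inSampleSize calculation: the largest power of two
    that keeps both decoded dimensions at least as large as the request."""
    sample = 1
    if height > req_height or width > req_width:
        half_w, half_h = width // 2, height // 2
        while half_w // sample >= req_width and half_h // sample >= req_height:
            sample *= 2
    return sample

print(compute_in_sample_size(4000, 3000, 1080, 1080))  # -> 2
```

Decoding a 4000x3000 source at sample size 2 touches a quarter of the pixels, which is typically the difference between a smooth swipe and a stutter.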
gharchive/issue
2021-07-28T09:26:17
2025-04-01T06:44:47.262964
{ "authors": [ "9292922", "li-xiaojun" ], "repo": "li-xiaojun/XPopup", "url": "https://github.com/li-xiaojun/XPopup/issues/770", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2257917513
Optimize and add device images Reduce images folder size from 2.84 MB to 1.41 MB using PNG Optimize for web and resizing. Reduced all images to max size of 500 x 500. Add photos for Private, Unknown, DIY. Icons from Wikimedia. Add Icon for Heltec HT62 (Chip module) Image from Heltec's website. Let me know if there would be a better icon for "DIY". Add Image for Heltec Wireless Tracker V1.0 since it looks fairly similar to V1.1. Thanks! I did actually run the images through tinypng to optimise, but I didn't cap the size, so that helps a bit. Personally not a fan of the wikimedia icons. Let's remove those. I don't think they help out at all when showing in the map UI, just takes up space without really showing you what the hardware is. ✅ Reduce images folder size from 2.84 MB to 1.41 MB using PNG Optimize for web and resizing. ✅ Reduced all images to max size of 500 x 500. ❌ Add photos for Private, Unknown, DIY. Icons from Wikimedia. ✅ Add Icon for Heltec HT62 (Chip module) Image from Heltec's website. ✅ Add Image for Heltec Wireless Tracker V1.0 since it looks fairly similar to V1.1. Let me know if there would be a better icon for "DIY". I think it's fine to have no image for now. If you can remove those wikimedia images, I'll merge in. Map UI Device List Looks like this one actually increased in size lol. But no problem... Ok thanks. I didn't realize the images would show so large. I will create a issue to add error checking. Currently, the site requests 2 dozen images that don't exist. Maybe there can be a check before it does its GET request? Thanks. OK, I added the HELTEC_HT62 image, thinking that there were many of those devices, since a few days ago it showed up high in the list of devices. Now, there are only 2, and also the image isn't showing up. I wonder what happened? Those were probably the nodes I deleted from the map for spam. 
Discord thread for reference: https://discord.com/channels/867578229534359593/871553604652240948/1230098501753638994 I cannot access this thread. Which Discord server / channel is this on? @CamFlyerCH it's in the mqtt channel of the official Meshtastic Discord. You can find the link to it on https://meshtastic.org as generating invite links is disabled.
gharchive/pull-request
2024-04-23T04:28:48
2025-04-01T06:44:47.271134
{ "authors": [ "CamFlyerCH", "GMart", "liamcottle" ], "repo": "liamcottle/meshtastic-map", "url": "https://github.com/liamcottle/meshtastic-map/pull/18", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
423391594
[Question] How do I select a font? There is no setting in the configuration example to set the font to be used. How do I set it to a font I like? Also are bitmap fonts supported (e.g. terminus font)? Closed due to complete rewrite as part of bringing the project back to life, please create a new issue if still relevant. Thank you! Closed due to complete rewrite as part of bringing the project back to life, please create a new issue if still relevant. Thank you! Is there any info on the rewrite? Cool to see progress on this project! Hey @pinpox - nothing more than is in the readme at the moment, but I'd like to invest some more time on it now. What sort of thing did you have in mind? I can put a blog post or something together if there is any interest. I can put a blog post or something together if there is any interest. Would like to read that! I'm just generally interested in the motivation and state of the project as well as a roadmap on what the plans are and where it's going. Have a great day!
gharchive/issue
2019-03-20T17:50:27
2025-04-01T06:44:47.274198
{ "authors": [ "liamg", "pinpox" ], "repo": "liamg/darktile", "url": "https://github.com/liamg/darktile/issues/272", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1423727057
Code overlaps the line numbers when scrolling Describe the problem: After enabling the scrollbar per the method in #39, the code overlaps the line numbers when it scrolls. Screenshot System (using the latest Typora version is recommended) OS: windows 22H2 build 22621.675 Theme: Drake Typora Version: 1.4.8 This may be a problem specific to the Windows build; I'll reproduce it on Windows when I have some time. This issue in the light theme has now been fixed.
gharchive/issue
2022-10-26T09:35:19
2025-04-01T06:44:47.287829
{ "authors": [ "liangjingkanji", "xtsang" ], "repo": "liangjingkanji/DrakeTyporaTheme", "url": "https://github.com/liangjingkanji/DrakeTyporaTheme/issues/108", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
55782972
ListView + MaterialStyle Hi, Great component! :+1: I tried implementing the material style with a ListView and got it working. However, I can scroll down the list but not back up; the list refreshes instead of showing the previous row. Here's my code <in.srain.cube.views.ptr.PtrFrameLayout android:id="@+id/material_style_ptr_frame" xmlns:cube_ptr="http://schemas.android.com/apk/res-auto" android:layout_width="match_parent" android:layout_height="match_parent" cube_ptr:ptr_duration_to_close="300" cube_ptr:ptr_duration_to_close_header="2000" cube_ptr:ptr_keep_header_when_refresh="true" cube_ptr:ptr_pull_to_fresh="false" cube_ptr:ptr_ratio_of_header_height_to_refresh="1.2" cube_ptr:ptr_resistance="1.7"> <ListView android:layout_width="match_parent" android:layout_height="match_parent" android:id="@android:id/list" android:layout_alignParentTop="true" android:layout_alignParentLeft="true" android:layout_alignParentStart="true" android:background="@android:color/white" android:clickable="true" android:choiceMode="singleChoice" android:smoothScrollbar="true" android:clipToPadding="false"/> </in.srain.cube.views.ptr.PtrFrameLayout> Check your checkCanDoRefresh method in PtrHandler. There are some flaws in PtrDefaultHandler. Got it! Thanks! @Override public boolean checkCanDoRefresh(PtrFrameLayout frame, View content, View header) { return PtrDefaultHandler.checkContentCanBePulledDown(frame, content, header); }
gharchive/issue
2015-01-28T16:54:47
2025-04-01T06:44:47.290400
{ "authors": [ "fahrulazmi", "liaohuqiu" ], "repo": "liaohuqiu/android-Ultra-Pull-To-Refresh", "url": "https://github.com/liaohuqiu/android-Ultra-Pull-To-Refresh/issues/44", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
2155384129
Optionally cache parsed /proc/<pid>/maps entries during normalization This change adds the necessary logic for caching the parsed /proc/<pid>/maps data on a per-process basis, when the Normalizer's 'reevaluate_maps' property is false. Having caching in place can help speed up repeated address normalization for the process, but because there does not appear to be a way to detect up-to-dateness of the cached /proc/<pid>/maps, there exists the potential for correctness issues. main/normalize_process time: [36.494 µs 37.249 µs 38.141 µs] main/normalize_process_no_build_ids time: [28.232 µs 28.482 µs 28.709 µs] main/normalize_process_no_build_ids_cached time: [1.1735 µs 1.1839 µs 1.1934 µs] Will add C API bindings; they are straightforward and this can be reviewed without them. FYI @simpleton @salvatorebenedetto Going ahead with the merge. Thanks for the inline and in-person comments.
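The trade-off described above — cache `/proc/<pid>/maps` per process, accept staleness when the cache is not re-evaluated — can be illustrated with a short Python sketch. This is not blazesym's actual Rust implementation; `parse_maps_line`, `cached_maps`, and the `reevaluate` flag (mirroring `reevaluate_maps`) are illustrative names.

```python
import os

def parse_maps_line(line):
    """Parse one /proc/<pid>/maps line: address perms offset dev inode [path]."""
    fields = line.split(None, 5)
    start, end = (int(x, 16) for x in fields[0].split("-"))
    return {
        "start": start,
        "end": end,
        "perms": fields[1],
        "offset": int(fields[2], 16),
        # Anonymous mappings have no pathname field at all.
        "path": fields[5].strip() if len(fields) > 5 else None,
    }

# pid -> parsed entries. As in the change above, there is no way to detect
# whether the cached data is stale, so reuse trades correctness for speed.
_maps_cache = {}

def cached_maps(pid, reevaluate=False):
    """Return parsed maps entries for pid, reusing the cache unless asked not to."""
    if reevaluate or pid not in _maps_cache:
        with open(f"/proc/{pid}/maps") as f:
            _maps_cache[pid] = [parse_maps_line(l) for l in f]
    return _maps_cache[pid]
```

Repeated normalization for the same pid then skips the read-and-parse work entirely, which is what the ~30x improvement in the cached benchmark above reflects.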
gharchive/pull-request
2024-02-26T23:59:58
2025-04-01T06:44:47.309850
{ "authors": [ "danielocfb" ], "repo": "libbpf/blazesym", "url": "https://github.com/libbpf/blazesym/pull/553", "license": "BSD-3-Clause", "license_type": "permissive", "license_source": "github-api" }
481652689
Identify a suitable content type(s) to target for MVP. We need to identify the content type(s) that we will target for support in the MVP. DoD: [ ] A decision on which content type(s) must be supported in the MVP @JGilbert-eLife Could you have a think about some DoD criteria for this issue? Google sheet chart of article types and what they contain: https://docs.google.com/spreadsheets/d/1_MraWwutB1EgSUItXk0C5-Pqp7XZ2ikhhFx9AVjEl64/edit#gid=0
gharchive/issue
2019-08-16T15:03:38
2025-04-01T06:44:47.329615
{ "authors": [ "JGilbert-eLife", "joelsummerfield" ], "repo": "libero/producer", "url": "https://github.com/libero/producer/issues/9", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }