| id (string, 4–10 chars) | text (string, 4–2.14M chars) | source (string, 2 classes) | created (timestamp[s], 2001-05-16 21:05:09 – 2025-01-01 03:38:30) | added (string date, 2025-04-01 04:05:38 – 2025-04-01 07:14:06) | metadata (dict) |
|---|---|---|---|---|---|
67108932 | Reinforced Concrete
Will the concrete blocks and their different tiers be re-added to ICBM, or will they be in another mod?
I would really like to have them back for a wither farm or a nuclear bunker :D
They are being remade in a WIP mod called Military Base Decor. If you need a replacement in the meantime, you could check out my personal mod project (not BuiltBroken): Blastcraft, located here: http://www.curse.com/mc-mods/minecraft/224916-blastcraft
| gharchive/issue | 2015-04-08T11:33:17 | 2025-04-01T04:32:20.808633 | {
"authors": [
"Hennamann",
"Tomson124"
],
"repo": "BuiltBrokenModding/ICBM",
"url": "https://github.com/BuiltBrokenModding/ICBM/issues/92",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
78345106 | Crashes when I load or create a new world
Hello
I have the most recent versions of ICBM and VoltzEngine. Minecraft loads and everything is fine, but when I create or load a world it says "Building terrain", then "Shutting down internal server", and the game crashes.
Running Windows 8.1
---- Minecraft Crash Report ----
// Quite honestly, I wouldn't worry myself about that.
Time: 5/19/15 8:51 PM
Description: Unexpected error
java.lang.IndexOutOfBoundsException
at java.nio.Buffer.checkIndex(Buffer.java:540)
at java.nio.DirectIntBufferU.get(DirectIntBufferU.java:253)
at net.minecraft.client.renderer.RenderGlobal.func_72712_a(RenderGlobal.java:350)
at net.minecraft.client.renderer.RenderGlobal.func_72732_a(RenderGlobal.java:294)
at net.minecraft.client.Minecraft.func_71353_a(Minecraft.java:2216)
at net.minecraft.client.Minecraft.func_71403_a(Minecraft.java:2146)
at net.minecraft.client.network.NetHandlerPlayClient.func_147282_a(NetHandlerPlayClient.java:240)
at net.minecraft.network.play.server.S01PacketJoinGame.func_148833_a(SourceFile:70)
at net.minecraft.network.play.server.S01PacketJoinGame.func_148833_a(SourceFile:13)
at net.minecraft.network.NetworkManager.func_74428_b(NetworkManager.java:212)
at net.minecraft.client.Minecraft.func_71407_l(Minecraft.java:2061)
at net.minecraft.client.Minecraft.func_71411_J(Minecraft.java:973)
at net.minecraft.client.Minecraft.func_99999_d(Minecraft.java:898)
at net.minecraft.client.main.Main.main(SourceFile:148)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:483)
at net.minecraft.launchwrapper.Launch.launch(Launch.java:135)
at net.minecraft.launchwrapper.Launch.main(Launch.java:28)
A detailed walkthrough of the error, its code path and all known details is as follows:
-- Head --
Stacktrace:
at java.nio.Buffer.checkIndex(Buffer.java:540)
at java.nio.DirectIntBufferU.get(DirectIntBufferU.java:253)
at net.minecraft.client.renderer.RenderGlobal.func_72712_a(RenderGlobal.java:350)
at net.minecraft.client.renderer.RenderGlobal.func_72732_a(RenderGlobal.java:294)
at net.minecraft.client.Minecraft.func_71353_a(Minecraft.java:2216)
at net.minecraft.client.Minecraft.func_71403_a(Minecraft.java:2146)
at net.minecraft.client.network.NetHandlerPlayClient.func_147282_a(NetHandlerPlayClient.java:240)
at net.minecraft.network.play.server.S01PacketJoinGame.func_148833_a(SourceFile:70)
at net.minecraft.network.play.server.S01PacketJoinGame.func_148833_a(SourceFile:13)
at net.minecraft.network.NetworkManager.func_74428_b(NetworkManager.java:212)
-- Affected level --
Details:
Level name: MpServer
All players: 0 total; []
Chunk stats: MultiplayerChunkCache: 0, 0
Level seed: 0
Level generator: ID 01 - flat, ver 0. Features enabled: false
Level generator options:
Level spawn location: World: (8,64,8), Chunk: (at 8,4,8 in 0,0; contains blocks 0,0,0 to 15,255,15), Region: (0,0; contains chunks 0,0 to 31,31, blocks 0,0,0 to 511,255,511)
Level time: 0 game time, 0 day time
Level dimension: 0
Level storage version: 0x00000 - Unknown?
Level weather: Rain time: 0 (now: false), thunder time: 0 (now: false)
Level game mode: Game mode: creative (ID 1). Hardcore: false. Cheats: false
Forced entities: 0 total; []
Retry entities: 0 total; []
Server brand: ERROR NullPointerException: null
Server type: Integrated singleplayer server
Stacktrace:
at net.minecraft.client.multiplayer.WorldClient.func_72914_a(WorldClient.java:373)
at net.minecraft.client.Minecraft.func_71396_d(Minecraft.java:2444)
at net.minecraft.client.Minecraft.func_99999_d(Minecraft.java:927)
at net.minecraft.client.main.Main.main(SourceFile:148)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:483)
at net.minecraft.launchwrapper.Launch.launch(Launch.java:135)
at net.minecraft.launchwrapper.Launch.main(Launch.java:28)
-- System Details --
Details:
Minecraft Version: 1.7.10
Operating System: Windows 8.1 (amd64) version 6.3
Java Version: 1.8.0_25, Oracle Corporation
Java VM Version: Java HotSpot(TM) 64-Bit Server VM (mixed mode), Oracle Corporation
Memory: 44557344 bytes (42 MB) / 259624960 bytes (247 MB) up to 4281597952 bytes (4083 MB)
JVM Flags: 6 total; -XX:HeapDumpPath=MojangTricksIntelDriversForPerformance_javaw.exe_minecraft.exe.heapdump -Xmx4G -XX:+UseConcMarkSweepGC -XX:+CMSIncrementalMode -XX:-UseAdaptiveSizePolicy -Xmn128M
AABB Pool Size: 0 (0 bytes; 0 MB) allocated, 0 (0 bytes; 0 MB) used
IntCache: cache: 0, tcache: 0, allocated: 0, tallocated: 0
FML: MCP v9.05 FML v7.10.140.1401 Minecraft Forge 10.13.3.1401 5 mods loaded, 5 mods active
mcp{9.05} [Minecraft Coder Pack] (minecraft.jar) Unloaded->Constructed->Pre-initialized->Initialized->Post-initialized->Available->Available->Available->Available
FML{7.10.140.1401} [Forge Mod Loader] (forge-1.7.10-10.13.3.1401-1710ls.jar) Unloaded->Constructed->Pre-initialized->Initialized->Post-initialized->Available->Available->Available->Available
Forge{10.13.3.1401} [Minecraft Forge] (forge-1.7.10-10.13.3.1401-1710ls.jar) Unloaded->Constructed->Pre-initialized->Initialized->Post-initialized->Available->Available->Available->Available
VoltzEngine{0.3.3} [Voltz Engine] (VoltzEngine-1.7.10-0.3.3b230-dev.jar) Unloaded->Constructed->Pre-initialized->Initialized->Post-initialized->Available->Available->Available->Available
icbm{2.4.0.240} [ICBM] (ICBM-1.7.10-2.4.0b240-dev.jar) Unloaded->Constructed->Pre-initialized->Initialized->Post-initialized->Available->Available->Available->Available
GL info: ' Vendor: 'Intel' Version: '4.0.0 - Build 10.18.10.3621' Renderer: 'Intel(R) HD Graphics 4000'
Launched Version: 1.7.10-Forge10.13.3.1401-1710ls
LWJGL: 2.9.1
OpenGL: Intel(R) HD Graphics 4000 GL version 4.0.0 - Build 10.18.10.3621, Intel
GL Caps: Using GL 1.3 multitexturing.
Using framebuffer objects because OpenGL 3.0 is supported and separate blending is supported.
Anisotropic filtering is supported and maximum anisotropy is 16.
Shaders are available because OpenGL 2.1 is supported.
Is Modded: Definitely; Client brand changed to 'fml,forge'
Type: Client (map_client.txt)
Resource Packs: [Jon New New.zip]
Current Language: Canadian English (Canada)
Profiler Position: N/A (disabled)
Vec3 Pool Size: 0 (0 bytes; 0 MB) allocated, 0 (0 bytes; 0 MB) used
Anisotropic Filtering: Off (1)
This is being run on a normal Forge setup, right?
Does this error happen every time you create a new world, or did it only happen once? It looks like an error within Minecraft itself; I'm guessing a hardware/firmware issue on your side. I'm unable to recreate the error myself, so any information you could add would be appreciated.
Closing due to no additional info and inability to replicate. Re-open with more info.
I'm new to GitHub so I'm not really sure how it works, but with the latest build of ICBM I also get a crash on a normal setup, and I have tested that it works when ICBM is removed. So whenever I create a new world my game just crashes, but it works fine on old worlds.
Could you post a crash report like the one above? Perhaps yours contains more information on the crash.
Hmmmm, I just attempted to create a new world and, unlike before, the game no longer crashes, but it is extremely laggy to the point where I can't really do anything. That might be because of another mod, though. If it happens again I'll let you know.
Lag is probably another mod or hardware; try tweaking the settings. I'll be closing this again, but feel free to comment if you get more information.
---- Minecraft Crash Report ----
// There are four lights!
Time: 15/06/15 17:43
Description: Rendering screen
java.lang.NullPointerException: Rendering screen
at com.arisux.airi.Updater$1.drawWindowContents(Updater.java:141)
at com.arisux.airi.lib.render.windowapi.themes.Theme.drawContents(Theme.java:49)
at com.arisux.airi.lib.render.windowapi.themes.Theme.drawWindow(Theme.java:26)
at com.arisux.airi.lib.render.windowapi.themes.ThemeDefault.drawWindow(ThemeDefault.java:17)
at com.arisux.airi.lib.windowapi.WindowAPI.drawWindow(WindowAPI.java:54)
at com.arisux.airi.lib.windowapi.WindowManager.func_73863_a(WindowManager.java:168)
at net.minecraft.client.renderer.EntityRenderer.func_78480_b(EntityRenderer.java:1351)
at weather2.weathersystem.EntityRendererProxyWeather2Mini.func_78480_b(EntityRendererProxyWeather2Mini.java:50)
at net.minecraft.client.Minecraft.func_71411_J(Minecraft.java:990)
at net.minecraft.client.Minecraft.func_99999_d(Minecraft.java:887)
at net.minecraft.client.main.Main.main(SourceFile:148)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
at java.lang.reflect.Method.invoke(Unknown Source)
at net.minecraft.launchwrapper.Launch.launch(Launch.java:135)
at net.minecraft.launchwrapper.Launch.main(Launch.java:28)
A detailed walkthrough of the error, its code path and all known details is as follows:
-- Head --
Stacktrace:
at com.arisux.airi.Updater$1.drawWindowContents(Updater.java:141)
at com.arisux.airi.lib.render.windowapi.themes.Theme.drawContents(Theme.java:49)
at com.arisux.airi.lib.render.windowapi.themes.Theme.drawWindow(Theme.java:26)
at com.arisux.airi.lib.render.windowapi.themes.ThemeDefault.drawWindow(ThemeDefault.java:17)
at com.arisux.airi.lib.windowapi.WindowAPI.drawWindow(WindowAPI.java:54)
at com.arisux.airi.lib.windowapi.WindowManager.func_73863_a(WindowManager.java:168)
-- Screen render details --
Details:
Screen name: com.arisux.airi.lib.windowapi.WindowManager
Mouse location: Scaled: (257, 135). Absolute: (1030, 538)
Screen size: Scaled: (480, 270). Absolute: (1920, 1080). Scale factor of 4
Stacktrace:
at net.minecraft.client.renderer.EntityRenderer.func_78480_b(EntityRenderer.java:1351)
at weather2.weathersystem.EntityRendererProxyWeather2Mini.func_78480_b(EntityRendererProxyWeather2Mini.java:50)
at net.minecraft.client.Minecraft.func_71411_J(Minecraft.java:990)
at net.minecraft.client.Minecraft.func_99999_d(Minecraft.java:887)
at net.minecraft.client.main.Main.main(SourceFile:148)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
at java.lang.reflect.Method.invoke(Unknown Source)
at net.minecraft.launchwrapper.Launch.launch(Launch.java:135)
at net.minecraft.launchwrapper.Launch.main(Launch.java:28)
-- System Details --
Details:
Minecraft Version: 1.7.10
Operating System: Windows 7 (amd64) version 6.1
Java Version: 1.8.0_25, Oracle Corporation
Java VM Version: Java HotSpot(TM) 64-Bit Server VM (mixed mode), Oracle Corporation
Memory: 1081287704 bytes (1031 MB) / 1860173824 bytes (1774 MB) up to 5726797824 bytes (5461 MB)
JVM Flags: 2 total; -XX:HeapDumpPath=MojangTricksIntelDriversForPerformance_javaw.exe_minecraft.exe.heapdump -Xmx6G
AABB Pool Size: 0 (0 bytes; 0 MB) allocated, 0 (0 bytes; 0 MB) used
IntCache: cache: 0, tcache: 0, allocated: 0, tallocated: 0
FML: MCP v9.05 FML v7.10.85.1230 Minecraft Forge 10.13.2.1230 Optifine OptiFine_1.7.10_HD_A4 50 mods loaded, 50 mods active
mcp{9.05} [Minecraft Coder Pack] (minecraft.jar) Unloaded->Constructed->Pre-initialized->Initialized->Post-initialized->Available->Available->Available->Available
FML{7.10.85.1230} [Forge Mod Loader] (forge-1.7.10-10.13.2.1230.jar) Unloaded->Constructed->Pre-initialized->Initialized->Post-initialized->Available->Available->Available->Available
Forge{10.13.2.1230} [Minecraft Forge] (forge-1.7.10-10.13.2.1230.jar) Unloaded->Constructed->Pre-initialized->Initialized->Post-initialized->Available->Available->Available->Available
appliedenergistics2-core{rv1-stable-1} [AppliedEnergistics2 Core] (minecraft.jar) Unloaded->Constructed->Pre-initialized->Initialized->Post-initialized->Available->Available->Available->Available
CodeChickenCore{1.0.2.9} [CodeChicken Core] (minecraft.jar) Unloaded->Constructed->Pre-initialized->Initialized->Post-initialized->Available->Available->Available->Available
{000} [CoFH ASM Data Initialization] (minecraft.jar) Unloaded->Constructed->Pre-initialized->Initialized->Post-initialized->Available->Available->Available->Available
TooManyItems{1.7.10} [TooManyItems] (minecraft.jar) Unloaded->Constructed->Pre-initialized->Initialized->Post-initialized->Available->Available->Available->Available
AIRI{3.0.0-1.7.10} [AIRI] (AIRI.jar) Unloaded->Constructed->Pre-initialized->Initialized->Post-initialized->Available->Available->Available->Available
AliensVsPredator{3.9.14-1.7.10} [AliensVsPredator] (AliensVsPredator.jar) Unloaded->Constructed->Pre-initialized->Initialized->Post-initialized->Available->Available->Available->Available
appliedenergistics2{rv1-stable-1} [Applied Energistics 2] (appliedenergistics2-rv1-stable-1.jar) Unloaded->Constructed->Pre-initialized->Initialized->Post-initialized->Available->Available->Available->Available
ArchimedesShips{1.7.10 v1.7.1} [Archimedes' Ships] (ArchimedesShips-1.7.1.jar) Unloaded->Constructed->Pre-initialized->Initialized->Post-initialized->Available->Available->Available->Available
betterstorage{0.10.3.115} [BetterStorage] (BetterStorage-1.7.10-0.10.3.115.jar) Unloaded->Constructed->Pre-initialized->Initialized->Post-initialized->Available->Available->Available->Available
CoFHCore{1.7.10R3.0.0B6} [CoFH Core] (CoFHCore-[1.7.10]3.0.0B6-32.jar) Unloaded->Constructed->Pre-initialized->Initialized->Post-initialized->Available->Available->Available->Available
ThermalFoundation{1.7.10R1.0.0B3} [Thermal Foundation] (ThermalFoundation-[1.7.10]1.0.0B3-8.jar) Unloaded->Constructed->Pre-initialized->Initialized->Post-initialized->Available->Available->Available->Available
ThermalExpansion{1.7.10R4.0.0B5} [Thermal Expansion] (ThermalExpansion-[1.7.10]4.0.0B5-13.jar) Unloaded->Constructed->Pre-initialized->Initialized->Post-initialized->Available->Available->Available->Available
BigReactors{0.4.0rc8} [Big Reactors] (BigReactors-0.4.0rc8.jar) Unloaded->Constructed->Pre-initialized->Initialized->Post-initialized->Available->Available->Available->Available
BuildCraft|Core{6.0.18} [BuildCraft] (buildcraft-6.0.18.jar) Unloaded->Constructed->Pre-initialized->Initialized->Post-initialized->Available->Available->Available->Available
BuildCraft|Builders{6.0.18} [BC Builders] (buildcraft-6.0.18.jar) Unloaded->Constructed->Pre-initialized->Initialized->Post-initialized->Available->Available->Available->Available
BuildCraft|Energy{6.0.18} [BC Energy] (buildcraft-6.0.18.jar) Unloaded->Constructed->Pre-initialized->Initialized->Post-initialized->Available->Available->Available->Available
BuildCraft|Factory{6.0.18} [BC Factory] (buildcraft-6.0.18.jar) Unloaded->Constructed->Pre-initialized->Initialized->Post-initialized->Available->Available->Available->Available
BuildCraft|Transport{6.0.18} [BC Transport] (buildcraft-6.0.18.jar) Unloaded->Constructed->Pre-initialized->Initialized->Post-initialized->Available->Available->Available->Available
BuildCraft|Silicon{6.0.18} [BC Silicon] (buildcraft-6.0.18.jar) Unloaded->Constructed->Pre-initialized->Initialized->Post-initialized->Available->Available->Available->Available
BuildMod{v1.0} [Build Mod] (CoroUtil-1.1.1.jar) Unloaded->Constructed->Pre-initialized->Initialized->Post-initialized->Available->Available->Available->Available
CoroAI{v1.0} [CoroAI] (CoroUtil-1.1.1.jar) Unloaded->Constructed->Pre-initialized->Initialized->Post-initialized->Available->Available->Available->Available
ExtendedRenderer{v1.0} [Extended Renderer] (CoroUtil-1.1.1.jar) Unloaded->Constructed->Pre-initialized->Initialized->Post-initialized->Available->Available->Available->Available
ConfigMod{v1.0} [Extended Mod Config] (CoroUtil-1.1.1.jar) Unloaded->Constructed->Pre-initialized->Initialized->Post-initialized->Available->Available->Available->Available
customnpcs{1.7.10b} [CustomNpcs] (CustomNPCs_1.7.10b.jar) Unloaded->Constructed->Pre-initialized->Initialized->Post-initialized->Available->Available->Available->Available
dualhotbar{1.6} [Dual Hotbar] (dualhotbar-1.7.10-1.6.jar) Unloaded->Constructed->Pre-initialized->Initialized->Post-initialized->Available->Available->Available->Available
eureka{2.2} [Eureka] (Eureka-1.7.10-2.2.jar) Unloaded->Constructed->Pre-initialized->Initialized->Post-initialized->Available->Available->Available->Available
VoltzEngine{0.3.3} [Voltz Engine] (VolzEngine-1.7.10-0.3.3b16.jar) Unloaded->Constructed->Pre-initialized->Initialized->Post-initialized->Available->Available->Available->Available
icbm{2.4.0.250} [ICBM] (ICBM-1.7.10-2.4.0b250-dev.jar) Unloaded->Constructed->Pre-initialized->Initialized->Post-initialized->Available->Available->Available->Available
mcheli{0.9.3} [MC Helicopter] (mcheli) Unloaded->Constructed->Pre-initialized->Initialized->Post-initialized->Available->Available->Available->Available
UniversalElectricity{@MAJOR@.@MINOR@.@REVIS@} [Universal Electricity] (universal-electricity-4.0.0.88-core.jar) Unloaded->Constructed->Pre-initialized->Initialized->Post-initialized->Available->Available->Available->Available
ForgeMultipart{1.1.0.307} [Forge Multipart] (ForgeMultipart-1.7.10-1.1.0.307-universal.jar) Unloaded->Constructed->Pre-initialized->Initialized->Post-initialized->Available->Available->Available->Available
Mekanism{7.1.0} [Mekanism] (Mekanism-1.7.10-7.1.0.92.jar) Unloaded->Constructed->Pre-initialized->Initialized->Post-initialized->Available->Available->Available->Available
MekanismGenerators{7.1.0} [MekanismGenerators] (MekanismGenerators-1.7.10-7.1.0.92.jar) Unloaded->Constructed->Pre-initialized->Initialized->Post-initialized->Available->Available->Available->Available
MekanismTools{7.1.0} [MekanismTools] (MekanismTools-1.7.10-7.1.0.92.jar) Unloaded->Constructed->Pre-initialized->Initialized->Post-initialized->Available->Available->Available->Available
minechem{5.0.5.229} [Minechem] (Minechem-1.7.10-5.0.5.229.jar) Unloaded->Constructed->Pre-initialized->Initialized->Post-initialized->Available->Available->Available->Available
ProjRed|Core{4.5.0.50} [ProjectRed] (ProjectRed-Base-1.7.10-4.5.0.50.jar) Unloaded->Constructed->Pre-initialized->Initialized->Post-initialized->Available->Available->Available->Available
ProjRed|Transmission{4.5.0.50} [ProjectRed-Transmission] (ProjectRed-Integration-1.7.10-4.5.0.50.jar) Unloaded->Constructed->Pre-initialized->Initialized->Post-initialized->Available->Available->Available->Available
ProjRed|Transportation{4.5.0.50} [ProjectRed-Transportation] (ProjectRed-Mechanical_beta-1.7.10-4.5.0.50.jar) Unloaded->Constructed->Pre-initialized->Initialized->Post-initialized->Available->Available->Available->Available
ProjRed|Exploration{4.5.0.50} [ProjectRed-Exploration] (ProjectRed-World-1.7.10-4.5.0.50.jar) Unloaded->Constructed->Pre-initialized->Initialized->Post-initialized->Available->Available->Available->Available
ProjRed|Compatibility{4.5.0.50} [ProjectRed-Compatibility] (ProjectRed-Compat-1.7.10-4.5.0.50.jar) Unloaded->Constructed->Pre-initialized->Initialized->Post-initialized->Available->Available->Available->Available
ProjRed|Integration{4.5.0.50} [ProjectRed-Integration] (ProjectRed-Integration-1.7.10-4.5.0.50.jar) Unloaded->Constructed->Pre-initialized->Initialized->Post-initialized->Available->Available->Available->Available
ProjRed|Illumination{4.5.0.50} [ProjectRed-Illumination] (ProjectRed-Lighting-1.7.10-4.5.0.50.jar) Unloaded->Constructed->Pre-initialized->Initialized->Post-initialized->Available->Available->Available->Available
ProjRed|Expansion{4.5.0.50} [ProjectRed-Expansion] (ProjectRed-Mechanical_beta-1.7.10-4.5.0.50.jar) Unloaded->Constructed->Pre-initialized->Initialized->Post-initialized->Available->Available->Available->Available
rivalrebels{1.7.10D} [Rival Rebels] (rivalrebels-1.7.10D.jar) Unloaded->Constructed->Pre-initialized->Initialized->Post-initialized->Available->Available->Available->Available
weather2{v2.3.6} [weather2] (Weather-2.3.7.jar) Unloaded->Constructed->Pre-initialized->Initialized->Post-initialized->Available->Available->Available->Available
McMultipart{1.1.0.307} [Minecraft Multipart Plugin] (ForgeMultipart-1.7.10-1.1.0.307-universal.jar) Unloaded->Constructed->Pre-initialized->Initialized->Post-initialized->Available->Available->Available->Available
ForgeMicroblock{1.1.0.307} [Forge Microblocks] (ForgeMultipart-1.7.10-1.1.0.307-universal.jar) Unloaded->Constructed->Pre-initialized->Initialized->Post-initialized->Available->Available->Available->Available
AE2 Version: stable rv1-stable-1 for Forge 10.13.0.1187
AE2 Integration: IC2:OFF, RotaryCraft:OFF, RC:OFF, BC:ON, MJ6:ON, MJ5:ON, RF:ON, RFItem:ON, MFR:OFF, DSU:ON, FZ:OFF, FMP:ON, RB:OFF, CLApi:OFF, Waila:OFF, InvTweaks:OFF, NEI:OFF, CraftGuide:OFF, Mekanism:ON, ImmibisMicroblocks:OFF, BetterStorage:ON
Launched Version: 1.7.10-Forge10.13.2.1230
LWJGL: 2.9.1
OpenGL: GeForce GTX 770/PCIe/SSE2 GL version 4.5.0 NVIDIA 353.06, NVIDIA Corporation
GL Caps: Using GL 1.3 multitexturing.
Using framebuffer objects because OpenGL 3.0 is supported and separate blending is supported.
Anisotropic filtering is supported and maximum anisotropy is 16.
Shaders are available because OpenGL 2.1 is supported.
Is Modded: Definitely; Client brand changed to 'fml,forge'
Type: Client (map_client.txt)
Resource Packs: [[1.7.10] Flow's HD 64x]
Current Language: English (US)
Profiler Position: N/A (disabled)
Vec3 Pool Size: 0 (0 bytes; 0 MB) allocated, 0 (0 bytes; 0 MB) used
Anisotropic Filtering: Off (1)
Running Windows 7 with 16 GB of RAM and a 4 GB GPU.
That crash report seems to be unrelated to this issue. It also looks to not be from ICBM or any of its dependencies.
From what I can tell from Google, the crash is most likely related to this mod: http://coros.us/mods/weather2
| gharchive/issue | 2015-05-20T03:03:04 | 2025-04-01T04:32:20.868438 | {
"authors": [
"DarkGuardsman",
"Hennamann",
"WinstonChurchill",
"jobensingh",
"sakej99"
],
"repo": "BuiltBrokenModding/ICBM",
"url": "https://github.com/BuiltBrokenModding/ICBM/issues/97",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2315386478 | Feature: Level Selection Menu
Description
The level selection screen shows the user all levels they can play.
The student will select a level
The student will go back to the main menu
Functional Requirements
[ ] It should be able to show the levels the student can play
[x] It should be able to select a level
[x] It should be able to link a scene to a level
Acceptance Criteria
[x] There should be a level selection scene
Dependencies
None
Definition of Done
[ ] Follows the naming conventions
[ ] Follows the coding conventions
[ ] Meets the acceptance criteria
[ ] Pull request has been opened
Reuse from C1
How will progression between levels work?
There is no progression; you can just choose any level in this case.
| gharchive/issue | 2024-05-24T13:13:09 | 2025-04-01T04:32:20.890864 | {
"authors": [
"MisakiCopperrose",
"Tig709"
],
"repo": "Burning-Equations/Function-Dungeon-II",
"url": "https://github.com/Burning-Equations/Function-Dungeon-II/issues/127",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2021413526 | Mobile views
dashboard
add sample top
add sample bottom
search
Looks very good. Could you also manage to sort out the menu? It currently doesn't work on smaller screens.
| gharchive/pull-request | 2023-12-01T18:49:15 | 2025-04-01T04:32:20.931571 | {
"authors": [
"Buzeqq",
"qaziok"
],
"repo": "Buzeqq/TERMINAL",
"url": "https://github.com/Buzeqq/TERMINAL/pull/91",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2500908870 | Cannot schedule a task: no free thread (timeout=0) (threads=10000, jobs=10000)
(you don't have to strictly follow this form)
Bug Report
Briefly describe the bug
2024.09.02 18:05:31.628863 [ 193978 ] {} <Error> PlanSegmentExecutor: [eeb644f5-c148-4dd7-951d-f02b95a9c3ec_2]: Query has excpetion with code: 439, detail
: Code: 439, e.displayText() = DB::Exception: Cannot schedule a task: no free thread (timeout=0) (threads=10000, jobs=10000) SQLSTATE: HY000, Stack trace (when copying this message, always include the lines below):
0. Poco::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int) @ 0x27227652 in /data1/byconity/clickhouse
1. DB::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int, bool) @ 0x10780640 in /data1/byconity/clickhouse
2. DB::Exception::Exception<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, unsigned long, unsigned long&>(int, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, unsigned long&&, unsigned long&) @ 0x107c0f52 in /data1/byconity/clickhouse
3. void ThreadPoolImpl<std::__1::thread>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda'(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&)::operator()(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) const @ 0x107c0b1e in /data1/byconity/clickhouse
4. void ThreadPoolImpl<std::__1::thread>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>) @ 0x107bb3a1 in /data1/byconity/clickhouse
5. ThreadPoolImpl<std::__1::thread>::scheduleOrThrow(std::__1::function<void ()>, int, unsigned long) @ 0x107bb9a6 in /data1/byconity/clickhouse
6. void std::__1::allocator_traits<std::__1::allocator<ThreadFromGlobalPool> >::construct<ThreadFromGlobalPool, DB::PipelineExecutor::executeImpl(unsigned long)::$_5>(std::__1::allocator<ThreadFromGlobalPool>&, ThreadFromGlobalPool*, DB::PipelineExecutor::executeImpl(unsigned long)::$_5&&) @ 0x2231cae2 in /data1/byconity/clickhouse
7. DB::PipelineExecutor::executeImpl(unsigned long) @ 0x22318dcf in /data1/byconity/clickhouse
8. DB::PipelineExecutor::execute(unsigned long) @ 0x22318b25 in /data1/byconity/clickhouse
9. DB::PlanSegmentExecutor::doExecute(std::__1::shared_ptr<DB::ThreadGroupStatus>) @ 0x216ceb0e in /data1/byconity/clickhouse
10. DB::PlanSegmentExecutor::execute(std::__1::shared_ptr<DB::ThreadGroupStatus>) @ 0x216cd530 in /data1/byconity/clickhouse
11. DB::executePlanSegmentInternal(std::__1::unique_ptr<DB::PlanSegmentInstance, std::__1::default_delete<DB::PlanSegmentInstance> >, std::__1::shared_ptr<DB::Context>, bool) @ 0x216c6b80 in /data1/byconity/clickhouse
12. void std::__1::__function::__policy_invoker<void ()>::__call_impl<std::__1::__function::__default_alloc_func<ThreadFromGlobalPool::ThreadFromGlobalPool<DB::PlanSegmentManagerRpcService::submitPlanSegment(google::protobuf::RpcController*, DB::Protos::SubmitPlanSegmentRequest const*, DB::Protos::ExecutePlanSegmentResponse*, google::protobuf::Closure*)::$_1>(DB::PlanSegmentManagerRpcService::submitPlanSegment(google::protobuf::RpcController*, DB::Protos::SubmitPlanSegmentRequest const*, DB::Protos::ExecutePlanSegmentResponse*, google::protobuf::Closure*)::$_1&&)::'lambda'(), void ()> >(std::__1::__function::__policy_storage const*) @ 0x216ac916 in /data1/byconity/clickhouse
13. ThreadPoolImpl<std::__1::thread>::worker(std::__1::__list_iterator<std::__1::thread, void*>) @ 0x107bcb80 in /data1/byconity/clickhouse
14. void* std::__1::__thread_proxy<std::__1::tuple<std::__1::unique_ptr<std::__1::__thread_struct, std::__1::default_delete<std::__1::__thread_struct> >, void ThreadPoolImpl<std::__1::thread>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda0'()> >(void*) @ 0x107c0ffa in /data1/byconity/clickhouse
15. start_thread @ 0x7e25 in /usr/lib64/libpthread-2.17.so
16. clone @ 0xfebad in /usr/lib64/libc-2.17.so
(version 21.8.7.1)
The result you expected
How to Reproduce
The load was not high and queries were normal; the error appeared suddenly, with one of the nodes in an abnormal state.
Version
0.4.1
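The "no free thread" error above is thrown by a bounded thread pool that rejects new jobs immediately once its job limit is reached, instead of blocking. Here is a minimal Python analogue of that rejection logic (an illustration only; it is not ByConity's actual C++ `ThreadPoolImpl`, and the class and method names are hypothetical):

```python
class BoundedScheduler:
    """Toy analogue of a scheduleOrThrow-style bounded thread pool:
    once the job limit is reached, new tasks are rejected at once
    (timeout=0) rather than waiting for a free thread."""

    def __init__(self, max_threads: int, max_jobs: int):
        self.max_threads = max_threads
        self.max_jobs = max_jobs
        self.scheduled = 0  # jobs currently queued or running

    def schedule_or_throw(self, job):
        if self.scheduled >= self.max_jobs:
            # Mirrors the error text seen in the log above.
            raise RuntimeError(
                "Cannot schedule a task: no free thread (timeout=0) "
                f"(threads={self.max_threads}, jobs={self.max_jobs})")
        self.scheduled += 1
        return job

    def finish_one(self):
        # Called when a job completes, freeing a slot.
        self.scheduled -= 1
```

In the crash above both counters sit at 10000, i.e. every worker slot is occupied, which usually points to queries piling up or getting stuck; hence the advice to inspect `system.processes` first.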
Could you share the corresponding query?
You need to check the system.processes table first to confirm whether any query is stuck. If no query is stuck, you can use:
# First find the corresponding process id
$ ps -ef | grep clickhouse-server
$ kill -s SIGUSR2 $clickhouse-server_pid
The abnormal node recovered after a restart. Next time a node's thread pool fills up, I'll troubleshoot it with this method.
The dump file can be turned into a core file, and then gdb can be used to get the stack traces of all threads; you could send me a copy:
./usr/breakpad/bin/minidump-2-core [dump_file] > test.core
gdb -q ./usr/bin/clickhouse ./test.core -ex "set pagination off" -ex "set print thread-events off" -ex "thread apply all bt" -ex "quit" > gdb.threads
@bright-zy Do you still have any questions about this?
@smmsmm1988 It hasn't happened again since; I'll keep monitoring.
| gharchive/issue | 2024-09-02T12:54:07 | 2025-04-01T04:32:20.939713 | {
"authors": [
"bright-zy",
"smmsmm1988",
"superhail"
],
"repo": "ByConity/ByConity",
"url": "https://github.com/ByConity/ByConity/issues/1844",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1795087909 | implement bitmap functions
Implement the BitMap functions using the underlying Bitset library that is used by many projects directly and indirectly.
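As a sketch of the technique being discussed, here is a minimal bitmap backed by a byte array (Python for illustration only; FlyDB itself is written in Go, and both its actual API and the underlying Bitset library's interface may differ):

```python
class Bitmap:
    """Illustrative bitmap: each bit i lives at bit (i % 8) of byte (i // 8)."""

    def __init__(self, nbits: int):
        # Round up to a whole number of bytes.
        self.bits = bytearray((nbits + 7) // 8)

    def set(self, i: int, value: bool = True) -> None:
        byte, off = divmod(i, 8)
        if value:
            self.bits[byte] |= 1 << off
        else:
            self.bits[byte] &= ~(1 << off) & 0xFF

    def get(self, i: int) -> bool:
        byte, off = divmod(i, 8)
        return bool(self.bits[byte] >> off & 1)

    def count(self) -> int:
        # Population count across all bytes.
        return sum(bin(b).count("1") for b in self.bits)
```

A real implementation would also handle growth beyond the initial size and persistence, which is what delegating to a dedicated bitset library buys you.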
Hello, thank you for your continued contributions to our community. If you are interested, we sincerely invite you to join our Slack group, where we can discuss current problems and room for improvement in community projects, and discuss the design of distributed systems together! Here is our invitation link: https://join.slack.com/t/bytestoragehq/shared_invite/zt-1z46kfp9c-7W1lYKM7urCp8HdgWwvxdA
thanks
| gharchive/pull-request | 2023-07-08T19:59:55 | 2025-04-01T04:32:20.990354 | {
"authors": [
"qishenonly",
"saeid-a",
"sjcsjc123"
],
"repo": "ByteStorage/FlyDB",
"url": "https://github.com/ByteStorage/FlyDB/pull/173",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2665340770 | namespace is automatically specified as required
Fixes the "Namespace not specified" error by specifying a namespace in the module's build file.
Thank you very much for your contribution. It has been adapted in v1.2.1 :https://pub.dev/packages/app_installer
| gharchive/pull-request | 2024-11-17T06:02:46 | 2025-04-01T04:32:20.993339 | {
"authors": [
"BytesZero",
"nespjin"
],
"repo": "BytesZero/app_installer",
"url": "https://github.com/BytesZero/app_installer/pull/18",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1337943701 | Fixes
closes #10 and #8
Letting this run for a while before merging
| gharchive/pull-request | 2022-08-13T12:05:00 | 2025-04-01T04:32:20.994071 | {
"authors": [
"C-3PFLO"
],
"repo": "C-3PFLO/flovatar-twitter-bot",
"url": "https://github.com/C-3PFLO/flovatar-twitter-bot/pull/11",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
591809548 | Use package.json version for ZIP version
Get the version value from the package.json and append it to the zip file name
https://github.com/C-Lodder/lightning/commit/37cb053099a37733b0bc02e78f8ccd933d411f83
| gharchive/issue | 2020-04-01T10:33:17 | 2025-04-01T04:32:20.998763 | {
"authors": [
"C-Lodder"
],
"repo": "C-Lodder/lightning",
"url": "https://github.com/C-Lodder/lightning/issues/32",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1801418599 | Can't see any emoji
I downloaded the file Blobmoji.ttf from the fonts directory. I opened the file in FontLab and none of the emojis are there?
It's probably just that the tool (or maybe your OS) does not support this kind of color font 🤷. The emojis are definitely there.
@C1710, I think he's talking about how this happens, he forgot to change it:
https://github.com/C1710/blobmoji/assets/92538982/613faf35-6928-4574-a070-e6e1996cf176
| gharchive/issue | 2023-07-12T17:19:45 | 2025-04-01T04:32:21.001261 | {
"authors": [
"C1710",
"Sayhone721",
"o-t-w"
],
"repo": "C1710/blobmoji",
"url": "https://github.com/C1710/blobmoji/issues/141",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2352964028 | 🛑 C21CoasttoCoast is down
In 5eb8829, C21CoasttoCoast (https://www.c21coasttocoast.com/) was down:
HTTP code: 0
Response time: 0 ms
Resolved: C21CoasttoCoast is back up in 4b7d913 after 54 minutes.
| gharchive/issue | 2024-06-14T09:36:48 | 2025-04-01T04:32:21.003587 | {
"authors": [
"C21coastsh"
],
"repo": "C21coastsh/ws-ut",
"url": "https://github.com/C21coastsh/ws-ut/issues/52",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2020476223 | Sysconfig update of Tasna and Alps' cray-mpich
None of our tests cover these changes, I think.
@dominichofer I suggest to only wait for confirmation from @huppd that at least ICON works with no mpi-gpu before merging. I suggest not to wait for fixing mpi-gpu, since osm could already start to use such a release to move on with the reforecast.
| gharchive/pull-request | 2023-12-01T09:32:43 | 2025-04-01T04:32:21.004585 | {
"authors": [
"dominichofer",
"lxavier"
],
"repo": "C2SM/spack-c2sm",
"url": "https://github.com/C2SM/spack-c2sm/pull/882",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2397532305 | [KM] AI and Blockchain
KM write-up
Time to complete: 7 hr
https://github.com/CAFECA-IO/KnowledgeManagement/blob/master/Blockchain/ai_and_blockchain.md
| gharchive/issue | 2024-07-09T08:37:56 | 2025-04-01T04:32:21.024550 | {
"authors": [
"JodieWu"
],
"repo": "CAFECA-IO/KnowledgeManagement",
"url": "https://github.com/CAFECA-IO/KnowledgeManagement/issues/220",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
314605424 | CAF-4213: Changing image to install java-11-openjdk
@dermot-hardy Can you also rename the repo to opensuse-jre11-image?
JIRA: https://jira.autonomy.com/browse/CAF-4213
Dev build: http://sou-jenkins2.hpeswlab.net/job/CAFapi/view/Developer/job/CAFapi~opensuse-jre9-image~CAF-4213~CI/
Updated project structure so that a java 11 jdk image can easily be added. Updated pom links with new repo name.
| gharchive/pull-request | 2018-04-16T11:14:04 | 2025-04-01T04:32:21.035644 | {
"authors": [
"michael-bryson"
],
"repo": "CAFapi/opensuse-java11-images",
"url": "https://github.com/CAFapi/opensuse-java11-images/pull/3",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1399164810 | Requesting support for Debian bullseye
Hi,
The instructions on https://bgpstream.caida.org/docs/install/bgpstream do not work for Debian bullseye. The install scripts add add a repo https://pkg.caida.org/os/debian/dists/bullseye/main to /etc/apt/sources.list.d/caida, but this repo does not actually exist yet -> you get a 404.
One could probably try to downgrade to buster or build from source, but support for bullseye would be much appreciated.
+1
And now bookworm is available. In /etc/apt/sources.list.d/caida.list I just set the distro to buster instead of bookworm, which seems to be fine for now.
Packages have now been built for Debian Bullseye (as well as Debian Bookworm and Ubuntu Jammy) and are available in the CAIDA package repository. If you still have the repository configured then you should be able to refresh the package index and install the package: sudo apt-get update && sudo apt-get install bgpstream. Let me know if you still have issues getting packages.
| gharchive/issue | 2022-10-06T09:08:47 | 2025-04-01T04:32:21.038559 | {
"authors": [
"brendonj",
"jtkristoff",
"mikrologic",
"ralphholz"
],
"repo": "CAIDA/libbgpstream",
"url": "https://github.com/CAIDA/libbgpstream/issues/229",
"license": "BSD-2-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
835958405 | make failed
So I am new to CANopen and wanted to run the examples/basicDevice but when I enter make, I am getting an error
../../CANopenNode/socketCAN/CO_main_basic.c:28:30: fatal error: bits/getopt_core.h: No such file or directory
After this I tried to make in the CANopenNode repository and I am getting the same error as well. (I am running this on Ubuntu Mate)
P.S. I already have a hardware CAN Bus where I am reading the data from a sensor by ESP32 and sending it over CAN to a Linux system. Now I want to implement CANopen on this
Just remove the unnecessary line with #include <bits/getopt_core.h>. (#include <unistd.h> should be enough.)
I fixed CANopenNode.
| gharchive/issue | 2021-03-19T12:53:08 | 2025-04-01T04:32:21.040872 | {
"authors": [
"CANopenNode",
"glalit15"
],
"repo": "CANopenNode/CANopenSocket",
"url": "https://github.com/CANopenNode/CANopenSocket/issues/26",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
2062859697 | 🛑 Test Broken Site is down
In 55ee6fb, Test Broken Site (https://radcord.xyz) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Test Broken Site is back up in f563530 after 18 minutes.
| gharchive/issue | 2024-01-02T19:38:41 | 2025-04-01T04:32:21.051221 | {
"authors": [
"CASPERg267"
],
"repo": "CASPERg267/Uptime-page",
"url": "https://github.com/CASPERg267/Uptime-page/issues/1524",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1648477360 | Update pipeline to run without contrasts
Currently run_contrasts is an option, but pipeline v2.0.0 assumes run_contrasts is set to "Y".
Solution:
Update rules and associated output files within Snakemake file to filter for if config["run_contrasts"] == "Y":
create_contrast_peakcaller_files
go_enrichment
complete with commit cf05634a240fb4cc5dd41a09e26e0312487afcf7
| gharchive/issue | 2023-03-31T00:03:15 | 2025-04-01T04:32:21.078403 | {
"authors": [
"slsevilla"
],
"repo": "CCBR/CARLISLE",
"url": "https://github.com/CCBR/CARLISLE/issues/59",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
151784273 | CLI commands don't return with exit status 1 when they fail
Just noticed that my ansible-haas script was still passing Travis CI even if the haas init_db command in the script section was failing (as it doesn't exist anymore). The CLI is exiting with 0 even if the command failed.
@knikolla , @gsilvis - how deep should the fix for this look for errors?
The easy fix for this exact problem (running a non-existent command) is to stick a sys.exit(1) before the help() in this:
if len(sys.argv) < 2 or sys.argv[1] not in command_dict:
# Display usage for all commands
help()
Fancier would be to stick a sys.exit(1) for non-success in the http status check in check_status_code(), though something could break if the do_*() call wasn't the last thing done. A quick glance makes me think that doesn't happen, though.
@henn I think that doesn't cover all the cases. Maybe better to put it here in the except block?
def cmd(f):
"""A decorator for CLI commands.
This decorator firstly adds the function to a dictionary of valid CLI
commands, secondly adds exception handling for when the user passes the
wrong number of arguments, and thirdly generates a 'usage' description and
puts it in the usage dictionary.
"""
@wraps(f)
def wrapped(*args, **kwargs):
try:
f(*args, **kwargs)
except TypeError:
# TODO TypeError is probably too broad here.
sys.stderr.write('Invalid arguements. Usage:\n')
help(f.__name__)
command_dict[f.__name__] = wrapped
Closing, since we just merged a fix.
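A minimal sketch of the merged idea — exiting non-zero from the except block of the command decorator — using a hypothetical greet command rather than the real HIL code:

```python
import sys
from functools import wraps

command_dict = {}

def cmd(f):
    """Register a CLI command; on wrong arguments, print usage and
    exit with status 1 so CI scripts see the failure."""
    @wraps(f)
    def wrapped(*args, **kwargs):
        try:
            return f(*args, **kwargs)
        except TypeError:
            sys.stderr.write('Invalid arguments. Usage:\n')
            sys.exit(1)  # non-zero exit status propagates to the shell
    command_dict[f.__name__] = wrapped
    return wrapped

@cmd
def greet(name):
    return 'hello ' + name
```

With this, a Travis script running a misused command fails the build instead of silently passing with exit status 0.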
| gharchive/issue | 2016-04-29T02:58:01 | 2025-04-01T04:32:21.095327 | {
"authors": [
"henn",
"knikolla",
"zenhack"
],
"repo": "CCI-MOC/hil",
"url": "https://github.com/CCI-MOC/hil/issues/583",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
271040143 | Style legal-agreement on install
(re WebAIM 1.4.8)
Resolved by 63772947054e61df3d907dd680de149288749dae
| gharchive/issue | 2017-11-03T16:32:53 | 2025-04-01T04:32:21.102460 | {
"authors": [
"AABoyles"
],
"repo": "CDCgov/MicrobeTRACE",
"url": "https://github.com/CDCgov/MicrobeTRACE/issues/107",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2353993437 | Track new rst files in git
Intentionally. At some point, we only wanted to keep track of index.rst, but when new tutorials are added, git status was not reporting new rst files generated by .pre-commit-rst-placeholder.sh.
Now that I think of it, perhaps we could modify the script to also add newly generated rst files via git add?
Originally posted by @gvegayon in https://github.com/CDCgov/multisignal-epi-inference/pull/158#discussion_r1637056061
@damonbayer, flagging this for your awareness. Is this still relevant?
Yes, but the tutorials will get overhauled when we implement https://github.com/CDCgov/PyRenew/issues/231.
| gharchive/issue | 2024-06-14T19:52:02 | 2025-04-01T04:32:21.105333 | {
"authors": [
"damonbayer",
"gvegayon"
],
"repo": "CDCgov/PyRenew",
"url": "https://github.com/CDCgov/PyRenew/issues/194",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1661061120 | Move PHINMS feed from Azure devops repo to our Github
right now, we're having some issues connecting our EDAV datafactory that pulls data from phinms into our Az storage accounts to be linked with our github,
Temporarily, the datafactory is hooked to https://dev.azure.com/xlr-team/cs-pipeline/_git/phinms-feed
We would like to migrate that to our github repo under cdcgov/data-exchange-hl7
I've sent an initial email to Ravi (Mettupalli, Ravindra (CDC/OCOO/OCIO/DSO) (CTR) bcv7@cdc.gov) and am awaiting a response
This should be done. Just need to review some details, about folders.
| gharchive/issue | 2023-04-10T16:56:18 | 2025-04-01T04:32:21.107635 | {
"authors": [
"mscaldas2012",
"ssk2cdcgov"
],
"repo": "CDCgov/data-exchange-hl7",
"url": "https://github.com/CDCgov/data-exchange-hl7/issues/801",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2336980752 | Refactor convolve scanner factory functions
Ready for review.
This PR:
Renames new_double_scanner to new_double_convolve_scanner for consistency.
Closes #147 by adding transform functionality to new_convolve_scanner (defaulting to the identity transform), and adding the identity transform as the default for both transforms in new_double_convolve_scanner.
Improves documentation for both scanner factories.
Adds a unit test for the convolve scanner functions (which previously did not have one).
Out of scope
More tests needed but I think that is a separate issue/PR.
Also separate: a factory for convolve scanners that carry along either the full history (more general / less efficient) or cumulative incidence (less general / more efficient). That is needed to implement susceptibility adjustments that depend on keeping track of cumulative incidence (e.g. the logistic_susceptibility_adjustment)
Should we check the size/shape/type of inputs?
@damonbayer are you talking about checking the inputs to the factories or about having the functions they produce/return perform input checks?
Things that occur to me:
Check that the double scanner tuples are of exactly length two. As written, it will error if they are less.
Check that the dists are ArrayLike and the transforms are callable.
Check that the pair of dists are of the same length (in the future, we might want to have the double scanner just use history subsets of the length of the longer array, and automagically sub-subset when taking the dot product with the shorter array, but that feels a bit implicit).
Stricter stuff that I'm more reluctant to implement:
Could try to check that the transforms are shape-preserving for jax arrays, but that's a bit tricky.
Could force the arrays that go into the dot products to be flat.
@dylanhmorris Sounds good to me.
| gharchive/pull-request | 2024-06-05T23:21:46 | 2025-04-01T04:32:21.112936 | {
"authors": [
"damonbayer",
"dylanhmorris"
],
"repo": "CDCgov/multisignal-epi-inference",
"url": "https://github.com/CDCgov/multisignal-epi-inference/pull/161",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1155847002 | added QC report local module
Generates a formatted csv file for each read SEPARATELY
Change this to use debug txt file output from faqcs to remove the dependency on pdftotext and reading pdf files.
| gharchive/pull-request | 2022-03-01T21:41:49 | 2025-04-01T04:32:21.114050 | {
"authors": [
"LeuThrAsp",
"mciprianoCDC"
],
"repo": "CDCgov/mycosnp-nf",
"url": "https://github.com/CDCgov/mycosnp-nf/pull/9",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1779974500 | Content update draft: Getting started
User story
As a ReportStream prospective/current user, I want the website content to be up-to-date, accurate and timely so I can trust the information to inform my decisions in my relationship with ReportStream.
Background & context
After completing basic outlines for key pages and IDing the new site structure, we need to create the actual content that will go in the wireframes.
Open questions
Working links
draft outline
Acceptance criteria
[ ] Update service blueprint with relevant learnings
[ ] Update the questions pool with any questions that came up
[x] Content plans discussed with content owner
[x] Drafted started and reviewed with content owner and/or SME
@jillian-hammer fyi
@audreykwr seen, thank you!
Draft in review so closing
| gharchive/issue | 2023-06-29T01:30:26 | 2025-04-01T04:32:21.121124 | {
"authors": [
"audreykwr",
"jillian-hammer"
],
"repo": "CDCgov/prime-reportstream",
"url": "https://github.com/CDCgov/prime-reportstream/issues/10170",
"license": "CC0-1.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1938936894 | Updates to dashboard description on managing your connection
User story
As a ReportStream prospective/current user, I want the website content to be up-to-date, accurate and timely so I can trust the information to inform my decisions in my relationship with ReportStream.
Background & context
With the new dashboard live, we need to edit content on Managing your connection to reflect that instead of Daily Data.
New content under "For public health agencies"
The data dashboard shows what you’ve received and allows you to dive into detail for each facility or report.
[View your dashboard] (and update that link to be the dashboard instead of daily data)
New content on the card at the top, instead of "View Daily Data" make "View data dashboard" (and update that link to be the dashboard instead of daily data)
Open questions
When we might need a different solution due to it being a COVID-only dashboard and a few users still accessing Daily Data.
Working links
Acceptance criteria
[x] Content updated on card at the top and in the monitoring tools section as described above
[ ] Change the image that says "Daily Data" to the new image provided by design
@chris-kuryak FYI, a small site fix Penny helped flag
@jillian-hammer Is there a new card image for replacing the one that says "Daily Data"?
new image asset below:
Private Zenhub Image
| gharchive/issue | 2023-10-11T23:49:12 | 2025-04-01T04:32:21.126060 | {
"authors": [
"audreykwr",
"chris-kuryak",
"jillian-hammer"
],
"repo": "CDCgov/prime-reportstream",
"url": "https://github.com/CDCgov/prime-reportstream/issues/11751",
"license": "CC0-1.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1746427056 | Internal Discovery
User story
As a Receiver, I need to configure my setting so that I can ensure that I am receiving the data I want from Senders.
Background & context
This discovery process seeks to determine the range of variations in how STLTs might seek to configure their ReportStream connection. This will allow us to determine the best approach to efficiently support STLTs in establishing custom configurations for each of their unique contexts. Discovery will include an analysis of a range of STLTs reporting policies, interviews with STLT representatives, and information gathering from internal subject matter experts. This information will be triangulated to inform engineering and product decisions on how best to meet our STLTs' needs.
Open questions
What are the current customization options available to Receivers?
What are the most common customization options?
For each option, what are the different configurations that are available?
How significant is the variation in terms of STLT collection requirements?
What does this variation suggest in terms of designing a process or feature to configure UP?
What is our users’ level of comfort with self-service configurations?
What is technically feasible in terms of self-service configurations?
Working links
Flu Pilot ReportStream Outreach Plan
Full ELR Technical Questions Sheet
UP vs. Old Pipeline
Self-service Discovery Plan
Discovery Mural
ReportStream CSV and HL7 Field Requirements (version 3.0)
Onboarding to RS: Technical Settings Questions
UP Technical Documentation Skeleton
Acceptance criteria
[ ] Update service blueprint with relevant learnings
[ ] Update the questions pool with any questions that came up
[ ] Written synthesis of discovery findings
@lizzieamanning for your review
Up to date mural
Discovery is done pending STLT research. The mural in the comment above contains all up to date discover information. Will begin developing a plan for STLT research.
Research Plan in Progress
| gharchive/issue | 2023-06-07T18:08:22 | 2025-04-01T04:32:21.135063 | {
"authors": [
"jakefishbein-navapbc"
],
"repo": "CDCgov/prime-reportstream",
"url": "https://github.com/CDCgov/prime-reportstream/issues/9757",
"license": "CC0-1.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1189873761 | Add offboarding checklist template
Pull reviewers stats
Stats for the last 30 days:
| User | Total reviews | Median time to review | Total comments |
| --- | --- | --- | --- |
| carlosfelix2 | 24 | 2h 55m | 97 |
| MauriceReeves-usds | 24 | 58m | 12 |
| jimduff-usds | 17 | 4h 29m | 9 |
| kevinhaube | 16 | 50m | 14 |
| whytheplatypus | 14 | 18h 33m | 21 |
| TurkeyFried | 11 | 1h 37m | 2 |
| TomNUSDS | 11 | 21m | 17 |
| cwinters-usds | 11 | 1h 39m | 5 |
| sethdarragile6 | 10 | 1h 22m | 3 |
| ahay-agile6 | 9 | 1h 7m | 0 |
| jbiskie | 9 | 1h 11m | 3 |
| jorg3lopez | 7 | 1h 19m | 5 |
| clediggins-usds | 7 | 25m | 0 |
| oslynn | 5 | 20m | 1 |
| brick-green-agile6 | 5 | 2h 16m | 3 |
| rhood23699 | 4 | 1d 15h 28m | 0 |
| snesm | 3 | 1d 3h 16m | 1 |
| bgantick | 2 | 1h 9m | 3 |
| JosiahSiegel | 2 | 43m | 0 |
| acoushawk | 2 | 39m | 0 |
| meckila | 2 | 8h 26m | 9 |
| jh765 | 2 | 2h 6m | 8 |
| jeremy-page | 1 | 18h 23m | 0 |
Oh, other task ideas:
Reassign any GH issues, PRs to others.
Resolve if there are active GH branches that need to be kept and delete unused ones.
Are there any ongoing email/slack threads with external agencies/states that need picking up by someone else.
| gharchive/pull-request | 2022-04-01T14:24:01 | 2025-04-01T04:32:21.172945 | {
"authors": [
"TomNUSDS",
"bgantick"
],
"repo": "CDCgov/prime-reportstream",
"url": "https://github.com/CDCgov/prime-reportstream/pull/4989",
"license": "CC0-1.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1377757549 | 🛑 CERIT-SC User Documentation is down
In b448c80, CERIT-SC User Documentation (https://docs.cerit.io) was down:
HTTP code: 0
Response time: 0 ms
Resolved: CERIT-SC User Documentation is back up in b5164df.
| gharchive/issue | 2022-09-19T10:45:11 | 2025-04-01T04:32:21.233105 | {
"authors": [
"c3r1t-b0t"
],
"repo": "CERIT-SC/uptime",
"url": "https://github.com/CERIT-SC/uptime/issues/54",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1404174284 | Is it possible to mount the CESS client as a filesystem on the Linux VFS
Hi, I am a new fan of decentralized storage. These days, I am working on a BeeGFS cache system. In my opinion, I believe that decentralized storage will become a filesystem which can be mounted on the current computing stack, for example Linux. Moreover, we can take advantage of Apache Arrow or other warehouse components to build a decentralized warehouse.
But the question is whether CESS can be mounted as a filesystem on the VFS. I am willing to contribute this feature to CESS if it is possible.
Thanks; looking forward to your reply.
@g302ge Yes, this is a great idea. At present, cess still relies on the Linux file system as its storage. If you can contribute this function to cess, we welcome you very much, and thank you very much for your contribution to the open source storage of cess. You can directly submit your pr.
Sure, I will work for this soon
| gharchive/issue | 2022-10-11T07:56:52 | 2025-04-01T04:32:21.246280 | {
"authors": [
"AstaFrode",
"g302ge"
],
"repo": "CESSProject/cess-portal",
"url": "https://github.com/CESSProject/cess-portal/issues/12",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1186154218 | Task: Add loading content to Concentrator list page
Feature #509
There seems to be an inconsistent behaviour between DeviceList and ConcentratorList pages.
Despite the pages having the exact same code, one is showing loading progression, whereas the second one isn't.
I'm not sure what is happening exactly, but I may have found an explanation here: https://github.com/MudBlazor/MudBlazor/pull/3711
| gharchive/issue | 2022-03-30T09:25:06 | 2025-04-01T04:32:21.294574 | {
"authors": [
"audserraCGI",
"kbeaugrand"
],
"repo": "CGI-FR/IoT-Hub-Portal",
"url": "https://github.com/CGI-FR/IoT-Hub-Portal/issues/522",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
877268176 | Allow vertical spacing to be set on rows.
I'm using a MultiSelectChipField with scroll set to false which means items flow to multiple rows when required. I can use itemBuilder to control the horizontal spacing between items but I can't find a way to control the vertical spacing between rows.
Would it be possible to add a way to set this?
It turns out my issue was due to some extra padding on a child I was adding. I've now solved the issue although it may still be useful to be able to set the spacing between items.
| gharchive/issue | 2021-05-06T08:59:43 | 2025-04-01T04:32:21.299310 | {
"authors": [
"trilby-gavin"
],
"repo": "CHB61/multi_select_flutter",
"url": "https://github.com/CHB61/multi_select_flutter/issues/36",
"license": "BSD-2-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
1038003288 | US 01.08.02 [NEW]
Story
As a doer, I want to manually reorder habits on my list of habits.
Implementation
Be able to reorder and store habits in the habit list. Need to make sure the stored changes are ordered in the way the user left them.
finished
| gharchive/issue | 2021-10-28T01:54:53 | 2025-04-01T04:32:21.337969 | {
"authors": [
"Napkinzz",
"wei-dey"
],
"repo": "CMPUT301F21T40/Routines",
"url": "https://github.com/CMPUT301F21T40/Routines/issues/41",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1364642637 | adding properties panel
https://github.com/CODAIT/nlp-editor/issues/94
Add Global Editor Properties
General panel contains Module Name
Language panel contains action button to invoke language selection modal
Fixes CodeQL integration
created the following issues Senthil
https://github.com/CODAIT/nlp-editor/issues/96
https://github.com/CODAIT/nlp-editor/issues/97
https://github.com/CODAIT/nlp-editor/issues/98
| gharchive/pull-request | 2022-09-07T13:10:35 | 2025-04-01T04:32:21.371310 | {
"authors": [
"JesusGuerrero"
],
"repo": "CODAIT/nlp-editor",
"url": "https://github.com/CODAIT/nlp-editor/pull/95",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2465131076 | generalise support for cice filenames
Per https://github.com/COSIMA/access-om3/issues/190#issuecomment-2261753828 allow for support of more variety in filenames and don't overwrite existing files
Closes #21
The tests pass on gadi and fail on the github runner !
I think this is ready to go @minghangli-uni
| gharchive/pull-request | 2024-08-14T07:44:03 | 2025-04-01T04:32:21.379120 | {
"authors": [
"anton-seaice"
],
"repo": "COSIMA/om3-scripts",
"url": "https://github.com/COSIMA/om3-scripts/pull/22",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1467482086 | Support for Matchfile Version 1.0.0
This pull request contains a complete overhaul of the import and export capabilities for matchfiles according to the new specification for version 1.0.0. This pull request contains the following changes
add the following files
matchfile_base.py: Complete re-definition of the base (abstract) classes for match lines.
matchfile_utils.py contains utilities for parsing and formatting the attributes for the different match lines
matchfile_v0.py contains the definitions of the match lines in versions < 1.0.0 (e.g., 5.0).
matchfile_v1.py contains the definition of match lines in versions >= 1.0.0 (up to but not including a future 2.0.0)
importmatch.py and exportmatch.py have been re-structured to use the new definitions. The load_match method supports loading all known match files, while save_match will just export versions starting from 1.0.0. Support for exporting older versions will only be added if there is enough interest.
The attributes of the matchlines have been renamed to follow the specification published in the paper presented at the MEC2022 conference.
Extensive tests for parsing matchlines, add test for exporting matchfiles (and updated example match file in tests to version 1.0.0)
Addresses the following issues:
#178: An "n" is added at the beginning of note ids of performed notes if there is none.
#89: The new MatchKeySignature class can parse and generate the key signatures according to multiple formats (tested on all key signatures from the Magaloff, Zeilinger, Batik and Vienna 4x22)
#86: The save_match method does not make use of measure numbers from the part, but instead creates new numbers sequentially according to the starting time of the measure
Codecov Report
Base: 81.53% // Head: 85.15% // Increases project coverage by +3.61% :tada:
Coverage data is based on head (b20a0a6) compared to base (f446753).
Patch coverage: 93.51% of modified lines in pull request are covered.
Additional details and impacted files
@@ Coverage Diff @@
## develop #192 +/- ##
===========================================
+ Coverage 81.53% 85.15% +3.61%
===========================================
Files 63 68 +5
Lines 10654 12046 +1392
===========================================
+ Hits 8687 10258 +1571
+ Misses 1967 1788 -179
| Impacted Files | Coverage Δ |
| --- | --- |
| partitura/io/importmusicxml.py | 73.96% <ø> (ø) |
| partitura/score.py | 79.57% <0.00%> (-0.20%) :arrow_down: |
| partitura/io/matchlines_v1.py | 82.38% <82.38%> (ø) |
| partitura/io/matchlines_v0.py | 85.90% <85.90%> (ø) |
| partitura/io/exportmatch.py | 83.04% <88.79%> (+71.47%) :arrow_up: |
| partitura/utils/music.py | 74.91% <91.66%> (+1.10%) :arrow_up: |
| partitura/io/importmatch.py | 81.92% <93.97%> (+9.08%) :arrow_up: |
| partitura/io/matchfile_base.py | 95.65% <95.65%> (ø) |
| partitura/io/matchfile_utils.py | 97.43% <97.43%> (ø) |
| tests/test_match_export.py | 100.00% <100.00%> (ø) |

... and 5 more
:umbrella: View full report at Codecov.
:loudspeaker: Do you have feedback about the report comment? Let us know in this issue.
| gharchive/pull-request | 2022-11-29T05:36:48 | 2025-04-01T04:32:21.405744 | {
"authors": [
"codecov-commenter",
"neosatrapahereje"
],
"repo": "CPJKU/partitura",
"url": "https://github.com/CPJKU/partitura/pull/192",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1436926453 | Example for maximal propagate set of constraints
Calculating maximal consequence in CPMpy.
I think adding assumption variables with an instantiated solver is a bit overkill for this example, I like to keep it quite simple and readable, yet functional
The algorithm does not give the intersection of all solutions, but instead it gives the union of all possible values for all variables. E.g., for x \in {0,1,2,3} and constraint x>1 the algorithm will reply x \in {2,3}, while the intersection of solutions would be x \not \in {0,1}. The difference is trivial, but both interpretations lead to a different algorithm. The intersection one would add one disjunction at each iteration, while the current adds a disjunction of conjunctions, so arguably the intersection one is more efficient. But it requires finite domains. We can discuss this later (hard to explain in a comment), the algorithm is fine for now.
Ah yes I see, I added the alternative to the file as well. This will indeed be more efficient probably. Should we keep both versions in the file or just pick one?
Surprisingly, the disjuctive (propagate_alt) version seems to be slower compared to the original method. I think it has partly to do with the size of the disjunction in early iterations of the algorithm. With large domains it is huge, while in the original version, the disjunction starts out small and gets bigger over time. So I would suggest to just keep the first version if that's ok with you?
So I would suggest to just keep the first version if that's ok with you?
That's ok with me.
One last thing is the signature, which is currently
def maximal_propagate(vars=None, constraints=[], solvername="ortools"):
I propose:
def maximal_propagate(constraints, vars=None, solver="ortools"):
Because required arguments should go before optional ones (and not have default values); and because .solve() has 'solver=str' as argument. So for consistency I would propose it here as well. I did the same when creating the 'mus' tool, there too it is is 'solver=str' (I did hesitate about it)
Finally,
should this be part of our 'tools' too, instead of an advanced example?
We use it in our explanation work, it will be needed for a stepwise-explain advanced example or tool, so maybe just make it a tool straight away?
(and would it then have both implementations? It could...
Algorithmically and interface-wise, the PR is in good shape. I cannot see any test of the functionality. Should this still be added or am I missing something?
Looks great, I guess some advanced examples could now be changed to use this?
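For intuition, the "union over all solutions" semantics can be shown with a solver-free brute-force sketch (the actual tool queries a CP solver and blocks already-seen values instead of enumerating the full Cartesian product):

```python
from itertools import product

def maximal_propagate(domains, constraint):
    """Union of values each variable takes in *some* solution.

    domains: dict mapping variable name -> iterable of candidate values
    constraint: predicate over an assignment dict {name: value}
    Brute-force stand-in for the solver-based loop discussed above.
    """
    names = list(domains)
    propagated = {n: set() for n in names}
    for combo in product(*(domains[n] for n in names)):
        assignment = dict(zip(names, combo))
        if constraint(assignment):
            for n in names:
                propagated[n].add(assignment[n])
    return propagated

# x in {0,1,2,3} with constraint x > 1 propagates to {2, 3}
result = maximal_propagate({'x': range(4)}, lambda a: a['x'] > 1)
```

The solver-based version reaches the same fixpoint by repeatedly solving and adding a disjunction requiring at least one variable to take a not-yet-seen value, which avoids enumerating every solution.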
| gharchive/pull-request | 2022-11-05T09:41:41 | 2025-04-01T04:32:21.412933 | {
"authors": [
"IgnaceBleukx",
"JoD",
"tias"
],
"repo": "CPMpy/cpmpy",
"url": "https://github.com/CPMpy/cpmpy/pull/147",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1504245584 | 191 mus tool crashes with only 1 soft constraint
from cpmpy import *
from cpmpy.tools.mus import mus
bv = boolvar()
mus(soft=[bv], hard=[~bv]) # crashes
Source of the error is assum.implies(candidates). When assum is a single boolvar and candidates is a list (of length 1), it fails.
We should check, when invoking bv.implies(rhs), that rhs is either a single expression or a list of length 1, and extract this value, I think. Or change to all(candidates) in the mus tool, but this might have some side effect by itself... to check.
When assum is a single boolvar, it now converts the assumption variable to a NDVarArray of length 1.
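The conversion described can be pictured as a small normalization step; this is only an illustrative sketch, not the actual CPMpy code:

```python
def as_constraint_list(soft):
    """Wrap a single expression into a one-element list so that the
    rest of the MUS code can uniformly iterate over soft constraints."""
    if isinstance(soft, (list, tuple)):
        return list(soft)
    return [soft]

assert as_constraint_list("bv") == ["bv"]
assert as_constraint_list(["a", "b"]) == ["a", "b"]
```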
Added test case for naive mus and solved issue with unit soft constraint.
Will merge
| gharchive/pull-request | 2022-12-20T09:25:34 | 2025-04-01T04:32:21.415282 | {
"authors": [
"IgnaceBleukx",
"sourdough-bread"
],
"repo": "CPMpy/cpmpy",
"url": "https://github.com/CPMpy/cpmpy/pull/196",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
213449535 | Show boost criteria in "Get X Boost" CS quests
My proposal is to simply add the inspiration/eureka for the corresponding tech boost - for example, "$CS wants the Eureka for Feudalism" -> "$CS wants the Eureka for Feudalism (Create 6 farms)"
@brianschamel @chaorace I will look into this one later next week. This has also been bothering me.
@Remolten Any news?
@chaorace None yet. I'll look into it when I get time (maybe next week).
| gharchive/issue | 2017-03-10T20:54:23 | 2025-04-01T04:32:21.420455 | {
"authors": [
"Remolten",
"brianschamel",
"chaorace"
],
"repo": "CQUI-Org/cqui",
"url": "https://github.com/CQUI-Org/cqui/issues/385",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
437743442 | Fix endless loop in GPSPropagator.java
If x.getValue() and/or x0.getValue() is Double.NaN, this is an endless loop.
Thanks for the patch! Could you add a unit test?
Hi all,
Many thanks for finding and correcting the issue.
The GitHub repository is only a mirror repository for Orekit. In that respect, it is not really suited for pull requests.
Can you create an account on the Orekit GitLab repository ( https://gitlab.orekit.org/orekit/orekit ) ?
Then, create an issue on: https://gitlab.orekit.org/orekit/orekit/issues
Finally, send a merge request on the GitLab repository
Many thanks
Bryan
Hi JosefProbst,
Your patch does not compile on my computer. I have an error when I call the method .equals() with a "double" parameter: "Cannot invoke equals(double) on the primitive type double"
I moved the issue to the Orekit GitLab repository. We noticed that this endless loop can also occur in the KeplerianOrbit.java and FieldKeplerianOrbit.java classes.
Thanks again for finding the issue.
Bryan
The issue has been fixed in develop branch (https://gitlab.orekit.org/orekit/orekit/issues/544).
Thanks again for your contribution in reporting the bug. I noted your contribution in the change.xml file of Orekit.
Best regards,
Bryan
| gharchive/pull-request | 2019-04-26T16:07:00 | 2025-04-01T04:32:21.428921 | {
"authors": [
"BryanCazabonne",
"JosefProbst",
"wardev"
],
"repo": "CS-SI/Orekit",
"url": "https://github.com/CS-SI/Orekit/pull/21",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
239992601 | V0.0 kevin - all V0.0 changes
Merging to main V0.0 branch
DeveloperGuide.adoc is missing a newline at the end of the file.
Coverage remained the same at 89.093% when pulling 0945647dba1adc0cb74dd78d0f8b5c9c020fe587 on V0.0_kevin into b3b6a340a1e32442b408f86c57439b42504dc29d on V0.0.
Should we change "Main Success Scenario" to just "MSS"? We have both versions right now.
| gharchive/pull-request | 2017-07-02T06:32:11 | 2025-04-01T04:32:21.452833 | {
"authors": [
"coveralls",
"kevinLamKB",
"mattheuslee"
],
"repo": "CS2103JUN2017-T2/main",
"url": "https://github.com/CS2103JUN2017-T2/main/pull/7",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2640331482 | [General]: Semantic confusion on BB level
Contact Details
No response
Problem Description
Reviewing the set of collected and new (PR) building blocks, I think we have a too wide definition of building block - currently, building blocks range from extremely high-level/abstract concepts, to very specific implementations of single functionalities, to whole large (commercial) solutions that incorporate many actual building blocks.
In my opinion, this is at best confusing - at worst, it negates the value of the catalog that FEDERATE is trying to assemble.
There are probably a few approaches to address this - tbd in the group. My proposal:
Define BBs to really only be "specific and abstract pieces of functionality that can be used to compose solutions or projects"
Pull out all solutions and implementation projects from the BB catalog into a separate structure ("implementations" or "solutions"), where the entries then have to reference all the abstract building blocks they are implementing
Hi @AnotherDaniel, I have understood BBs as an abstract umbrella term. To make it even more confusing, in my understanding a BB can also be an interface. Then you may have multiple BBs implementing this interface. Is that what you mean by "specific and abstract pieces of functionality..."?
But I agree with you. Maybe categorizing BBs into different structures may help. Maybe we need some kind of relationship model - consists of, is part of, ...; BB vs BB clusters?
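The proposed split, abstract building blocks versus concrete solutions that reference the BBs they implement, could be modelled minimally like this (Python sketch; every name here is hypothetical):

```python
# Hedged sketch of the catalog split proposed above: abstract building
# blocks vs. concrete solutions that must reference the BBs they use.
building_blocks = {
    "identity-mgmt": {"kind": "interface"},
    "oidc-login": {"kind": "functionality", "implements": ["identity-mgmt"]},
}
solutions = {
    "acme-platform": {"uses": ["oidc-login"]},
}

def abstract_bbs_for(solution):
    """Resolve a solution to the abstract BBs it ultimately builds on."""
    result = set()
    for bb in solutions[solution]["uses"]:
        result.add(bb)
        result.update(building_blocks[bb].get("implements", []))
    return result

assert abstract_bbs_for("acme-platform") == {"oidc-login", "identity-mgmt"}
```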
| gharchive/issue | 2024-11-07T08:57:47 | 2025-04-01T04:32:21.456361 | {
"authors": [
"AnotherDaniel",
"jankubovy"
],
"repo": "CSA-FEDERATE/Proposed-BuildingBlocks",
"url": "https://github.com/CSA-FEDERATE/Proposed-BuildingBlocks/issues/36",
"license": "CC0-1.0",
"license_type": "permissive",
"license_source": "github-api"
} |
374786137 | Use without gpu
Is it possible to use your implementation without a gpu? As I do not have a cuda supporting device I wondered what steps would be necessary to run at least the test.py on cpu only.
#117
| gharchive/issue | 2018-10-28T18:56:38 | 2025-04-01T04:32:21.458034 | {
"authors": [
"hangzhaomit",
"jub92"
],
"repo": "CSAILVision/semantic-segmentation-pytorch",
"url": "https://github.com/CSAILVision/semantic-segmentation-pytorch/issues/119",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
1387930024 | DM-5.3 Create the Database
Associated with Epic #9
As a member of the team, I want to be able to access the database through SQL Developer in VMWare, so that I can see the data related to the project.
Acceptance Criteria:
The database is created in SQL Developer.
Everyone on the team has access to the database.
The database includes the entities: users, food items, and food ratings.
Tasks:
Go to SQL Developer in VMWare and make a new database
Create a password that everyone has
Add entities
| gharchive/issue | 2022-09-27T15:10:05 | 2025-04-01T04:32:21.466130 | {
"authors": [
"benhoeschen",
"eyouso"
],
"repo": "CSBSJU-CS330-F22/Dining-Menu",
"url": "https://github.com/CSBSJU-CS330-F22/Dining-Menu/issues/23",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
268473773 | ansible 2.4 preparation
Ansible 2.4 is here. Kill the warnings.
I think it would generally make sense to wait for Ansible 2.5 in order to give some time for users to upgrade to 2.4. Changes LGTM though.
CentOS 7 got Ansible 2.4.0 now, and it's pretty vocal in the warnings about import being deprecated from 2.4 onwards.
| gharchive/pull-request | 2017-10-25T16:57:19 | 2025-04-01T04:32:21.469950 | {
"authors": [
"lae",
"tiggi"
],
"repo": "CSCfi/ansible-role-cuda",
"url": "https://github.com/CSCfi/ansible-role-cuda/pull/15",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
874315906 | WIP Create sd_desktop.md
SD desktop chapter
There has been no development in this fork for more than a month, so I'm closing it. Also, the only change was adding one title. I propose starting the SD Desktop documentation from a fresh branch made from the latest master, as a branch and not as a fork. If there are any questions, please let me know!
| gharchive/pull-request | 2021-05-03T07:34:16 | 2025-04-01T04:32:21.471159 | {
"authors": [
"attesillanpaa",
"fmorelloCSC"
],
"repo": "CSCfi/csc-user-guide",
"url": "https://github.com/CSCfi/csc-user-guide/pull/932",
"license": "CC-BY-4.0",
"license_type": "permissive",
"license_source": "github-api"
} |
220115048 | set nodeSelector for metrics pods
Heapster needs to be run on infra-nodes to have access to node API.
The new openshift_metrics role in the installer supports setting
nodeSelectors, so we set them in the inventory, avoiding a manual step
after installation.
Looked at the code, had a short discussion over coffee about the context. Looks good to me.
| gharchive/pull-request | 2017-04-07T05:28:35 | 2025-04-01T04:32:21.472664 | {
"authors": [
"rlaurika",
"tourunen"
],
"repo": "CSCfi/pouta-ansible-cluster",
"url": "https://github.com/CSCfi/pouta-ansible-cluster/pull/56",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1723105524 | Front dev
Merging sprint2 frontend updates to main
Merge completed
| gharchive/pull-request | 2023-05-24T03:09:44 | 2025-04-01T04:32:21.473587 | {
"authors": [
"mickjeon"
],
"repo": "CSE110-Team17/cse110-sp23-group17",
"url": "https://github.com/CSE110-Team17/cse110-sp23-group17/pull/41",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2477884630 | Merge automation
Merge the automation branch with the main branch - the automation branch contains numerous bugfixes and improvements by Vit Sebela that were created in GitLab and moved first into a separate branch instead of overwriting the repo as it is... there might be some conflicts
It seems that it was almost conflict-free. The only conflict was encountered in the crusoe_observe/ansible/roles/configuration/templates/conf.ini file. I still want to try to run the CRUSOE components locally (which seems not as straightforward) before I merge it to main to verify whether it is actually working, but there should not be any problems, assuming the deployment was updated correctly in the automation branch.
The intermediate state ready to be merged is in automation_main_rebase branch.
Well, it seems that the automation branch is broken. Compared to the author's repository https://gitlab.fi.muni.cz/xsebela3/CRUSOE it seems not all changes were put into this branch. The first major indicator was the missing neo4j-rest-wrapper.conf file, which is required by the neo4j role here but was not copied to this repo. There were already small inconsistencies before that. No Vagrant + Ansible run has finished correctly so far.
It seems merging the branches is not enough; I have to go through Vit Sebela's repository again and pick up the missing things and inconsistencies.
Merged to crusoe_update branch together with all other changes, will continue in a separate issue
| gharchive/issue | 2024-08-21T12:12:42 | 2025-04-01T04:32:21.484682 | {
"authors": [
"husak",
"michal-cech"
],
"repo": "CSIRT-MU/CRUSOE",
"url": "https://github.com/CSIRT-MU/CRUSOE/issues/1",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1077678851 | board page interactions
[ ] initial dice roll - call init for now, roll_count -= 1
[ ] second & third dice roll - need to know which dice are selected and reroll accordingly, rollcount -1
[x] save score - know which button was pressed, save corresponding field in score box (check that field is null first, throw error if not null), rounds remaining -1
I think that's it for interactions, everything else can be handled on the page
I'll also make sure that these function calls generate check values to make sure the player is able to do what they are trying to do (not cheating)
I have the save score figured out, just need to fill in the rest of the fields
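The save-score rule described in the tasks (check that the field is null first, throw an error if not) could be sketched as follows; the field names are illustrative:

```python
def save_score(scorecard, field, value):
    """Save a score only if the field has not been used yet;
    otherwise raise, matching the 'throw error if not null' step."""
    if scorecard.get(field) is not None:
        raise ValueError(f"field {field!r} already scored")
    scorecard[field] = value

card = {"ones": None, "twos": None}
save_score(card, "ones", 3)
print(card)  # -> {'ones': 3, 'twos': None}
```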
| gharchive/issue | 2021-12-12T00:03:21 | 2025-04-01T04:32:21.533265 | {
"authors": [
"abullockuno"
],
"repo": "CYBR8470/Semester-Project",
"url": "https://github.com/CYBR8470/Semester-Project/issues/47",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
211203383 | Script to release ok-client
Resolves #264, #276.
Test plan:
Clone into a new github repo for testing
Comment out the python setup.py sdist upload
Edit GITHUB_REPO to point to the testing repo
Edit OK_SERVER_URL to point to localhost:5000
Try to "release" the fake client. Should see
Version bump pushed to github
GitHub release with correct binary
Updated client version on localhost:5000
It works https://github.com/okpy/ok-client/releases/tag/v1.10.3
Feel free to try it out there. You'll need to have the server running locally.
This doesn't use the virtualenv python and my python3 is at /usr/local/bin/python3 - seems like some kind of virtualenv bug & python3.6 (See issue 962 in https://github.com/pypa/virtualenv)
Being able to use python (or python3) release.py directly would be nice so that isn't as much of an issue for other people. Currently we can't because
File "release.py", line 73, in <module>
os.chdir(os.path.dirname(__file__))
FileNotFoundError: [Errno 2] No such file or directory: ''
I actually haven't been able to get the script smoothly because of the python issues (and the virtualenv issues). I did get to upload a release though.
Sanity check to keep the version number starting with v1. instead of just v
If we want to change the major version I think it's reasonable to change the release script. It would be super easy for me to accidentally release v11.02 instead of v1.10.2
Also I think there's a dependence on running python3 setup.py develop to get the ok-publish command going if it wasn't before.
From the diff it seems like that part hasn't been added?
This does use the virtualenv python3. /usr/bin/env python3 invokes the first executable named python3 on your PATH, which will be the virtualenv python3 if you've activated it. I've fixed running just python3 release.py though.
I don't want to hardcode v1., since it's conceivable that we'll want to release version 2 sometime. How about the major version number should not increase by more than 1?
I don't want to hardcode v1., since it's conceivable that we'll want to release version 2 sometime. How about the major version number should not increase by more than 1?
Sure.
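The agreed sanity check, where the major version may stay the same or increase by at most one, could be sketched like this (illustrative, not the actual release script code):

```python
def check_version_bump(old, new):
    """Reject accidental jumps such as v1.10.2 -> v11.02."""
    old_major = int(old.lstrip("v").split(".")[0])
    new_major = int(new.lstrip("v").split(".")[0])
    if not old_major <= new_major <= old_major + 1:
        raise ValueError(f"suspicious major version bump: {old} -> {new}")

check_version_bump("v1.10.2", "v1.10.3")  # fine
check_version_bump("v1.10.2", "v2.0.0")   # fine
try:
    check_version_bump("v1.10.2", "v11.02")
except ValueError as err:
    print(err)  # suspicious major version bump: v1.10.2 -> v11.02
```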
I think I've addressed everything. Any other concerns before I merge?
Nope feel free to merge & then release with this script :)
| gharchive/pull-request | 2017-03-01T20:47:31 | 2025-04-01T04:32:21.622652 | {
"authors": [
"Sumukh",
"knrafto"
],
"repo": "Cal-CS-61A-Staff/ok-client",
"url": "https://github.com/Cal-CS-61A-Staff/ok-client/pull/288",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
407278610 | Please ensure your intercomSettings object is formatted correctly: Missing App ID.
Versions
Angular: 7.0.2
Angular CLI: 7.0.4
ng-intercom: 7.0.0-beta.1
Node.JS: 10.1.0
NPM: 6.7.0
Description
A warning is thrown despite everything working correctly:
Please ensure your intercomSettings object is formatted correctly: Missing App ID.
I spoke to Intercom and they confirmed they do not see this error on their logs.
I've also just started seeing this having had it working for a while.
I've also just started seeing this and it appeared without touching any intercome related code.
@eyalhakim can you put the output from the console in the issue?
This appears to be a change from Intercom, nothing we can really do here until intercom publishes their updated documentation.
@scott-wyatt here is the output:
frame.b9364e05.js:1 Please ensure your intercomSettings object is formatted correctly: Missing App ID.
safeConsoleWarn @ frame.b9364e05.js:1
t.createOrUpdateUser @ frame.b9364e05.js:1
t.createOrUpdateUser @ frame.b9364e05.js:1
update @ frame.b9364e05.js:1
(anonymous) @ frame.b9364e05.js:1
processApiQueue @ frame.b9364e05.js:1
(anonymous) @ frame.b9364e05.js:1
p @ frame.b9364e05.js:1
(anonymous) @ frame.b9364e05.js:1
p @ frame.b9364e05.js:1
(anonymous) @ frame.b9364e05.js:1
p @ frame.b9364e05.js:1
m @ frame.b9364e05.js:1
(anonymous) @ frame.b9364e05.js:1
(anonymous) @ frame.b9364e05.js:1
Solved this for now by removing ng-intercom, adding the intercom snippet to index.html and using the following in app.component:
(<any>window).Intercom('boot', { app_id: *** });
Further to this, I subscribed to the currently logged-in user as well as router events to handle page changes and sending user details, by calling the following:
(<any>window).Intercom('update', { email: 'email@here.com' ... etc });
Shut down intercom by calling:
(<any>window).Intercom('shutdown');
Seems to work.
I have wrapped this all in a service, but I hope this helps anyone in a similar situation whilst this module is being updated.
Sorry guys sorry for breaking this!
Simply adding window.intercomSettings with appId will remove the warning.
I will remove the warning meanwhile to avoid any false positives.
Thanks @danielhusar, we will add window.intercomSettings to the source and push as soon as possible.
| gharchive/issue | 2019-02-06T15:05:54 | 2025-04-01T04:32:21.643767 | {
"authors": [
"JohnnyTMD",
"alexyoungs",
"danielhusar",
"eyalhakim",
"scott-wyatt"
],
"repo": "CaliStyle/ng-intercom",
"url": "https://github.com/CaliStyle/ng-intercom/issues/86",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
Godot 3.1
It seems many projects/tutorials listed do not work properly in 3.1+.
I see a few links listed as 3.1, which is great, but the majority I have looked at are pre-3.1, which can be a waste of time to watch or download only to realize it won't work in 3.1.
All this to say: if you could add a version to the link somewhere, it would help tremendously.
thanks!
Solved by #100.
| gharchive/issue | 2019-03-28T19:35:53 | 2025-04-01T04:32:21.646278 | {
"authors": [
"Calinou",
"neosin"
],
"repo": "Calinou/awesome-godot",
"url": "https://github.com/Calinou/awesome-godot/issues/94",
"license": "CC-BY-4.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2260995669 | [BUG] The draggable component will display an overlay exception when it is dragged again after it is destroyed and then reappears.
First of all, thank you for your open source work
the following is reproduction
data class ExampleData(val id: Int)

@Composable // required: this function uses remember { mutableStateOf(...) }
fun TestPage() {
var show by remember { mutableStateOf(true) }
var items by remember { mutableStateOf((1..5).map { i -> ExampleData(i) }) }
val lazyListState = rememberLazyListState()
val reorderableLazyColumnState = rememberReorderableLazyColumnState(lazyListState) { from, to ->
items = items.toMutableList().apply {
add(to.index, removeAt(from.index))
}
}
Column(
modifier = Modifier.fillMaxSize()
) {
Spacer(modifier = Modifier.height(40.dp))
TextButton(onClick = { show = !show }) {
Text("Toggle-$show")
}
HorizontalDivider()
if (show) {
LazyColumn(
state = lazyListState,
modifier = Modifier.fillMaxHeight(),
verticalArrangement = Arrangement.spacedBy(4.dp),
) {
itemsIndexed(items, { _, item -> item.id }) { index, item ->
ReorderableItem(
reorderableLazyColumnState,
key = item.id,
) {
val interactionSource = remember { MutableInteractionSource() }
Card(
onClick = {
},
modifier = Modifier
.longPressDraggableHandle(
interactionSource = interactionSource,
onDragStopped = {
},
)
.fillMaxWidth()
.padding(vertical = 3.dp, horizontal = 8.dp),
interactionSource = interactionSource,
) {
Text(
"$index. id=${item.id}",
modifier = Modifier.padding(16.dp)
)
}
}
}
}
}
}
}
Oh interesting. I am able to recreate this on my end. I am looking into it.
This will result in the loss of scrolling status when redisplaying
This is not a bug with this library. If you remove rememberReorderableLazyColumnState and ReorderableItem, and add Modifier.animateItemPlacement (which this library uses) to the Card, then swap two items in the list, it won't work. Consider saving the scroll position separately. Feel free to report this bug to Google's issue tracker: https://issuetracker.google.com/issues?q=status:open componentid:612128&s=created_time:desc
In Compose Foundation v1.7.0-alpha07 with the new Modifier.animateItem() this is fixed.
ok, thanks
| gharchive/issue | 2024-04-24T10:48:41 | 2025-04-01T04:32:21.665697 | {
"authors": [
"Calvin-LL",
"lisonge"
],
"repo": "Calvin-LL/Reorderable",
"url": "https://github.com/Calvin-LL/Reorderable/issues/27",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
325508035 | StatisticalResult significance can be misleading
I'm not sure what the rationale behind dropping StatisticalResult.is_significant was, but now things are bit confusing:
is_significant is still referenced in docs/Examples.rst.
utils.significance_code() doesn't take the alpha into account, so results that are starred by StatisticalResult.print_summary() don't account for multiple comparisons when the Bonferroni correction is used.
Hm, you make some good points. Let me investigate this further. I may want to cut a new release to fix this.
I'm curious: how would you expect the stars to work, given a Bonferroni correction?
Good question. I wonder if stars make sense when the alpha isn't 0.95. A quick search yielded these results – seems like people just use words:
https://depts.washington.edu/psych/files/writing_center/stats.pdf
https://stats.stackexchange.com/questions/325847/bonferroni-correction-vs-asterisks-signifying-significance-levels
By the way, having StatisticalResults.is_significant would be useful in any case. For example, I use the following code to summarise the results of a pairwise_logrank_test call as the percentage of significant pairs. Being able to write res.is_significant rather than res.p_value < 1 - res.alpha would be nice. :slightly_smiling_face:
results = pairwise_logrank_test(event_durations, groups, event_observed).applymap(
lambda res: 1.0 if res and res.p_value < 1 - res.alpha else 0.0
)
num_pairs = len(results) * (len(results) - 1) // 2
return len(results), num_pairs, results.sum().sum() / 2 / num_pairs
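For reference, the requested convenience property could look roughly like this. This is a hypothetical sketch, not the actual lifelines class; note that in this older API, alpha is a confidence level such as 0.95, so the significance threshold is 1 - alpha:

```python
class StatisticalResult:
    """Minimal stand-in for the lifelines result object (illustrative)."""

    def __init__(self, p_value, alpha=0.95):
        self.p_value = p_value
        self.alpha = alpha  # confidence level, e.g. 0.95

    @property
    def is_significant(self):
        # alpha is a confidence level here, so the threshold is 1 - alpha
        return self.p_value < 1 - self.alpha

assert StatisticalResult(0.01).is_significant
assert not StatisticalResult(0.10).is_significant
```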
| gharchive/issue | 2018-05-23T00:28:37 | 2025-04-01T04:32:21.681460 | {
"authors": [
"CamDavidsonPilon",
"yanirs"
],
"repo": "CamDavidsonPilon/lifelines",
"url": "https://github.com/CamDavidsonPilon/lifelines/issues/465",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
1488686442 | [Bug] Build failure on debian
Hi, I'm in the process of packaging lfs-core in order to get lfs into debian. it build fine, but the tests in src/lib.rs fail:
running 1 test
test src/lib.rs - (line 4) ... FAILED
failures:
---- src/lib.rs - (line 4) stdout ----
error[E0061]: this function takes 1 argument but 0 arguments were supplied
--> src/lib.rs:6:18
|
5 | let mut mounts = lfs_core::read_mounts().unwrap();
| ^^^^^^^^^^^^^^^^^^^^^-- an argument of type `&ReadOptions` is missing
|
note: function defined here
help: provide the argument
|
5 | let mut mounts = lfs_core::read_mounts(/* &ReadOptions */).unwrap();
| ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
error[E0599]: no method named `is_some` found for enum `Result` in the current scope
--> src/lib.rs:8:27
|
7 | mounts.retain(|m| m.stats.is_some());
| ^^^^^^^ help: there is an associated function with a similar name: `is_ok`
error: aborting due to 2 previous errors
rustc 1.62.1
Right. I'll fix that.
Compilation fixed. Thanks for the report.
I don't know your packaging process. Do you need me to fix the import in lfs to ensure the new version is checked?
thanks. I can just import the diff as patch :)
| gharchive/issue | 2022-12-10T17:01:06 | 2025-04-01T04:32:21.731039 | {
"authors": [
"Canop",
"werdahias"
],
"repo": "Canop/lfs-core",
"url": "https://github.com/Canop/lfs-core/issues/3",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1826545743 | Add an Accessibility Statement
Statement to go in the footer of the website outlining the project's accessibility guidelines.
What will the guidelines be based on?
My suggestion would be basing them on the W3C accessibility guidelines (WCAG), seeing as they're generally accepted to be some of the most comprehensive.
| gharchive/issue | 2023-07-28T14:38:18 | 2025-04-01T04:32:21.732362 | {
"authors": [
"CanopusFalling",
"jackcarey"
],
"repo": "CanopusFalling/Queer-Calendar-Sheffield",
"url": "https://github.com/CanopusFalling/Queer-Calendar-Sheffield/issues/23",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2241877909 | guernseyed, fireweed
This pull request was generated by the 'mq' tool
/trunk merge
| gharchive/pull-request | 2024-04-14T03:04:48 | 2025-04-01T04:32:21.748841 | {
"authors": [
"tong-canva"
],
"repo": "Canva/mergequeue-trunk",
"url": "https://github.com/Canva/mergequeue-trunk/pull/1876",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
125927259 | Add check script for marathon DNS for traefik
Fixes #595
code looks good to me
| gharchive/pull-request | 2016-01-11T11:53:42 | 2025-04-01T04:32:21.749554 | {
"authors": [
"enxebre",
"tayzlor"
],
"repo": "Capgemini/Apollo",
"url": "https://github.com/Capgemini/Apollo/pull/616",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
1330337960 | User provisioning from KC to AAD
Hello. Is it possible to provision users FROM Keycloak TO Azure AD using this plugin? Or does it work from AAD to KC only?
Hey,
currently the implementation offers only one way: from AAD to KC.
See also: https://github.com/Captain-P-Goldfish/scim-for-keycloak/issues/3
Keycloak is currently in a state of drastic rebuilding. There are also alternative user storage implementations that will be available soon that might allow provisioning in the other direction as well, but I haven't checked the details yet.
| gharchive/issue | 2022-08-05T19:55:03 | 2025-04-01T04:32:21.762814 | {
"authors": [
"Captain-P-Goldfish",
"jkovaliov"
],
"repo": "Captain-P-Goldfish/scim-for-keycloak",
"url": "https://github.com/Captain-P-Goldfish/scim-for-keycloak/issues/57",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
A fairly urgent question about redux
When one page does a pull-to-refresh, every other page wrapped in a StoreBuilder also executes build. How can I avoid all this redundant rendering??
I dispatch on page A to change page B's data, and every page wrapped in a StoreBuilder gets built again...
Hmm, it's hard to say just from this. The main point is not to bind the whole store where you use it, and on update to only update a specific object inside the store.
Please, could you split the single store apart, so that after a dispatch only the places that use the changed data get rebuilt, fixing the problem of every page building after each dispatch. Many thanks 🙏
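The "bind only a slice of the store" advice above is framework-agnostic; here is a minimal Python sketch of the idea (illustrative, not Flutter/Dart code):

```python
class Store:
    """Toy store that notifies a subscriber only when its slice changes."""

    def __init__(self, state):
        self.state = state
        self._subs = []  # entries of [selector, callback, last_value]

    def subscribe(self, selector, callback):
        self._subs.append([selector, callback, selector(self.state)])

    def dispatch(self, reducer):
        self.state = reducer(self.state)
        for sub in self._subs:
            selector, callback, last = sub
            new = selector(self.state)
            if new != last:  # rebuild only when the selected slice changed
                sub[2] = new
                callback(new)

rebuilds = []
store = Store({"a": 1, "b": 2})
store.subscribe(lambda s: s["a"], lambda v: rebuilds.append(("a", v)))
store.subscribe(lambda s: s["b"], lambda v: rebuilds.append(("b", v)))
store.dispatch(lambda s: {**s, "b": 3})
# only the "b" subscriber rebuilt
assert rebuilds == [("b", 3)]
```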
| gharchive/issue | 2018-12-26T09:46:36 | 2025-04-01T04:32:21.764337 | {
"authors": [
"CarGuo",
"qq326646683"
],
"repo": "CarGuo/GSYGithubAppFlutter",
"url": "https://github.com/CarGuo/GSYGithubAppFlutter/issues/198",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
Switching between fullscreen and normal screen after pausing shows a black screen
(!!!! Please make sure to follow the issue template and edit it before submitting !!!! Issues that do not follow the template will be deleted)
(P.S. Please check the FAQ collection and the demo on the home page first!)
Before asking, it is recommended to read: https://mp.weixin.qq.com/s/HjSdmAsHuvixCH_EWdvk3Q
Problem description:
On the same playback screen, switching between fullscreen and normal screen after pausing shows a black screen. I would like it to stay on the last frame before the pause.
Device / system:
Samsung S9+, Android 10
GSY dependency version
e.g. implementation 'com.shuyu:gsyVideoPlayer-java:8.0.0'
Reproduction steps in the demo
Problem code: (if any)
Problem log: (if any)
D/IJKMEDIA: IjkMediaPlayer_setVideoSurface
D/IJKMEDIA: ijkmp_set_android_surface(surface=0x0)
D/IJKMEDIA: ffpipeline_set_surface()
D/IJKMEDIA: ijkmp_set_android_surface(surface=0x0)=void
W/tv.danmaku.ijk.media.player.IjkMediaPlayer: setScreenOnWhilePlaying(true) is ineffective for Surface
D/IJKMEDIA: IjkMediaPlayer_setVideoSurface
D/IJKMEDIA: ijkmp_set_android_surface(surface=0xffa4b620)
D/IJKMEDIA: ffpipeline_set_surface()
D/IJKMEDIA: ijkmp_set_android_surface(surface=0xffa4b620)=void
The log when switching during normal playback is:
D/IJKMEDIA: IjkMediaPlayer_setVideoSurface
D/IJKMEDIA: ijkmp_set_android_surface(surface=0x0)
D/IJKMEDIA: ffpipeline_set_surface()
D/IJKMEDIA: ijkmp_set_android_surface(surface=0x0)=void
W/tv.danmaku.ijk.media.player.IjkMediaPlayer: setScreenOnWhilePlaying(true) is ineffective for Surface
D/IJKMEDIA: IjkMediaPlayer_setVideoSurface
D/IJKMEDIA: ijkmp_set_android_surface(surface=0xffa4bb40)
D/IJKMEDIA: ffpipeline_set_surface()
D/IJKMEDIA: ijkmp_set_android_surface(surface=0xffa4bb40)=void
D/IJKMEDIA: ANativeWindow_setBuffersGeometry: w=1920, h=1080, f=(0x1) => w=640, h=360, f=RV32(0x32335652)
setShowPauseCover
setShowPauseCover
Where should this method be called? I did add it, but it had no effect.
Usually before playback. Is setRenderType the default TEXTURE?
I currently also set it before playback, and setRenderType is TEXTURE.
https://user-images.githubusercontent.com/10770362/116344893-a886fe00-a819-11eb-880c-d1c5d8029d26.mp4
As shown in the video above.
I finally solved it by tracing through the source code. I was using GsyVideoManager.instance().pause() to pause playback, which meant mCurrentState was never updated, so the check in showPauseCover could not pass. Pausing through the Player object instead works.
good ~
| gharchive/issue | 2021-04-28T03:46:19 | 2025-04-01T04:32:21.773958 | {
"authors": [
"CarGuo",
"F1ReKing"
],
"repo": "CarGuo/GSYVideoPlayer",
"url": "https://github.com/CarGuo/GSYVideoPlayer/issues/3234",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
723788249 | A support case history
Store all the messages used to open recent support cases (maybe the most recent 10 or 20) and link the reopen command to use that list instead of manually searching for messages.
Could then include a recap on case reopen
When reopening a specific member's case, it could reopen the case in the original channel if possible
Perhaps could be used to drag channels out of unavailable to continue the case if need be
This would also fix certain issues:
Open cases made while rebooting would always have all the required info (pertaining to level of persistence)
Reopening while specifying a member would also work better
Update: This will now be done with case IDs.
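A bounded history of case-opening messages like the one proposed could be sketched with a deque (hypothetical names, not Carberretta's actual code):

```python
from collections import deque

case_history = deque(maxlen=20)  # keep only the most recent 20 cases

def record_case(message_id, member_id, channel_id):
    """Remember the message a support case was opened from, so the
    reopen command can look it up instead of searching messages."""
    case_history.append({"message": message_id,
                         "member": member_id,
                         "channel": channel_id})

for i in range(25):
    record_case(i, f"member-{i}", f"channel-{i}")

print(len(case_history))           # -> 20
print(case_history[0]["message"])  # -> 5 (oldest kept entry)
```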
| gharchive/issue | 2020-10-17T15:52:13 | 2025-04-01T04:32:21.778453 | {
"authors": [
"parafoxia"
],
"repo": "Carberra/Carberretta",
"url": "https://github.com/Carberra/Carberretta/issues/49",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
215989442 | Add support for EC2 Container Service, aka ECS
https://aws.amazon.com/ecs/
Support added for all API operations:
http://docs.aws.amazon.com/AmazonECS/latest/APIReference/API_Operations.html
Thanks for a great package!
/Lars
Changes:
list()-types changed to []
used suggested pascalize_keys, but added recursion
atoms in form of :UPPER, :"json-file" and :memberOf converted to :upper, :json_file, :member_of and converted to expected output
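The recursive key conversion can be illustrated in Python (the actual ExAws code is Elixir, and it also handles special atoms such as :UPPER and :"json-file"; this simplified sketch only shows the snake_case to camelCase recursion over nested maps and lists):

```python
def camelize(key):
    """snake_case -> camelCase, e.g. member_of -> memberOf."""
    head, *rest = key.split("_")
    return head + "".join(part.capitalize() for part in rest)

def camelize_keys(value):
    """Recursively camelize dict keys inside nested dicts and lists."""
    if isinstance(value, dict):
        return {camelize(k): camelize_keys(v) for k, v in value.items()}
    if isinstance(value, list):
        return [camelize_keys(v) for v in value]
    return value

assert camelize_keys({"member_of": [{"json_file": 1}]}) == \
    {"memberOf": [{"jsonFile": 1}]}
```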
Not sure if I'm supposed to ping @benwilson512 - so I'll just do it ;-)
I'm really looking forward to the ECS support. What was the reason for closing this PR?
Lack of response. I guess somebody else has been doing it since the readme states "ExAws.ECS (COMING SOON)".
You may be interested in https://github.com/CargoSense/ex_aws/issues/497
@benwilson512, I saw the ticket and I was wondering the same thing with wanting to add RedShift support. I guess the breaking up of ex_aws is passed the proposal stage? If so, I'd be happy to start working on RedShift support as a separate project.
Feedback has been uniformly positive, so I plan on going ahead with the change. I hope to move forward this week with everything, so please feel free to start RedShift as a separate project called ex_aws_redshift and I'll look to include it in the coming organization.
The proposal is a wonderful idea, and I would be happy to do ex_aws_ecs.
I understand your situation completely, ex_aws has grown to something very big.
| gharchive/pull-request | 2017-03-22T08:42:40 | 2025-04-01T04:32:21.799309 | {
"authors": [
"benwilson512",
"larskrantz",
"mayppong"
],
"repo": "CargoSense/ex_aws",
"url": "https://github.com/CargoSense/ex_aws/pull/385",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
231198521 | [sqs] Add parsing for error entries in SendMessageBatch response
https://docs.aws.amazon.com/AWSSimpleQueueService/latest/APIReference/API_SendMessageBatch.html#API_SendMessageBatch_ResponseElements
https://docs.aws.amazon.com/AWSSimpleQueueService/latest/APIReference/API_BatchResultErrorEntry.html
References:
https://boto3.readthedocs.io/en/latest/reference/services/sqs.html#SQS.Client.send_message_batch
https://github.com/boto/boto/blob/master/boto/sqs/batchresults.py
https://github.com/aws/aws-sdk-cpp/blob/master/aws-cpp-sdk-sqs/source/model/BatchResultErrorEntry.cpp
https://github.com/aws/aws-sdk-go/blob/0bc99f4aa1cf50245de234b1ce0b33463a3ad5fe/service/sqs/api.go#L3689
Thank you!
| gharchive/pull-request | 2017-05-24T23:20:29 | 2025-04-01T04:32:21.802974 | {
"authors": [
"benwilson512",
"swaroopch"
],
"repo": "CargoSense/ex_aws",
"url": "https://github.com/CargoSense/ex_aws/pull/412",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1014271252 | Allow languages to be enabled per a manga instead of for the whole library
Right now you can only set the chapter language for the entire app, which doesn't work well if you use multiple languages. For example, I want to read Tokyo Revengers in Portuguese because there isn't a translation in English, but to do that I need to enable Portuguese for the entire app, making all of my other manga have duplicated chapters, duplicated new chapter notifications, and automatic downloads of every new chapter twice (one for each language), and this continues to get worse with each additional language (I use Spanish too, so everything is tripled)
That's why I think it would be good if we were able to set the desired language for each manga to avoid these problems
Thinking over this I think it would be better to be able to filter a language out https://github.com/CarlosEsco/Neko/issues/309
| gharchive/issue | 2021-10-03T05:39:05 | 2025-04-01T04:32:21.806321 | {
"authors": [
"Bilacomy",
"CarlosEsco"
],
"repo": "CarlosEsco/Neko",
"url": "https://github.com/CarlosEsco/Neko/issues/701",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
131316033 | Fix down arrow icon in CartoIcon
Currently look like that:
Better if that corner is clear :)
No icon, just CSS fix!
| gharchive/issue | 2016-02-04T11:14:52 | 2025-04-01T04:32:21.820265 | {
"authors": [
"MariaCheca"
],
"repo": "CartoDB/CartoAssets",
"url": "https://github.com/CartoDB/CartoAssets/issues/44",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
234539234 | Handle Infinity and NaN values for dataviews
Fixes https://github.com/CartoDB/cartodb/issues/12154
[x] Histogram
[x] Histogram (Overview)
[x] Formula
[x] Formula (Overview)
[x] Aggregation
[x] Aggregation (Overview)
Hey @jgoizueta, I've just pushed a couple of commits to support numeric NaN values for dataviews
Great! I think this is ready for acceptance.
| gharchive/pull-request | 2017-06-08T14:06:10 | 2025-04-01T04:32:21.822837 | {
"authors": [
"dgaubert",
"jgoizueta"
],
"repo": "CartoDB/Windshaft-cartodb",
"url": "https://github.com/CartoDB/Windshaft-cartodb/pull/700",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
117790959 | [don't merge] Vector layers in CartoDB.js
Work in progress. Putting up this PR to track progressively.
@javisantana @rochoa @alonsogarciapablo
@rochoa the majority of these comments apply to leaflet-cartodb-group-layer-base.js, as leaflet-vector-cartodb-group-layer-base.js is a modified copy of it.
| gharchive/pull-request | 2015-11-19T11:07:55 | 2025-04-01T04:32:21.824335 | {
"authors": [
"fdansv"
],
"repo": "CartoDB/cartodb.js",
"url": "https://github.com/CartoDB/cartodb.js/pull/830",
"license": "bsd-3-clause",
"license_type": "permissive",
"license_source": "bigquery"
} |
191533836 | Widget configuration error when DYNAMIC is set to NO
Context
The bullet point to set the DYNAMIC parameter to either YES or NO in the widget's configuration breaks after selecting NO
Steps to Reproduce
On a map, add a widget of any kind
Set DYNAMIC to NO
Go back to exit the widget configuration
Open widget configuration again, there is no bullet point, although the widget continues behaving as expected
Current Result
Bullet point disappears after setting NO
Expected result
Bullet point should show the current setting
S/B: 10720998
Already fixed.
| gharchive/issue | 2016-11-24T14:33:00 | 2025-04-01T04:32:21.828143 | {
"authors": [
"ernesmb",
"michellechandra",
"noguerol"
],
"repo": "CartoDB/cartodb",
"url": "https://github.com/CartoDB/cartodb/issues/10817",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
209744804 | Changes for embed view
This is the list of pending things for the embed view. Unfortunately it requires changes over CARTO.js, Deep-Insights.js and Builder. There is a link to the design at the bottom:
[ ] Delete blue bar from embed view (deep-insights.js).
[ ] Remove blue bar map option from CARTO Builder (Builder).
[ ] Move social buttons to CARTO.js.
[ ] Create new component (A) where title, description and legend will be placed (CARTO.js).
[ ] Create new title and description components (CARTO.js).
[ ] Create title and description map options in Builder (Builder).
[ ] Make possible to show/hide legends (CARTO.js).
[ ] Move legends within component A (CARTO.js).
[ ] Delete social actions in mobile view (CARTO.js).
[ ] Delete zoom buttons in mobile view (CARTO.js).
[ ] Create slide component for showing legend in mobile (CARTO.js).
[ ] Show in a different way the click popups in mobile (CARTO.js).
*Design: https://invis.io/HUAAC2GSV
The last point 'Show in a different way the click popups in mobile (CARTO.js).' should be evaluated. It's an awesome feature but might be a bit tricky for this phase. I'd say that we need to think deeply about the scope of this
Last update:
All resolution sizes have the same structure for the component (Title + Description + Legend).
The case when the user won't show the title or description.
Including a visual showing how the legends work.
Here the final visual on InVision project:
https://invis.io/HUAAC2GSV
| gharchive/issue | 2017-02-23T12:13:09 | 2025-04-01T04:32:21.833846 | {
"authors": [
"saleiva",
"urbanphes",
"xavijam"
],
"repo": "CartoDB/cartodb",
"url": "https://github.com/CartoDB/cartodb/issues/11623",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
355619591 | Expire OAuth authorization when removing/updating the authorization for an org
Just leaving it as a notice for the future. If we remove an oauth_app_organization, we should probably invalidate all existing oauth_app_users related to that organization if the app is restricted. Similar checks if an app changes from normal to restricted, etc.
cc @alrocar in case you are interested in pursuing this
Closing in favor of https://app.clubhouse.io/cartoteam/story/83528/expire-oauth-authorization-when-removing-updating-the-authorization-for-an-org
| gharchive/issue | 2018-08-30T14:46:37 | 2025-04-01T04:32:21.836041 | {
"authors": [
"gonzaloriestra",
"javitonino"
],
"repo": "CartoDB/cartodb",
"url": "https://github.com/CartoDB/cartodb/issues/14262",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
164129391 | Super silly thing: order of coloring by value
Is there a way to reverse the order of the colors in the color scales (brewers and stuff) for coloring by value? It would be useful in the future.
In case you want to keep it simple, I'd map light colors to low values of the field. Now fields with low values get intense colors, which to me is counter-intuitive. See attachment: cluster with value 0 gets the darkest color.
Sorry guys if I'm being a pain in the ass, specially these days :)
You are doing great, don't worry ;)
It's a dupe BTW :)
| gharchive/issue | 2016-07-06T17:22:40 | 2025-04-01T04:32:21.838047 | {
"authors": [
"fernando-carto",
"saleiva"
],
"repo": "CartoDB/cartodb",
"url": "https://github.com/CartoDB/cartodb/issues/8583",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
317981716 | Fix short title translations
Related issue #13828
I know, but it was in the old locale file, so I prefer to keep it :)
| gharchive/pull-request | 2018-04-26T11:09:55 | 2025-04-01T04:32:21.838874 | {
"authors": [
"elenatorro"
],
"repo": "CartoDB/cartodb",
"url": "https://github.com/CartoDB/cartodb/pull/13899",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
162453165 | 8230 importing visualizations with quotes
Fixes #8230
Backend is :green_apple: Please CR @juanignaciosl
✨ , thank you very much for taking this so quickly!
| gharchive/pull-request | 2016-06-27T13:34:56 | 2025-04-01T04:32:21.839947 | {
"authors": [
"javitonino",
"juanignaciosl"
],
"repo": "CartoDB/cartodb",
"url": "https://github.com/CartoDB/cartodb/pull/8233",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
279318341 | Support reduced style packages
Currently most of the SDK binary footprint is the style asset, specifically one font (NotoSansPlus), and most of that is one specific script, Korean Hangul, which adds about 4MB extra. When we remove the font, all CJK languages disappear on the map, which makes a big part of the world essentially nameless.
A possibly quite optimal solution would be a processing script that parses the planet file, collects the list of used glyphs, and then creates a clean font set with only the glyphs really used outside of standard Latin typography.
Another solution would be to include a minimal style (without CJK, or just without the K of it), and allow users to use a custom style package. The problem here is that with this one the app footprint would be even larger. Also, this custom app-provided style would not support style updates. So for this solution we'd need to add API-level user control over which style package to use: "lite" or with all languages. Also, the server should include both variations as different styles.
Until we have either solution, I'm bundling the full big style with the SDK, so users upgrading the SDK will not get a sudden loss of all texts on the map in big regions. The known issue is that the SDK, and eventually the mobile app footprint, is increased by several MB.
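The glyph-subsetting idea can be sketched in two steps: collect the codepoints actually used by map labels, then feed that set to a font subsetter such as pyftsubset from fontTools (named here as an assumption; the issue doesn't prescribe a tool). A minimal pure-Python sketch of the collection step, with made-up place names standing in for real planet-file data:

```python
# Sketch: collect codepoints beyond the basic Latin ranges; the resulting
# list is the kind of input a subsetter like pyftsubset accepts via
# --unicodes=... (the names below are illustrative, not real planet data).
def used_codepoints(names, threshold=0x024F):
    """Return the set of codepoints above `threshold` (past Latin Extended-B)."""
    points = set()
    for name in names:
        for ch in name:
            if ord(ch) > threshold:
                points.add(ord(ch))
    return points

names = ["Berlin", "서울", "東京", "Москва"]  # illustrative labels only
unicodes = ",".join("U+%04X" % cp for cp in sorted(used_codepoints(names)))
```

With such a list, the full NotoSans font could in principle be reduced to only the glyphs the data really uses.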
Style is already reduced / optimized, not so needed.
| gharchive/issue | 2017-12-05T10:16:02 | 2025-04-01T04:32:21.842018 | {
"authors": [
"jaakla"
],
"repo": "CartoDB/mobile-sdk",
"url": "https://github.com/CartoDB/mobile-sdk/issues/164",
"license": "bsd-3-clause",
"license_type": "permissive",
"license_source": "bigquery"
} |
915834479 | This triggers another on.push action
When running this action inside on: push, it triggers another workflow run to start. A regular git push, however, does not cause this trigger.
At which point in the process do you mean?
It creates a temporary branch in the repository if the target branch is protected (e.g., if some CI workflows/tests are required to succeed or finish before one can push to it). This action pushes a temporary branch for which this CI can then run (and hopefully succeed) before finally merging it into the target protected branch.
The final push only triggers more CI workflows if it's part of the list:
on:
  push:
    branches:
      - target_branch
      - 'push-action/**'
All workflows are triggered as a direct cause of using a personal access token (PAT), if the GITHUB_TOKEN is used instead, no further workflows will be triggered (see https://docs.github.com/en/actions/reference/events-that-trigger-workflows).
This is necessary in order to trigger the workflows and CI for the temporary branch.
If you're using this action for an unprotected branch, then you can use GITHUB_TOKEN and no extra workflows should be triggered.
On another note, I can see that GitHub has recently chosen to include a way of manually starting a workflow through their API, which may be an option for this action to start the temporary branch workflow and avoid the final trigger when merging into the target branch (see https://docs.github.com/en/actions/reference/events-that-trigger-workflows#manual-events).
However, I believe this solution will only ever work for a public repository. For private repositories a PAT is always needed, resulting again in the current situation.
| gharchive/issue | 2021-06-09T06:30:27 | 2025-04-01T04:32:21.849953 | {
"authors": [
"CasperWA",
"nikoloza"
],
"repo": "CasperWA/push-protected",
"url": "https://github.com/CasperWA/push-protected/issues/42",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
719781770 | Clean up dependencies
Closes #183
Create a server extra that installs Voilà and ASE.
Remove unused dependencies: App Mode, JupyterLab, NumPy.
Update README accordingly and ensure the CLI returns a non-zero code and a helpful message on how to make it work if Voilà is not installed.
Missing:
[ ] Tests
Codecov Report
:exclamation: No coverage uploaded for pull request base (develop@38a6b24). Click here to learn what that means.
The diff coverage is n/a.
@@ Coverage Diff @@
## develop #184 +/- ##
==========================================
Coverage ? 35.82%
==========================================
Files ? 17
Lines ? 2127
Branches ? 0
==========================================
Hits ? 762
Misses ? 1365
Partials ? 0
Flag
Coverage Δ
#optimade-client
35.82% <0.00%> (?)
Flags with carried forward coverage won't be shown. Click here to find out more.
Continue to review full report at Codecov.
Legend - Click here to learn more
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 38a6b24...fb72d56. Read the comment docs.
| gharchive/pull-request | 2020-10-13T01:32:30 | 2025-04-01T04:32:21.857322 | {
"authors": [
"CasperWA",
"codecov-io"
],
"repo": "CasperWA/voila-optimade-client",
"url": "https://github.com/CasperWA/voila-optimade-client/pull/184",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
610633354 | Widevine extract issue
I am experimenting with an S905W TV box and reinstalling Kodi and/or this plugin frequently. Over the last 24 hours I'm facing a strange issue that was not happening before. It occurs at the first play of a video, when Widevine needs to be installed. It fails. I am guessing it has something to do with the fact that there are profiles with Greek characters on Netflix. But this was not happening before, and I have not changed anything regarding the profiles.
Here is the error produced:
2020-05-01 07:46:04.420 T:3664741232 ERROR: [plugin.video.netflix (2)] Traceback (most recent call last):
File "/storage/.kodi/addons/plugin.video.netflix/resources/lib/run_addon.py", line 182, in run
route([part for part in g.PATH.split('/') if part])
File "/storage/.kodi/addons/plugin.video.netflix/resources/lib/run_addon.py", line 33, in lazy_login_wrapper
return func(*args, **kwargs)
File "/storage/.kodi/addons/plugin.video.netflix/resources/lib/run_addon.py", line 57, in route
play(videoid=pathitems[1:])
File "/storage/.kodi/addons/plugin.video.netflix/resources/lib/common/videoid.py", line 294, in wrapper
return func(*args, **kwargs)
File "/storage/.kodi/addons/plugin.video.netflix/resources/lib/common/logging.py", line 133, in timing_wrapper
return func(*args, **kwargs)
File "/storage/.kodi/addons/plugin.video.netflix/resources/lib/navigation/player.py", line 72, in play
list_item = get_inputstream_listitem(videoid)
File "/storage/.kodi/addons/plugin.video.netflix/resources/lib/navigation/player.py", line 144, in get_inputstream_listitem
if not is_helper.check_inputstream():
File "/storage/.kodi/addons/script.module.inputstreamhelper/lib/inputstreamhelper/__init__.py", line 407, in check_inputstream
return self._check_drm()
File "/storage/.kodi/addons/script.module.inputstreamhelper/lib/inputstreamhelper/__init__.py", line 362, in _check_drm
return self.install_widevine()
File "/storage/.kodi/addons/script.module.inputstreamhelper/lib/inputstreamhelper/__init__.py", line 33, in clean_before_after
result = func(self, *args, **kwargs)
File "/storage/.kodi/addons/script.module.inputstreamhelper/lib/inputstreamhelper/__init__.py", line 223, in install_widevine
result = install_widevine_arm(backup_path())
File "/storage/.kodi/addons/script.module.inputstreamhelper/lib/inputstreamhelper/widevine/arm.py", line 240, in install_widevine_arm
extract_widevine_from_img(os.path.join(backup_path, arm_device['version']))
File "/storage/.kodi/addons/script.module.inputstreamhelper/lib/inputstreamhelper/widevine/arm.py", line 253, in extract_widevine_from_img
for root, _, files in os.walk(mnt_path()):
File "/usr/lib/python2.7/os.py", line 296, in walk
File "/usr/lib/python2.7/os.py", line 296, in walk
File "/usr/lib/python2.7/os.py", line 296, in walk
File "/usr/lib/python2.7/os.py", line 286, in walk
File "/usr/lib/python2.7/posixpath.py", line 73, in join
UnicodeDecodeError: 'ascii' codec can't decode byte 0xc5 in position 29: ordinal not in range(128)
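The traceback boils down to Python 2's os.path.join implicitly ASCII-decoding a raw byte filename (here containing 0xc5) when it is joined with a unicode mount path. A minimal sketch of the safer pattern, decoding explicitly as UTF-8 (the paths and names below are made up for illustration, not inputstreamhelper's actual code):

```python
import os

def safe_join(mount, raw_name, encoding="utf-8"):
    # Decode byte filenames explicitly instead of letting the implicit
    # ASCII decode blow up on non-ASCII bytes such as 0xc5.
    if isinstance(raw_name, bytes):
        raw_name = raw_name.decode(encoding)
    return os.path.join(mount, raw_name)

# b"\xc5\xa0" is "Š" in UTF-8; under Python 2's implicit ASCII decode,
# joining it with a unicode path raises the UnicodeDecodeError above.
p = safe_join("/storage/mnt", b"\xc5\xa0profile")
```

In Python 3, os.walk returns str names when given a str path, so this mixing largely goes away.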
"/storage/.kodi/addons/script.module.inputstreamhelper/lib/inputstreamhelper/widevine/arm.py"
it is clearly written that the error does not come from this add-on
Yes but the above addon was installed by yours. It is a dependency.
if the dependencies don't work, it's not a problem that needs to be solved here
Nothing i can do for you
I understand. Thank you for your time.
| gharchive/issue | 2020-05-01T08:23:24 | 2025-04-01T04:32:21.882607 | {
"authors": [
"CastagnaIT",
"fesarlis"
],
"repo": "CastagnaIT/plugin.video.netflix",
"url": "https://github.com/CastagnaIT/plugin.video.netflix/issues/611",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1109655484 | Setup for CI builds on Github actions
closes #16
Implements build using Github Actions
Builds an Android APK
Installing the APK built from this config was successful :)
@madmas your line endings are somehow not set to 100
| gharchive/pull-request | 2022-01-20T18:54:56 | 2025-04-01T04:32:21.901824 | {
"authors": [
"HamletDRC",
"madmas"
],
"repo": "Celestial-Inc/Stylophile",
"url": "https://github.com/Celestial-Inc/Stylophile/pull/22",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1745423491 | Api koppelen
Zorgen dat de API werkt en er data opgehaald kan worden
Bam!
Is done. Update plsz
Af op 11-06
| gharchive/issue | 2023-06-07T09:10:47 | 2025-04-01T04:32:21.904507 | {
"authors": [
"CelinexPutte",
"rster2002"
],
"repo": "CelinexPutte/proof-of-concept",
"url": "https://github.com/CelinexPutte/proof-of-concept/issues/6",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
137306640 | Add checks needed for new version of dateutil
The new version of python-dateutil doesn't let you pass an empty string or None to the parser without it throwing an error so check for that.
@sloria
Thanks for the quick response on this. This is good to merge when the build passes.
I've squashed these changes (to make them easier to cherry-pick into master) and resolved the remaining issues with the build in https://github.com/CenterForOpenScience/osf.io/commit/381c85189f3fa6d42570d8e6c0101049be4bd1ae
| gharchive/pull-request | 2016-02-29T16:48:47 | 2025-04-01T04:32:21.923882 | {
"authors": [
"barbour-em",
"sloria"
],
"repo": "CenterForOpenScience/osf.io",
"url": "https://github.com/CenterForOpenScience/osf.io/pull/5086",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
[Feature] Fix mithril redraw issues between logs and tb filebrowser [OSF-8587]
Purpose
To solve an issue described in https://github.com/CenterForOpenScience/osf.io/pull/7277,
logs and tb filebrowser mounts where put in same mithril redraw.
Use m.request with background : true to prevent mithril redraws upon promise return, while
keeping part of solution in https://github.com/CenterForOpenScience/osf.io/pull/7277
Changes
Have logFeed use with m.request with background : true
Remove m.startComputation/m.endComputation pair
QA Considerations
Test this with a project that has lots of files, folders, and maybe several addons enabled. The point is to make the filebrowser take a long time to load. This fix should make the logs load independently, and depending on the complexity/size of the files/folders/addons, logs may finish loading earlier.
Check if the 'new component' button is only enabled when the filebrowser on the left is done rendering.
Check if you can create new component or link projects while logs are loading.
Before/After
More of a performance fix; hard to notice/capture the diff before/after,
though logs should render a little faster than before.
Ticket
OSF-8587
@fabmiz Could you make a version of this based on Master? Steve is interested in hotfixing it.
| gharchive/pull-request | 2017-09-07T19:31:06 | 2025-04-01T04:32:21.928458 | {
"authors": [
"brianjgeiger",
"fabmiz"
],
"repo": "CenterForOpenScience/osf.io",
"url": "https://github.com/CenterForOpenScience/osf.io/pull/7670",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
340359885 | Set is_root=True for old folders set as root_node for existing NodeSettings [PLAT-578]
Purpose
The new get_root method in the OsfStorageFolderManager relies on is_root=True being set for all root folders. This is the case for new nodes as that setting happens in the on_add method of osfstorage's NodeSettings model.
Changes
Set is_root=True for all OsfStorageFolder objects that have a fk relationship to a NodeSettings model, as those are all root folders.
QA Notes
We'll do QA on this one!
Documentation
None necessary
Side Effects
None anticipated
Ticket
https://openscience.atlassian.net/browse/PLAT-578
Tested locally on local prod backup. Took ~3m24s:
python manage.py migrate 10.47s user 1.89s system 6% cpu 3:23.93 total
| gharchive/pull-request | 2018-07-11T18:41:19 | 2025-04-01T04:32:21.932136 | {
"authors": [
"erinspace",
"sloria"
],
"repo": "CenterForOpenScience/osf.io",
"url": "https://github.com/CenterForOpenScience/osf.io/pull/8537",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
302449478 | [PLAT-218] Change log callback so we can log file views
Ticket
https://openscience.atlassian.net/browse/PLAT-218
Plat PR:
https://github.com/CenterForOpenScience/osf.io/pull/8184
Purpose
Recently we opened up the log_to_callback function to send back data on GET actions; this completes that behavior by sending the file's version info and not sending it when listing revisions.
Changes
Minor logic changes to handler.
Side effects
None that I know of.
QA Notes
Should be tested as part of the PLAT-218 ticket; follow the notes on that PR.
Deployment Notes
None that I know of.
Coverage decreased (-0.01%) to 90.096% when pulling c3d137d06f14db875effb0b56d2dd91fd6a7a937 on Johnetordoff:log-views-properly into 5a660f888d50e38aea338baaadbff5742f96dfff on CenterForOpenScience:develop.
Merged, but the request metadata structure has been changed slightly. The metadata is now under the request_meta key and includes four fields: url, referrer, method, and user_agent. The full list of headers is no longer being returned.
| gharchive/pull-request | 2018-03-05T20:26:26 | 2025-04-01T04:32:21.936940 | {
"authors": [
"Johnetordoff",
"coveralls",
"felliott"
],
"repo": "CenterForOpenScience/waterbutler",
"url": "https://github.com/CenterForOpenScience/waterbutler/pull/324",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
891925840 | Export targets in Cmake configuration
It would be great if cesium-native exported the targets in the CMake configuration, so that for an external application it can be as easy as find_package(cesium-native) and then linking against CesiumNative::CesiumNative or each individual library like CesiumNative::CesiumGeospatial.
CC: https://github.com/CesiumGS/cesium-native/issues/231
Duplicate of #231.
| gharchive/issue | 2021-05-14T13:31:43 | 2025-04-01T04:32:22.000234 | {
"authors": [
"jtorresfabra",
"kring"
],
"repo": "CesiumGS/cesium-native",
"url": "https://github.com/CesiumGS/cesium-native/issues/247",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2231715798 | Need reference implementations for cesium-native users
It could be useful to have a set of "how to get started examples" for anyone wanting to integrate cesium-native into their project for the first time.
This would be a little different from just referencing Cesium for Unreal or Cesium for Unity in that these examples would be as simple as possible to help guide the user through integration.
There could be multiple reference examples:
A C++ application integrating all of cesium-native to stream tiles
A C++ application integrating only parts of cesium-native (Ex. load Gltfs only)
A non-C++ application integrating cesium-native (C#, web assembly, etc)
Ideally, these examples would also be built by CI, to make sure any supported use cases are always working.
Feel free to chime in with other ideas as well.
This is certainly not a "reference implementation". But I have created (and occasionally updated) https://github.com/javagl/cesium-cpp . This is a pure command-line application that uses cesium-native, and offers some "dummy" implementations for the TilesetExternals.
It allows things like reading all glTF sample assets with cesium-native, or loading a tileset and performing some view updates programmatically - basically "emulating" what a real runtime/renderer/application would be calling.
This is not intended to offer any form of stability. There are no maintenance guarantees. Until now, I only used it for very basic debugging and testing. But maybe someone else also finds something useful in there.
I've pointed people to vsgCs and received some positive feedback about it. Some of the things going for it:
It's a complete implementation, but the target API (Vulkan Scene Graph) is much simpler than the other targets with which I'm familiar (Unreal, Omniverse);
Some of the parts are directly useable in other contexts. In particular, its asset accessor, based on libcurl, has made its way into Cesium Omniverse and other Cesium projects (so I'm told);
It has a working example of GltfReader::loadGltf. Working with this function allows implementors to get the basics of glTF loading to work before tackling the streaming of 3D Tiles.
vsgCs is not particularly well documented, but I think the code is generally pretty clear.
| gharchive/issue | 2024-04-08T17:14:24 | 2025-04-01T04:32:22.006467 | {
"authors": [
"csciguy8",
"javagl",
"timoore"
],
"repo": "CesiumGS/cesium-native",
"url": "https://github.com/CesiumGS/cesium-native/issues/855",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1161887718 | Experiment with unloading cache strategy
Opening this PR as a draft to compare different approaches for unloading tiles.
When the tileset unloads its cache, it will unload all loaded tiles up to the root, since the root marks the beginning of recently used tiles in the current frame (https://github.com/CesiumGS/cesium-native/blob/cc754d439e9d43b2f3566386d5e92caafdb383a5/Cesium3DTilesSelection/src/Tileset.cpp#L1194). But is it too conservative?
For example, if the root is level 0 and a tile at level 11 is currently rendered in the current frame, that means all tiles from the root to level 11 in some tileset branches are never released. My understanding is that they are only released when the camera is zoomed out. Not sure if I understand correctly. Assume the example tileset above is using replace refinement.
When loading Nearmap tileset for Manhattan and setting local cache size to 0, the unloadCachedTiles reduces the number of tiles from 279 -> 261 tiles, the number of tiles that should be rendered in this frame is 79 tiles
A different approach for unloading the cache is that we loop through loaded tiles to the end and eagerly remove tiles that are not selected to be rendered in the current frame
So for this approach, I see unloadCachedTiles reduces the number of tiles from 278 -> 78 tiles, and the number of tiles that should be rendered in this frame is also 78 tiles.
The disadvantage I can see for the second approach is that, since tiles are removed more eagerly compared to the original method, you can see more tiles disappear and re-appear when moving the camera around with a small cache size.
But I think it works best for the first-person-shooter use case, where the camera tends to be quite close to the detailed tiles.
I'm not sure which one is good though.
Actually this wastefully removes preloaded ancestors and siblings and causes the GC to work too much.
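The two strategies being compared can be illustrated with a toy model (hypothetical structures, not cesium-native's actual tile list): the conservative pass walks the least-recently-used list and stops at the first tile touched this frame, while the eager pass drops every loaded tile not selected for rendering.

```python
def unload_conservative(lru, current_frame):
    """Unload from least- to most-recently-used; stop at the first tile
    touched this frame (everything after it counts as recently used)."""
    kept, unloaded = list(lru), []
    while kept and kept[0]["frame"] != current_frame:
        unloaded.append(kept.pop(0))
    return kept, unloaded

def unload_eager(lru, rendered_ids):
    """Drop every loaded tile not selected for rendering this frame."""
    kept = [t for t in lru if t["id"] in rendered_ids]
    unloaded = [t for t in lru if t["id"] not in rendered_ids]
    return kept, unloaded

# Toy data: "stale" was last touched in frame 8; "root" and "mid" were
# traversed (but not rendered) in frame 10; only "leaf" is rendered.
lru = [{"id": "stale", "frame": 8}, {"id": "root", "frame": 10},
       {"id": "mid", "frame": 10}, {"id": "leaf", "frame": 10}]
```

The toy shows the trade-off from the comment above: the eager pass frees more, but also evicts the preloaded ancestors and siblings that the conservative pass keeps.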
| gharchive/pull-request | 2022-03-07T20:27:51 | 2025-04-01T04:32:22.009982 | {
"authors": [
"baothientran"
],
"repo": "CesiumGS/cesium-native",
"url": "https://github.com/CesiumGS/cesium-native/pull/454",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2253931047 | Update copyright across all files
Updates the copyright at the top of all files for the range 2020-2024. Also adds it in the files where it was missing.
I just opened this PR because the inconsistency kept frustrating me. 🙃
Agreed, it is a bit frustrating when copyrights go out of date. I've worked on some projects where I solved this by just not including them anymore.
Tagging self for review and starting the arduous 236 file diff ...
Only recommendation I would have would be write an issue to consider automating this (or removing the copyright altogether?)
The reason we have a copyright header in cesium-unreal and nowhere else is because it's required by Epic's Marketplace guidelines. Or at least it was, I haven't checked lately.
| gharchive/pull-request | 2024-04-19T20:30:27 | 2025-04-01T04:32:22.011979 | {
"authors": [
"csciguy8",
"j9liu",
"kring"
],
"repo": "CesiumGS/cesium-unreal",
"url": "https://github.com/CesiumGS/cesium-unreal/pull/1398",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2146877160 | updating Camera.changed to account for changes in roll
Description
I attempted to fix the remaining issues with #9365, which I think was closed prematurely.
Current behavior: in cases where the camera is not pointed straight down at the surface, twistLeft/twistRight do not cause a Camera changed event to fire.
Attempted fix: add a check for roll similar to the one already performed for heading. This seems to properly cause the changed event to be triggered. I do not know if this is the most optimal fix; I am just copying what was done by others to handle this case too.
Issue number and link
#9365 Can open a new issue if appropriate
Testing plan
I added a test case that listens for the Camera changed event for twistLeft after first telling the camera to lookUp by 45deg. This test failed before my changes, and passed after my changes.
Author checklist
[x] I have submitted a Contributor License Agreement
[x] I have added my name to CONTRIBUTORS.md
[x] I have updated CHANGES.md with a short summary of my change
[x] I have added or updated unit tests to ensure consistent code coverage
[x] I have updated the inline documentation, and included code examples where relevant
[x] I have performed a self-review of my code
Thanks @malaretv! I can confirm we have a CLA on file for you.
Looks good! Thanks @malaretv!
| gharchive/pull-request | 2024-02-21T14:21:22 | 2025-04-01T04:32:22.016694 | {
"authors": [
"ggetz",
"malaretv"
],
"repo": "CesiumGS/cesium",
"url": "https://github.com/CesiumGS/cesium/pull/11844",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
262852291 | 9 Greedy: Added rucksack problem algorithm
I have implemented the rucksack/knapsack problem in Go! https://en.wikipedia.org/wiki/Knapsack_problem
If I have to change something tell me
Greetings,
duxy1996
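For readers unfamiliar with the problem, this is the standard 0/1 knapsack dynamic program (the PR itself is written in Go; this Python sketch is only illustrative):

```python
# Illustrative 0/1 knapsack solver: pick a subset of items maximizing total
# value without exceeding the capacity. Standard bottom-up DP over capacity.
def knapsack(capacity, weights, values):
    # dp[c] = best total value achievable with capacity c
    dp = [0] * (capacity + 1)
    for w, v in zip(weights, values):
        # iterate capacities downwards so each item is taken at most once
        for c in range(capacity, w - 1, -1):
            dp[c] = max(dp[c], dp[c - w] + v)
    return dp[capacity]
```

Runtime is O(n * capacity), the usual pseudo-polynomial bound for this problem.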
@Duxy1996 Reference the issue number in the PR
The issue is number 9; I added it to the PR title.
| gharchive/pull-request | 2017-10-04T16:32:42 | 2025-04-01T04:32:22.018721 | {
"authors": [
"Ch3ck",
"Duxy1996"
],
"repo": "Ch3ck/AlGo",
"url": "https://github.com/Ch3ck/AlGo/pull/56",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
1246106218 | Lodestar release unintended version
Describe the bug
When testing #4048, I tried lerna version without the --no-git-tag-version flag, and the release flow was triggered unintentionally with version chore(release): v0.37.0-dev.ef9528aa59
Expected behavior
The release workflow should validate the tag name, or the branch, before creating the release
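A minimal sketch of the kind of validation meant here (hypothetical, not Lodestar's actual workflow code): accept only stable semver tags, so a dev build such as v0.37.0-dev.ef9528aa59 never triggers a release.

```python
import re

# Hypothetical gate for a release workflow: only a stable semver tag like
# "v0.37.0" may trigger a release; dev/pre-release tags are rejected.
def is_release_tag(tag: str) -> bool:
    return re.fullmatch(r"v\d+\.\d+\.\d+", tag) is not None
```

A check like this, run as the first step of the workflow, would have stopped the unintended v0.37.0-dev release described above.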
Resolved with new gitflow process: https://github.com/ChainSafe/lodestar/pull/4071
| gharchive/issue | 2022-05-24T07:28:44 | 2025-04-01T04:32:22.056804 | {
"authors": [
"philknows",
"tuyennhv"
],
"repo": "ChainSafe/lodestar",
"url": "https://github.com/ChainSafe/lodestar/issues/4050",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1271329098 | Move fastify to devDependency in lodestar-api
Describe the bug
lodestar-api defines the client and server in the same package. That's very convenient for DX, but users should NOT have to install a server-side REST framework just to use the client.
Architect the types and exports so that fastify does not need to be a direct dependency of lodestar-api
https://github.com/ChainSafe/lodestar/blob/04155c66fb28da65650f48d3394ce9762b02f374/packages/api/package.json#L71
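One common way to achieve this (a sketch of the pattern, not necessarily what Lodestar adopted) is to split the package entry points and make fastify an optional peer dependency, so importing the client never pulls in the server framework:

```json
{
  "name": "@chainsafe/lodestar-api",
  "exports": {
    ".": "./lib/client/index.js",
    "./server": "./lib/server/index.js"
  },
  "peerDependencies": {
    "fastify": "^3.0.0"
  },
  "peerDependenciesMeta": {
    "fastify": { "optional": true }
  }
}
```

With this layout, only consumers who import the `/server` subpath need fastify installed.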
Oh, it was introduced in this mega commit and wasn't picked up in review
https://github.com/ChainSafe/lodestar/commit/dfd4cdcbf5a364fa0c41aeaac2c69d8b344c0ebe#diff-069a572372e4f2574eb68db90e3d6ff4d0766296abec7b3c7b0f45b098d38fcaR72
| gharchive/issue | 2022-06-14T20:31:13 | 2025-04-01T04:32:22.059129 | {
"authors": [
"dapplion"
],
"repo": "ChainSafe/lodestar",
"url": "https://github.com/ChainSafe/lodestar/issues/4159",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
898763173 | Allow to run validator with interop key range
Motivation
It is useful for local testnets to run a validator with interop keys quickly
Description
Add CLI arg to use a contiguous inclusive range of interop keys
--interopIndexes 17..32
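Parsing such a range is straightforward; a hedged sketch (not the actual Lodestar CLI code, function name is hypothetical):

```python
# Sketch of parsing the proposed argument: "17..32" becomes the inclusive
# list [17, 18, ..., 32]; a bare number like "17" yields [17].
def parse_interop_indexes(arg: str):
    if ".." in arg:
        start, end = arg.split("..", 1)
        return list(range(int(start), int(end) + 1))
    return [int(arg)]
```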
file conflict
| gharchive/pull-request | 2021-05-22T09:27:15 | 2025-04-01T04:32:22.060596 | {
"authors": [
"dapplion",
"wemeetagain"
],
"repo": "ChainSafe/lodestar",
"url": "https://github.com/ChainSafe/lodestar/pull/2548",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2116499692 | cl.step can't feedback?
cl.step can't feedback?
Set disable_feedback to False.
hi @lw3259111, I'm learning to build Chainlit apps and wanted to ask: did you implement this through Literal AI?
@lw3259111 Hi, I'm learning Chainlit and would like to ask how you implemented the feedback feature. Was it through literalai, or some other way?
Some other way.
| gharchive/issue | 2024-02-03T14:07:42 | 2025-04-01T04:32:22.066982 | {
"authors": [
"lw3259111",
"samosun",
"willydouhard"
],
"repo": "Chainlit/chainlit",
"url": "https://github.com/Chainlit/chainlit/issues/723",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1624871366 | Hi everyone, how should I configure things if I want to bind a custom domain?
Hi everyone, how should I configure things if I want to bind a custom domain?
Here's how I did it:
At your domain registrar, first resolve your domain (plus a hostname) to your server's public IP.
Then set up nginx on your server with a proxy rule that points hostname.domain at the local port, i.e. public-IP:3002.
1. Get a server (ideally overseas).
2. Register a domain and bind it to your server's IP.
3. Install nginx.
4. Deploy the backend service; PM2 is recommended to keep it running.
5. Build the frontend and configure nginx to serve the built dist directory.
nginx reference config:
server {
    listen 80;

    location / {
        root /home/html/dist;    # site root: set this to your own path
        index index.html index.htm;
        try_files $uri $uri/ /index.html;
    }

    # forward API requests to the backend service
    location /api {
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header REMOTE-HOST $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass http://localhost:3002;
    }
}
Everyone here is so helpful... honestly though, I'd suggest the OP just ask ChatGPT directly, it's more efficient.
Deploying with Railway makes this easy: under Settings -> Environment -> Domains, just create a domain. The one Railway assigns is long and hard to remember, so you can then create a custom domain and point it at the Railway-generated one via a CNAME record. After that you can access the app through your own domain; remember to use HTTPS.
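In DNS terms, that CNAME step is a single record (the names shown here are hypothetical):

```
chat.example.com.   300   IN   CNAME   my-app.up.railway.app.
```

Your registrar's control panel typically exposes this as a "CNAME" entry with a host and a target field.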
Mine was simple: I used my Synology's domain name and its built-in reverse proxy, so domain-plus-port works out of the box and saves a lot of hassle.
I deployed with Railway as described above, but visiting the domain shows the following. Did I get a step wrong?
upstream connect error or disconnect/reset before headers. retried and the latest reset reason: connection failure, transport failure reason: delayed connect error: 111
I deployed on Railway, but there is no Domains option under Settings. What's going on?
Could you share your Synology configuration? I've set it up but it never works.
Never mind, I set the network mode to the same one Docker uses and now it responds normally.
Which directory does "site root: set this to your own path" refer to? My docker command is run from the chatgpt-web directory.
| gharchive/issue | 2023-03-15T06:54:30 | 2025-04-01T04:32:22.099567 | {
"authors": [
"Alaskdream",
"IrvingYoung",
"JarvanDing",
"haihaimx",
"jikeppq163",
"jykSS",
"luuziiyoo",
"nagaki09",
"tianstephanie",
"woodchen-ink"
],
"repo": "Chanzhaoyu/chatgpt-web",
"url": "https://github.com/Chanzhaoyu/chatgpt-web/issues/603",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2336404106 | Cow: Blacklist assignmenthelps.org
Cow requests the blacklist of the website assignmenthelps\.org. See the MS search here and the Stack Exchange search in text, in URLs, and in code.
assignmenthelps\.org has been seen in 6 true positives, 0 false positives, and 0 NAAs.
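The watched/blacklisted keyword is a regular expression; a quick illustrative check of what the escaped dot does:

```python
import re

# The backslash escapes the dot so it matches a literal "." rather than
# any character, which is why watch patterns are written this way.
pattern = re.compile(r"assignmenthelps\.org")
assert pattern.search("visit assignmenthelps.org today") is not None
assert pattern.search("assignmenthelpsXorg") is None
```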
| gharchive/pull-request | 2024-06-05T17:00:03 | 2025-04-01T04:32:22.115716 | {
"authors": [
"SmokeDetector",
"metasmoke"
],
"repo": "Charcoal-SE/SmokeDetector",
"url": "https://github.com/Charcoal-SE/SmokeDetector/pull/11425",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2367747203 | Cow: Watch thoroughbredlabs.co.uk
Cow requests the watch of the watch_keyword thoroughbredlabs\.co\.uk. See the MS search here and the Stack Exchange search in text, in URLs, and in code.
thoroughbredlabs\.co\.uk has been seen in 1 true positive, 0 false positives, and 0 NAAs.
Approved by Jeff Schaller in Charcoal HQ
| gharchive/pull-request | 2024-06-22T11:12:32 | 2025-04-01T04:32:22.118856 | {
"authors": [
"SmokeDetector",
"metasmoke"
],
"repo": "Charcoal-SE/SmokeDetector",
"url": "https://github.com/Charcoal-SE/SmokeDetector/pull/11663",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2453816943 | General Grievance: Watch generationit.nl
General Grievance requests the watch of the watch_keyword generationit\.nl. See the MS search here and the Stack Exchange search in text, in URLs, and in code.
generationit\.nl has been seen in 0 true positives, 0 false positives, and 0 NAAs.
Approved by Spevacus in Charcoal HQ
| gharchive/pull-request | 2024-08-07T15:51:40 | 2025-04-01T04:32:22.121969 | {
"authors": [
"SmokeDetector",
"metasmoke"
],
"repo": "Charcoal-SE/SmokeDetector",
"url": "https://github.com/Charcoal-SE/SmokeDetector/pull/12554",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
942462874 | Ian Campbell: Watch curadebt.com
Ian Campbell requests the watch of the watch_keyword curadebt\.com. See the MS search here and the Stack Exchange search in text, in URLs, and in code.
curadebt\.com has been seen in 0 true positives, 0 false positives, and 0 NAAs.
Approved by Makyen in Charcoal HQ
| gharchive/pull-request | 2021-07-12T21:25:18 | 2025-04-01T04:32:22.125357 | {
"authors": [
"SmokeDetector",
"metasmoke"
],
"repo": "Charcoal-SE/SmokeDetector",
"url": "https://github.com/Charcoal-SE/SmokeDetector/pull/6585",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |