| added (string, 2025-04-01 04:05:38 – 2025-04-01 07:14:06) | created (timestamp[us], 2001-10-09 16:19:16 – 2025-01-01 03:51:31) | id (string, length 4–10) | metadata (dict) | source (string, 2 classes) | text (string, length 0–1.61M) |
|---|---|---|---|---|---|
2025-04-01T06:39:25.846956
| 2024-06-24T18:14:17
|
2370827203
|
{
"authors": [
"SophieGuo410",
"codecov-commenter"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7880",
"repo": "linkedin/ambry",
"url": "https://github.com/linkedin/ambry/pull/2807"
}
|
gharchive/pull-request
|
Support nextMarker and nextContinuationToken
Support nextMarker for listObject and nextContinuationToken for listObjectV2
Codecov Report
Attention: Patch coverage is 48.64865% with 19 lines in your changes missing coverage. Please review.
Project coverage is 18.65%. Comparing base (52ba813) to head (fe3ca37).
Report is 33 commits behind head on master.
Files
Patch %
Lines
...com/github/ambry/frontend/s3/S3MessagePayload.java
52.38%
10 Missing :warning:
...va/com/github/ambry/frontend/s3/S3ListHandler.java
43.75%
7 Missing and 2 partials :warning:
:exclamation: There is a different number of reports uploaded between BASE (52ba813) and HEAD (fe3ca37). Click for more details.
HEAD has 2 uploads less than BASE
| Flag | BASE (52ba813) | HEAD (fe3ca37) |
|------|------|------|
||3|1|
Additional details and impacted files
@@ Coverage Diff @@
## master #2807 +/- ##
=============================================
- Coverage 64.24% 18.65% -45.60%
+ Complexity 10398 2919 -7479
=============================================
Files 840 842 +2
Lines 71755 72314 +559
Branches 8611 8703 +92
=============================================
- Hits 46099 13489 -32610
- Misses 23004 57585 +34581
+ Partials 2652 1240 -1412
:umbrella: View full report in Codecov by Sentry.
:loudspeaker: Have feedback on the report? Share it here.
|
2025-04-01T06:39:25.862432
| 2018-04-26T22:18:56
|
318216592
|
{
"authors": [
"codecov-io",
"npawar"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7881",
"repo": "linkedin/pinot",
"url": "https://github.com/linkedin/pinot/pull/2762"
}
|
gharchive/pull-request
|
Cleanup deprecated code from PinotLLCRealtimeSegmentManager and ValidationManager
As part of https://github.com/linkedin/pinot/pull/2721 we refactored the PinotLLCRealtimeSegmentManager to not depend on znode for stream partition assignment. A lot of methods were rewritten, and older ones deprecated. This PR attempts to clean up all the deprecated and unused methods
Codecov Report
Merging #2762 into master will increase coverage by 11.48%.
The diff coverage is n/a.
@@ Coverage Diff @@
## master #2762 +/- ##
==========================================
+ Coverage 57.51% 69% +11.48%
==========================================
Files 876 876
Lines 42319 41993 -326
Branches 5754 5708 -46
==========================================
+ Hits 24339 28976 +4637
+ Misses 16288 11144 -5144
- Partials 1692 1873 +181
Impacted Files
Coverage Δ
.../core/realtime/PinotLLCRealtimeSegmentManager.java
55.6% <ø> (+49.79%)
:arrow_up:
...pinot/controller/validation/ValidationManager.java
84.39% <ø> (+48.32%)
:arrow_up:
...not/transport/scattergather/ScatterGatherImpl.java
55.69% <0%> (+0.63%)
:arrow_up:
...ore/realtime/impl/RealtimeSegmentStatsHistory.java
80.95% <0%> (+0.68%)
:arrow_up:
.../pinot/core/segment/index/SegmentMetadataImpl.java
81.56% <0%> (+0.7%)
:arrow_up:
...t/creator/impl/SegmentIndexCreationDriverImpl.java
88.43% <0%> (+0.74%)
:arrow_up:
...r/transform/function/ValueInTransformFunction.java
39.2% <0%> (+0.8%)
:arrow_up:
.../helix/core/realtime/SegmentCompletionManager.java
69.54% <0%> (+0.9%)
:arrow_up:
...din/pinot/core/realtime/stream/StreamMetadata.java
67.88% <0%> (+0.91%)
:arrow_up:
...e/io/writer/impl/MutableOffHeapByteArrayStore.java
86.59% <0%> (+1.03%)
:arrow_up:
... and 272 more
Continue to review full report at Codecov.
Legend - Click here to learn more
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 2d22966...84e50a3. Read the comment docs.
|
2025-04-01T06:39:25.904660
| 2020-05-11T15:01:52
|
615954940
|
{
"authors": [
"farshidtz"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7882",
"repo": "linksmart/thing-directory",
"url": "https://github.com/linksmart/thing-directory/issues/9"
}
|
gharchive/issue
|
Set DNS-SD service name/type for the directory
A service type for the directory service is not yet registered at IANA, but it is worth making some improvements in our implementation.
Currently the type is set to: _linksmart-td._tcp
Other works related to this:
https://github.com/w3c/wot-discovery/tree/18820a3f31f191f3e3689158672decf6c906bbf6/prior-work/fujitsu
https://github.com/w3c/wot-discovery/issues/5
Proposal:
<instance-name>._directory._sub._wot._tcp where instance is configurable defaulting to linksmart. The instance name should be made unique in each environment.
The <Instance> portion of the Service Instance Name is a user-
friendly name consisting of arbitrary Net-Unicode text [RFC5198]. It
MUST NOT contain ASCII control characters (byte values 0x00-0x1F and
0x7F) [RFC20] but otherwise is allowed to contain any characters,
without restriction, including spaces, uppercase, lowercase,
punctuation -- including dots -- accented characters, non-Roman text,
and anything else that may be represented using Net-Unicode. For
discussion of why the <Instance> name should be a user-visible, user-
friendly name rather than an invisible machine-generated opaque
identifier, see Appendix C, "What You See Is What You Get".
The <Instance> portion of the name of a service being offered on the
network SHOULD be configurable by the user setting up the service, so
that he or she may give it an informative name. However, the device
or service SHOULD NOT require the user to configure a name before it
can be used. A sensible choice of default name can in many cases
allow the device or service to be accessed without any manual
configuration at all. The default name should be short and
descriptive, and SHOULD NOT include the device's Media Access Control
(MAC) address, serial number, or any similar incomprehensible
hexadecimal string in an attempt to make the name globally unique.
https://tools.ietf.org/html/rfc6763#section-4.1.1
When a DNS-SD service is advertised using Multicast DNS [RFC6762], if
there is already another service of the same type advertising with
the same name then automatic name conflict resolution will occur. As
described in the Multicast DNS specification [RFC6762], upon
detecting a conflict, the service should:
1. Automatically select a new name (typically by appending or
incrementing a digit at the end of the name),
2. Try advertising with the new name, and
3. Upon success, record the new name in persistent storage.
This renaming behavior is very important, because it is key to
providing user-friendly instance names in the out-of-the-box factory-
default configuration.
https://tools.ietf.org/html/rfc6763#appendix-D
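The conflict-resolution steps quoted above (select a new name by appending or incrementing a digit, retry, persist) could be sketched as follows. This is a hypothetical helper for illustration, not part of thing-directory; the `" (N)"` suffix style is an assumption:

```python
import re

def resolve_name_conflict(name, taken):
    """RFC 6763 Appendix D style renaming: when the instance name is already
    advertised, append a digit suffix, then keep incrementing it until free."""
    while name in taken:
        m = re.search(r" \((\d+)\)$", name)
        if m:
            # Increment the existing trailing digit, e.g. "(2)" -> "(3)".
            name = name[:m.start()] + " (%d)" % (int(m.group(1)) + 1)
        else:
            # First conflict: append a digit.
            name += " (2)"
    return name
```

The caller would then re-advertise under the returned name and record it in persistent storage, as the RFC describes.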
Service registration with subtype fails using several tested clients.
CLI registration:
macOS:
$ dns-sd -R "thing directory" _directory._sub._wot._tcp local. 8081
Registering Service thing directory._directory._sub._wot._tcp.local. port 8081
DNSService call failed -65540
Debian:
$ avahi-publish -s "thing directory" "_directory._sub._wot._tcp" 8081
Failed to add service: Invalid service type
Implemented with _wot._tcp type and _directory subtype. Instance name is configurable.
|
2025-04-01T06:39:26.040120
| 2019-09-16T17:54:50
|
494192031
|
{
"authors": [
"hzoppetti",
"leslitagordita"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7883",
"repo": "linode/linode-api-docs",
"url": "https://github.com/linode/linode-api-docs/pull/126"
}
|
gharchive/pull-request
|
[Do Not Merge] LKE Beta Endpoints
Adds LKE Beta endpoints:
/lke/clusters
/lke/clusters/{clusterId}
/lke/clusters/{clusterId}/pools
/lke/clusters/{clusterId}/pools/{poolId}
/lke/clusters/{clusterId}/kubeconfig
/lke/versions
/lke/versions/{version}
Note: This work was started by @asauber and @jfrederickson in the bits repo. This adds the latest updates to that work.
Please add:
lke:read_only
lke:read_write
to the OAuth schema section and to the front information section
A general note for beta: LKE is available in us-central with Kubernetes version 1.16; these should be updated in the examples.
Additionally we want to add a link to the beta sign up page with the beta note.
|
2025-04-01T06:39:26.048680
| 2024-04-10T20:23:00
|
2236367499
|
{
"authors": [
"abailly-akamai",
"carrillo-erik"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7884",
"repo": "linode/manager",
"url": "https://github.com/linode/manager/pull/10366"
}
|
gharchive/pull-request
|
upcoming: [M3-7972] - Invalidate PG queries on Linode create/delete
Description 📝
Now that the POST & DELETE linode/instances Alpha endpoint have been updated to work with placement groups, we need to invalidate the related PG if:
on POST (create linode) we assign a placement group
on DELETE (delete linode) we delete a linode assigned to a placement group
Changes 🔄
Invalidate PG queries on Linode create & delete mutations
Preview 📷
Create Linode (and assign to PG)
Delete Linode (and unassign from PG)
How to test 🧪
Prerequisites
Using alpha environment and having the "Placement Group" feature flag enabled, either:
use your account (needs the placement-group customer tag)
use the pg-user-1 (see creds in 1Password vault)
Have at least one Placement Group created
Verification steps
See the video above:
Create a linode and assign to a PG: confirm UI updates accordingly in the placement group section
Delete a linode that is assigned to a PG: confirm the UI updates accordingly in the placement group section
As an Author I have considered 🤔
Check all that apply
[ ] 👀 Doing a self review
[ ] ❔ Our contribution guidelines
[x] 🤏 Splitting feature into small PRs
[x] ➕ Adding a changeset
[ ] 🧪 Providing/Improving test coverage
[ ] 🔐 Removing all sensitive information from the code and PR description
[ ] 🚩 Using a feature flag to protect the release
[x] 👣 Providing comprehensive reproduction steps
[ ] 📑 Providing or updating our documentation
[ ] 🕛 Scheduling a pair reviewing session
[ ] 📱 Providing mobile support
[ ] ♿ Providing accessibility support
I was able to verify that the create and delete/unassign operations worked as expected with these changes. The linode count updated in the UI accordingly, and I did not observe any regressions.
|
2025-04-01T06:39:26.054593
| 2023-01-05T16:48:40
|
1521050476
|
{
"authors": [
"cpathipa"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7885",
"repo": "linode/manager",
"url": "https://github.com/linode/manager/pull/8689"
}
|
gharchive/pull-request
|
M3-6044: Create Button "Create Using Command Line" and show modal
Description 📝
This story is part of feature API-CLI.
Team, guide me in case I missed setting up any process-oriented things before kicking off feature development (feature flag, etc.).
Note: GA events are on my radar; if needed, follow-up PRs will cover them.
What does this PR do?
Shows the API-CLI awareness modal upon clicking "Create Using Command Line"
Preview 📷
How to test 🧪
Navigate to create Linode page
Scroll to the bottom and click the "Create Using Command Line" button.
Should show the modal.
Work in progress...
How do I run relevant unit or e2e tests?
yarn test ApiAwarenessModal
As a thought, if the intent is to bring awareness about the command line options to other entities as well in the future, maybe ApiAwarenessModal can be made a little more generic now by passing the modal copy/contents in as a child.
It'd look something like:
<Dialog>
{children}
<ActionsPanel>
...
</ActionsPanel>
</Dialog>
the JSX for this specific modal would be defined in LinodeCreate.tsx and then passed as a child/render prop to <ApiAwarenessModal />.
If we take this approach, the component and its test should be moved from the /LinodesCreate directory too
Good call @dwiley-akamai! That was one of the reasons for decoupling ApiAwarenessModal from LinodeCreate in this iteration. We could definitely make it more generic; considering future wireframes, it will make sense to convert ApiAwarenessModal into a reusable component.
|
2025-04-01T06:39:26.069964
| 2020-04-24T12:41:26
|
606284281
|
{
"authors": [
"Valentinkvn",
"grassjelly"
],
"license": "BSD-2-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7886",
"repo": "linorobot/linorobot",
"url": "https://github.com/linorobot/linorobot/issues/41"
}
|
gharchive/issue
|
Scanse Sweep compatibility
Hi,
I'm using linorobot with Jetson Nano (Ubuntu 18.04), Arduino Mega and Sweep Lidar.
I configured the linobase module and the minimal.launch is working fine.
In order to be able to use the Sweep Lidar I added its package to the linorobot directory and ran catkin_make. After that, I changed the linorobot launch files to run the sweep lidar launch file.
When I launch the bringup.launch file, I receive the errors below.
My question is: What else would need to be done in order to use the Sweep Lidar?
started roslaunch server http://<IP_ADDRESS>:34635/
SUMMARY
========
PARAMETERS
* /apply_calib/calib_file: /home/jetson/lino...
* /apply_calib/calibrate_gyros: True
* /ekf_localization/base_link_frame: base_footprint
* /ekf_localization/diagnostics_agg: True
* /ekf_localization/frequency: 50
* /ekf_localization/imu0: /imu/data
* /ekf_localization/imu0_config: [False, False, Fa...
* /ekf_localization/imu0_differential: True
* /ekf_localization/imu0_relative: True
* /ekf_localization/odom0: /raw_odom
* /ekf_localization/odom0_config: [False, False, Fa...
* /ekf_localization/odom0_differential: True
* /ekf_localization/odom0_relative: False
* /ekf_localization/odom_frame: odom
* /ekf_localization/two_d_mode: True
* /ekf_localization/world_frame: odom
* /imu_filter_madgwick/fixed_frame: base_footprint
* /imu_filter_madgwick/orientation_stddev: 0.05
* /imu_filter_madgwick/publish_tf: False
* /imu_filter_madgwick/use_mag: True
* /imu_filter_madgwick/use_magnetic_field_msg: True
* /imu_filter_madgwick/world_frame: enu
* /pointcloud_to_laserscan/angle_increment: 0.0174533
* /pointcloud_to_laserscan/angle_max: 3.14
* /pointcloud_to_laserscan/angle_min: -3.14
* /pointcloud_to_laserscan/concurrency_level: 1
* /pointcloud_to_laserscan/max_height: 1.0
* /pointcloud_to_laserscan/min_height: -1.0
* /pointcloud_to_laserscan/range_max: 40.0
* /pointcloud_to_laserscan/range_min: 0.0
* /pointcloud_to_laserscan/scan_time: 0.1
* /pointcloud_to_laserscan/target_frame: laser
* /pointcloud_to_laserscan/transform_tolerance: 0.001
* /pointcloud_to_laserscan/use_inf: True
* /rosdistro: melodic
* /rosserial_lino/baud: 57600
* /rosserial_lino/port: /dev/linobase
* /rosversion: 1.14.5
* /sweep_node/frame_id: laser
* /sweep_node/serial_baudrate: 115200
* /sweep_node/serial_port: /dev/linolidar
NODES
/
apply_calib (imu_calib/apply_calib)
base_footprint_to_base_link (tf2_ros/static_transform_publisher)
base_footprint_to_imu_link (tf2_ros/static_transform_publisher)
base_link_to_laser (tf2_ros/static_transform_publisher)
ekf_localization (robot_localization/ekf_localization_node)
imu_filter_madgwick (imu_filter_madgwick/imu_filter_node)
lino_base_node (linorobot/lino_base_node)
pointcloud_to_laserscan (pointcloud_to_laserscan/pointcloud_to_laserscan_node)
rosserial_lino (rosserial_python/serial_node.py)
sweep_node (sweep_ros/sweep_node)
auto-starting new master
process[master]: started with pid [26933]
ROS_MASTER_URI=http://<IP_ADDRESS>:11311
setting /run_id to 11079158-8627-11ea-ad52-12ed0dd4ce4d
process[rosout-1]: started with pid [26944]
started core service [/rosout]
process[rosserial_lino-2]: started with pid [26951]
process[apply_calib-3]: started with pid [26952]
process[imu_filter_madgwick-4]: started with pid [26953]
process[base_footprint_to_imu_link-5]: started with pid [26954]
[ INFO] [1587731298.663476387]: Starting ImuFilter
[ INFO] [1587731298.677961559]: Using dt computed from message headers
[ INFO] [1587731298.724266254]: Imu filter gain set to 0.100000
[ INFO] [1587731298.724941426]: Gyro drift bias set to 0.000000
[ INFO] [1587731298.725399145]: Magnetometer bias values: 0.000000 0.000000 0.000000
process[lino_base_node-6]: started with pid [26965]
process[base_footprint_to_base_link-7]: started with pid [26971]
process[ekf_localization-8]: started with pid [26972]
process[sweep_node-9]: started with pid [26983]
process[pointcloud_to_laserscan-10]: started with pid [26985]
process[base_link_to_laser-11]: started with pid [26987]
[ WARN] [1587731299.346980673]: Both imu0_differential and imu0_relative were set to true. Using differential mode.
[INFO] [1587731300.024167]: ROS Serial Python Node
[INFO] [1587731300.042407]: Connecting to /dev/linobase at 57600 baud
[INFO] [1587731302.160059]: Requesting topics...
Error: invalid response header checksum
[INFO] [1587731302.361959]: Note: publish buffer size is 512 bytes
[INFO] [1587731302.367370]: Setup publisher on raw_imu [lino_msgs/Imu]
[INFO] [1587731302.380240]: Note: subscribe buffer size is 512 bytes
[INFO] [1587731302.384288]: Setup subscriber on pid [lino_msgs/PID]
[INFO] [1587731302.396869]: Setup subscriber on cmd_vel [geometry_msgs/Twist]
[INFO] [1587731302.403159]: LINOBASE CONNECTED
[ERROR] [1587731302.409229]: Tried to publish before configured, topic id 125
[INFO] [1587731302.413351]: Requesting topics...
[ERROR] [1587731302.429750]: Tried to publish before configured, topic id 125
[INFO] [1587731302.433974]: Requesting topics...
[sweep_node-9] process has finished cleanly
log file: /home/jetson/.ros/log/11079158-8627-11ea-ad52-12ed0dd4ce4d/sweep_node-9*.log
[INFO] [1587731302.457929]: Setup publisher on raw_imu [lino_msgs/Imu]
[ERROR] [1587731302.488946]: Tried to publish before configured, topic id 125
[INFO] [1587731302.493745]: Requesting topics...
[INFO] [1587731302.538571]: Setup publisher on raw_vel [lino_msgs/Velocities]
[INFO] [1587731302.549725]: Setup publisher on raw_imu [lino_msgs/Imu]
[INFO] [1587731302.622646]: Setup publisher on raw_vel [lino_msgs/Velocities]
[INFO] [1587731302.638786]: Setup publisher on raw_imu [lino_msgs/Imu]
[ INFO] [1587731302.693795733]: Calibrating gyros; do not move the IMU
[ WARN] [1587731308.828438073]: Still waiting for data on topics /imu/data_raw and /imu/mag...
[ INFO] [1587731310.355790401]: Gyro calibration complete! (bias = [-0.056, 0.037, -0.012])
[ INFO] [1587731310.508411130]: First pair of IMU and magnetometer messages received.
Hi,
Can you share your launch files please? Thanks
Hi,
Sorry for late response. I configured the environment as you said but the error still occurs.
These are the launch files.
../linorobot/launch/include/laser.launch
<launch>
<!-- Run Linorobot compatible laser drivers. Takes reference from env var LINOLIDAR. ie. export LINOLIDAR=xv11 -->
<include file="$(find linorobot)/launch/include/lidar/sweep.launch" />
<!-- Publish static transform of the laser. Define your sensor offset here -->
<node pkg="tf2_ros" type="static_transform_publisher" name="base_link_to_laser" args="0.065 0 0.098 0 0 0 /base_link /laser"/>
</launch>
../linorobot/launch/include/lidar/sweep.launch (here I added the conversion between pc2 and laserscan)
<launch>
<!-- run sweep_node node -->
<node name="sweep_node" pkg="sweep_ros" type="sweep_node" output="screen">
<param name="serial_port" type="string" value="/dev/linolidar"/>
<param name="serial_baudrate" type="int" value="115200"/>
<param name="frame_id" type="string" value="laser"/>
</node>
<!-- run pointcloud_to_laserscan node -->
<node pkg="pointcloud_to_laserscan" type="pointcloud_to_laserscan_node" name="pointcloud_to_laserscan">
<remap from="cloud_in" to="pc2"/>
<rosparam>
target_frame: laser # Leave disabled to output scan in pointcloud frame
transform_tolerance: 0.001
min_height: -1.0
max_height: 1.0
angle_min: -3.14 # -M_PI/2
angle_max: 3.14 # M_PI/2
angle_increment: 0.0174533 # M_PI/360.0
scan_time: 0.1
range_min: 0.0
range_max: 40.0
use_inf: true
# Concurrency level, affects number of pointclouds queued for processing and number of threads used
# 0 : Detect number of cores
# 1 : Single threaded
# 2->inf : Parallelism level
concurrency_level: 1
</rosparam>
</node>
</launch>
Your launch files look good. Just omit the pointcloud_to_laserscan.
Also make sure that sweep publishes the data in the "laser" frame. Otherwise, you have to rename "laser" in static_transform_publisher to the correct frame the LIDAR is using.
The problem was not the lidar itself, which worked well, but the udev rules that I'd manually created. I observed that each time I plugged in a new lino device (LiDAR / Arduino Mega), each device was bound to the same ttyUSB* port.
So, I followed the https://github.com/linorobot/linorobot/issues/31#issuecomment-602075774 instructions to configure libgudev on my Ubuntu 18.04, and after the port configurations provided by the lino_udev script, I still had to manually configure the port for the Arduino Mega.
But it works now, thank you!
|
2025-04-01T06:39:26.099861
| 2018-02-13T18:49:12
|
296845869
|
{
"authors": [
"jonas-schulze",
"jsargiot"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7887",
"repo": "lins05/slackbot",
"url": "https://github.com/lins05/slackbot/pull/172"
}
|
gharchive/pull-request
|
Update default threading behavior
If a message was sent in a thread, answer in a thread per default.
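The proposed default could be sketched as below. This is a hypothetical helper illustrating the logic, not slackbot's actual API; the message-dict shape follows Slack's convention that threaded messages carry a `thread_ts` field:

```python
def build_reply(message, text):
    """If the incoming message was sent in a thread (it has a thread_ts),
    reply in that thread by default; otherwise reply in the channel."""
    reply = {"text": text}
    thread_ts = message.get("thread_ts")
    if thread_ts is not None:
        reply["thread_ts"] = thread_ts
    return reply
```

A reply built this way lands in the same thread as the message it answers, which is the behavior the PR makes the default.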
Hey! Looks good, how about adding a test for this? That would help merging it faster. Thanks!
Hey @jsargiot, thanks for the response. I won't be able to create a custom testing instance of Slack or configure Travis. It would be much easier for me (and everyone else who wants to contribute) if you could set up your Travis to run all the tests. If you are concerned about users who are not that familiar with git adding several commits to fix test errors, you could automatically squash all commits of a PR.
Have you read https://docs.travis-ci.com/user/pull-requests/?
|
2025-04-01T06:39:26.127775
| 2024-01-11T22:30:53
|
2077713607
|
{
"authors": [
"fila43",
"richm"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7888",
"repo": "linux-system-roles/postgresql",
"url": "https://github.com/linux-system-roles/postgresql/pull/72"
}
|
gharchive/pull-request
|
fix: Enable PostgreSQL stream selection for c9s and RHEL9
c9s/RHEL9 provides PostgreSQL 13 as the default system version as a classic RPM package. Alternative versions are provided as modular content. So, it requires a different installation procedure.
Issue Tracker Tickets (Jira or BZ if any): RHEL-5274
[citest]
lgtm - I can confirm that using postgresql_version: "16" correctly installs version 16 on centos-9.
Once the ci tests pass, we can merge
|
2025-04-01T06:39:26.144587
| 2022-09-08T15:24:43
|
1366562168
|
{
"authors": [
"codecov-commenter",
"mahmednabil109"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7889",
"repo": "linuxboot/contest",
"url": "https://github.com/linuxboot/contest/pull/151"
}
|
gharchive/pull-request
|
Increase healthcheck retries
Signed-off-by: Mohamed Abokammer<EMAIL_ADDRESS>
Codecov Report
Merging #151 (b815f94) into main (a091455) will increase coverage by 0.00%.
The diff coverage is n/a.
@@ Coverage Diff @@
## main #151 +/- ##
=======================================
Coverage 63.18% 63.18%
=======================================
Files 165 165
Lines 10383 10383
=======================================
+ Hits 6560 6561 +1
+ Misses 3096 3094 -2
- Partials 727 728 +1
Flag
Coverage Δ
e2e
48.95% <ø> (+0.02%)
:arrow_up:
integration
54.39% <ø> (+0.05%)
:arrow_up:
unittests
48.89% <ø> (-0.08%)
:arrow_down:
Flags with carried forward coverage won't be shown. Click here to find out more.
Impacted Files
Coverage Δ
pkg/runner/step_runner.go
88.84% <0.00%> (-0.40%)
:arrow_down:
pkg/jobmanager/jobmanager.go
78.02% <0.00%> (+1.09%)
:arrow_up:
Help us with your feedback. Take ten seconds to tell us how you rate us. Have a feature suggestion? Share it here.
|
2025-04-01T06:39:26.238079
| 2022-10-08T10:40:39
|
1401905076
|
{
"authors": [
"AMIR34A",
"kodsu"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7890",
"repo": "linvi/tweetinvi",
"url": "https://github.com/linvi/tweetinvi/issues/1189"
}
|
gharchive/issue
|
New Twitter Update : Tweet with Multi Media
Hello
In the new update, people can share a tweet with multiple media (for example photo, video, and GIF).
Tweets.GetTweetAsync returns only the first media item in a tweet and does not show the other media in the Media property.
Can I fix it, or should you correct it?
Thanks a lot.
Please see #1198
Please see #1198
Thanks, but there isn't a Variants property in the TweetsV2.GetTweetAsync method.
How can I fix it?
Thank you.
My pull request hasn't been merged yet.
What I did was to clone the repo, make the fix for the variants, and use the fixed DLL rather than the NuGet one.
Ok.
Thanks🙏🏼
Hello
I want to know: can I have the DLL that you fixed before the pull request is merged?
Thank you
I cloned the repo and changed the files (your commits), but the variants are null;
Do you know how to fix it?
Thanks.
|
2025-04-01T06:39:26.239875
| 2020-10-08T23:35:56
|
717739066
|
{
"authors": [
"billgeo"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7891",
"repo": "linz/geospatial-data-lake",
"url": "https://github.com/linz/geospatial-data-lake/issues/31"
}
|
gharchive/issue
|
Make Geospatial Data Lake repo public
There are organisations outside of LINZ that are interested in what we are doing. We should make the repo public as soon as possible. I think it's fine to do this while it's a work in progress as long as we indicate that somehow.
Tasks
[x] LGTM
[x] check source code for non public content
[x] open and close tickets
Thanks @SPlanzer. Closing this issue as done.
|
2025-04-01T06:39:26.244376
| 2024-01-30T13:02:23
|
2107814948
|
{
"authors": [
"codex-krcg",
"lionel-panhaleux"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7892",
"repo": "lionel-panhaleux/krcg",
"url": "https://github.com/lionel-panhaleux/krcg/issues/768"
}
|
gharchive/issue
|
Theft of Vitae
text: If both combatants strike with Theft of Vitae while one of the vampires is at 0 blood, no blood is stolen from the empty vampire (but blood still moves to the empty vampire).
link: https://groups.google.com/g/rec.games.trading-cards.jyhad/c/BHeGvhd4yEA/m/SdKih5fV34wJ
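The ruling above could be modeled as a small sketch (hypothetical, not KRCG's actual rules engine; the steal amount is a parameter, not a claim about the card's printed value):

```python
def simultaneous_theft(blood_a, blood_b, amount=2):
    """Both vampires strike with a blood-stealing effect simultaneously.
    Each steals up to `amount` from the opponent's *current* blood, so an
    empty vampire yields nothing, but it still receives the blood stolen
    from its opponent."""
    stolen_from_b = min(amount, blood_b)
    stolen_from_a = min(amount, blood_a)
    return (blood_a - stolen_from_a + stolen_from_b,
            blood_b - stolen_from_b + stolen_from_a)
```

With one vampire at 0 blood, no blood comes off the empty vampire, yet blood still moves onto it, matching the ruling.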
Keeping it for after refactoring; will need to be applied to all blood-stealing cards:
Form of the Cobra [pro]
Theft of Vitae
Donnybrook [ser]
Call the Lamprey
Tongue of the Serpent
Veiled Sight [CHI]
Hunger of Marduk
Drain Essence
Diversion [tha]
Absorb the mind [myt], [MYT]
Kraken's Kiss [VIC]
200078|Anastasz di Zagreb (G3)
200528|Goratrix (G2)
200976|Menele (G3 ADV) [MERGED]
201345|Tariq, The Silent (G2 ADV)
201517|Lord Leopold Valdemar (G5)
|
2025-04-01T06:39:26.298639
| 2024-04-05T10:53:21
|
2227706954
|
{
"authors": [
"filipelautert",
"mpvvliet",
"rberezen"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7893",
"repo": "liquibase/liquibase",
"url": "https://github.com/liquibase/liquibase/pull/5774"
}
|
gharchive/pull-request
|
Prevent spurious SET SEARCH_PATH SQL statements for Postgres during update-sql command. Fixes #5316
Impact
[X] Bug fix (non-breaking change which fixes expected existing functionality)
[ ] Enhancement/New feature (adds functionality without impacting existing logic)
[ ] Breaking change (fix or feature that would cause existing functionality to change)
Description
Prevent resetting the search path in Postgres after a rollback if we are running in a mode that does not update the database to avoid spurious SET SEARCH_PATH SQL statements.
The logic in DatabaseUtils.initializeDatabase for Postgres expects changes to the SEARCH_PATH to be persisted in order to detect when the SEARCH_PATH is already correct(ed). However, in update-sql mode these changes are not executed, causing the detection mechanism to fail and resulting in extra SET SEARCH_PATH statements.
Things to be aware of
I'm aware of the other change in review that will fix this issue, but that is a bigger change and might take a long time to be merged. This could be a quick win.
Things to worry about
Additional Context
Hi @mpvvliet ! Your PR handles the same issue as https://github.com/liquibase/liquibase/pull/5444 and also fixes https://github.com/liquibase/liquibase/issues/5316, just aiming at update-sql, and it does not modify the user database settings using alter database session. What do you think about the solution proposed in the other PR?
@filipelautert I like the solution in the PR #5444 because it avoids the need to re-apply the SEARCH_PATH changes. However since that PR is bigger and seems stuck, I proposed this smaller one.
Happy to close this one if the other one has a shot at getting finalised soon.
As PR #5444 is still pending some tests, let's move this one ahead; if the other gets merged we can revert this one here. Thanks @mpvvliet !
@filipelautert @MalloD12 the functional test fix should be merged with https://github.com/liquibase/liquibase-pro-tests/pull/1445
|
2025-04-01T06:39:26.300284
| 2024-04-23T14:20:31
|
2259019304
|
{
"authors": [
"zanerock"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7894",
"repo": "liquid-labs/command-line-documentation",
"url": "https://github.com/liquid-labs/command-line-documentation/issues/9"
}
|
gharchive/issue
|
Simplify command titles to just the command (rather than the whole signature)
Overview
Right now we have command section headers like 'executable command ', etc. This is pretty wordy, and we want to swap that so the section header is just the command name, followed by the full signature as text (but not a heading). We do, however, want to retain the full command signature in the anchor ID in order to avoid ambiguity in a case like 'exec foo' and 'exec bar foo', which would have the same headings, but in a different context.
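The scheme described above could be sketched as follows. This is a hypothetical illustration, not the repo's actual implementation; in particular, treating the last token of the signature as the command name and the slug style are assumptions:

```python
import re

def heading_and_anchor(signature):
    """Heading shows just the command name; the anchor ID keeps the full
    signature so 'exec foo' and 'exec bar foo' (same heading: 'foo')
    remain distinguishable."""
    command = signature.split()[-1]  # assumed: last token is the command name
    anchor = re.sub(r"[^a-z0-9]+", "-", signature.lower()).strip("-")
    return command, anchor
```

Two commands with identical headings then still get distinct anchors derived from their full signatures.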
Work for this issue will begin on branch work-liquid-labs/command-line-documentation/9.
|
2025-04-01T06:39:26.304163
| 2024-11-16T04:22:43
|
2663743514
|
{
"authors": [
"bjneff13",
"bnizette-li"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7895",
"repo": "liquidinstruments/moku-examples",
"url": "https://github.com/liquidinstruments/moku-examples/pull/6"
}
|
gharchive/pull-request
|
Brian Neff added MCC clock divider example to basic package
Ben, I just wanted to try a very simple example to get this started. If this all works, I'll start pushing more.
Merged in a new PR #10
|
2025-04-01T06:39:26.336659
| 2021-06-18T14:13:22
|
924948504
|
{
"authors": [
"lisphilar"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7896",
"repo": "lisphilar/covid19-sir",
"url": "https://github.com/lisphilar/covid19-sir/issues/833"
}
|
gharchive/issue
|
[New/Revise] improve parameter estimation performance with constant liar and shorter timeout_iteration
Summary of this new feature
Improve performance (estimation score and runtime) of parameter estimation with the following solutions.
improve estimation score with constant liar
optuna provides a new option constant_liar of TPESampler at version 2.8.0. The Constant Liar heuristic reduces search effort, avoiding trials which try similar parameter sets. Please refer to their detailed explanations and discussions in the Optuna version 2.8.0 release note. It will be great for CovsirPhy users to use constant_liar=True if Optuna version 2.8.0 is available in our environments.
Improve runtime with shorter timeout_iteration
At version 2.20.3, Scenario.estimate(timeout_iteration=5) is the default value. The estimation score (RMSLE as default) is calculated every five seconds and, if the score has not changed for tail_n=4 iterations, estimation is stopped and the best parameter set is returned. However, with my tests, timeout_iteration appears to be a bottleneck. Many phases run for the full 5 seconds. (i.e. when timeout_iteration is shorter, runtime may be shorter.)
Note regarding constant liar:
constant_liar argument cannot be applied with Optuna version 2.7.0 or older.
https://gist.github.com/lisphilar/6440b5d69c4984bb0b34ede8c8ebcca3
TypeError means we use Optuna version 2.7.0 or older. When `covsirphy` gets a `TypeError` with the `constant_liar` argument, it should remove the argument and retry creating `TPESampler`.
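The fallback described above can be sketched as follows. This is a minimal illustration of the try/except pattern only: `make_sampler` is a hypothetical helper name, and a stand-in class is used instead of Optuna's real `TPESampler` so the sketch runs without Optuna installed.

```python
def make_sampler(sampler_cls, **kwargs):
    """Construct a sampler, dropping `constant_liar` if the installed
    version rejects it (Optuna 2.7.0 and older raise TypeError for
    the unknown keyword argument)."""
    try:
        return sampler_cls(**kwargs)
    except TypeError:
        kwargs.pop("constant_liar", None)
        return sampler_cls(**kwargs)


class LegacySampler:
    """Stand-in for a pre-2.8.0 TPESampler that has no `constant_liar`."""
    def __init__(self, seed=None):
        self.seed = seed


# The unknown keyword is dropped and construction is retried.
sampler = make_sampler(LegacySampler, seed=42, constant_liar=True)
print(sampler.seed)
```

With the real library, `sampler_cls` would be `optuna.samplers.TPESampler`; on 2.8.0+ the first attempt simply succeeds with `constant_liar=True`.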
At version CovsirPhy 2.3.0 with Italy data (as of 18Jun2021), example/scenario_analysis.py and 8 CPUs at my local environment, parameter estimation completed with RMSLE=0.0795 in 2 min 22 sec.
(Please ignore accuracy of the last phase of Forecast scenario because this is a forecasted future phase.)
I compared the performances, changing constant_liar and timeout_iteration with Italy data as of 18Jun2021, my local environment and CovsirPhy version 2.20.3-theta. I used only 1 CPU with n_jobs=1 to get robust values of runtime as the total value of all phases. Parameter estimation of each phase was done sequentially. Code is as follows.
import covsirphy as cs
loader = cs.DataLoader()
jhu_data = loader.jhu()
snl = cs.Scenario(country="Italy")
snl.register(jhu_data)
snl.trend()
snl.estimate(cs.SIRF, n_jobs=1)
print(f"RMSLE: {snl.score(metric='RMSLE')}")
Results are here.
| RMSLE (runtime) | constant_liar=False | constant_liar=True |
| --- | --- | --- |
| timeout_iteration=5 | 0.06810 (13 min 22 sec) | 0.06868 (17 min 42 sec) |
| timeout_iteration=4 | 0.06812 (14 min 03 sec) | 0.06869 (14 min 07 sec) |
| timeout_iteration=3 | 0.06808 (10 min 10 sec) | 0.06871 (10 min 31 sec) |
| timeout_iteration=2 | 0.06811 (07 min 55 sec) | 0.06865 (07 min 11 sec) |
| timeout_iteration=1 | 0.06806 (03 min 21 sec) | 0.06901 (03 min 53 sec) |
I expected constant_liar=True and timeout_iteration=1 would show the best performance, but these results indicated constant_liar=False and timeout_iteration=1. I will create a pull request for constant_liar=False and timeout_iteration=1. These default values may be changed later if we get different results with the other countries' data.
With #833, timeout_iteration=1 will be the default value for Scenario.estimate(). constant_liar=False stays as-is explicitly.
Later, I will add constant_liar=False as an argument of Scenario.estimate(), if necessary.
With #835, users can select whether to use constant liar or not with Scenario.estimate(<model>, constant_liar=False) (default).
I compared RMSLE scores and runtime of constant_liar=False (default at this time) and constant_liar=True with some countries' datasets. I used example/scenario_analysis.py with 8 CPUs.
For Netherlands and Russia, it will be better to use Scenario.estimate(cs.SIRF, constant_liar=True).
Runtime of parameter estimation will be much shorter with timeout_iteration=1 (default). The version 2.21.0 release was planned for Jul2021, but this should be moved up to Jun2021: tomorrow or within some days.
|
2025-04-01T06:39:26.349010
| 2023-11-03T11:09:47
|
1975951638
|
{
"authors": [
"AndrewJakubowicz",
"VandeurenGlenn",
"augustjk"
],
"license": "bsd-3-clause",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7897",
"repo": "lit/lit.dev",
"url": "https://github.com/lit/lit.dev/issues/1249"
}
|
gharchive/issue
|
Can't run lit.dev on windows
When trying to run the build, the script fails when running the fonts:manrope script:
rm (does not exist on Windows)
cp (does not exist on Windows)
mkdir -p (invalid syntax on Windows)
I fixed those by changing the command to node scripts/fonts.js
After that, some error with samples: (D:\Workspace\lit\lit.dev\packages\lit-dev-content\samples\js_check-code-helpers.ts) is in the wrong path, or the script is searching for the wrong one
Thank you for filing this issue! Could _check-code-helpers.js not have been built into its expected location?
I only have occasional access to a Windows machine, but would happily review PRs that move us towards building on Windows.
Hi @AndrewJakubowicz, thanks!
I'm back to windows all the time for a long time now, spilled monster on the MBP 😢
The _check-code-helpers.js file is in the samples dir while the script expects it to be in samples/js
But in generate-js-samples.ts I can see that the js folder should not be included in the glob passed to TS, so maybe that is the problem.
https://github.com/lit/lit.dev/pull/1255
Hmm, now I'm getting a rollup error
Nobody else is getting that?
No clue
Thank you for raising this! We definitely should try and make this repo buildable in Windows.
In the meantime, I have found WSL2 to be quite good as a dev environment on Windows.
@augustjk
True, WSL is great, but it's also a hassle and somehow breaks from time to time. IMO WSL is awesome when you're cross-compiling etc., but for a Node project it's quite simple to get it working; the issues are really small, like using the correct separator for paths.
Only issue is with eleventy now all the other stuff I already fixed (if Mac and Linux isn't broken now).
WSL doesn't work either; I just installed a new distro and I think it tries to use the npm installed on the Windows side.
|
2025-04-01T06:39:26.366650
| 2023-09-25T06:24:34
|
1910756659
|
{
"authors": [
"Guldoman",
"zen0bit"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7898",
"repo": "lite-xl/lite-xl-plugins",
"url": "https://github.com/lite-xl/lite-xl-plugins/pull/306"
}
|
gharchive/pull-request
|
ipc
(open in new tab instead of new window)
I was looking for this function, but had to ask on Discord.
Not obvious from README.md...
The README is generated from manifest.json so that's the one that needs to be updated.
I think something like Adds inter-process communication support, single-instance mode and tab drag and drop between instances. would be more explanatory.
|
2025-04-01T06:39:26.400258
| 2017-04-27T23:49:32
|
224943469
|
{
"authors": [
"bearpig",
"losh11"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7899",
"repo": "litecoin-association/LoafWallet",
"url": "https://github.com/litecoin-association/LoafWallet/issues/23"
}
|
gharchive/issue
|
iCloud backup Wallet seed
As Loaf/Bread Wallet uses something on the device as a seed for the wallet creation, do you know if replacing the processor in the phone will maintain whatever is used for this seed (Device ID/IMEI/serial number)?
I've had a phone die on me, and don't have the recovery phrase for the wallet. The phone is currently stuck in recovery mode and is returning an error that is consistent with a processor failure. I can get the processor replaced which would allow the phone to boot and for me to recover from iCloud but if it's not going to rebuild the same wallet, then I won't go ahead with it.
Any help is appreciated.
Do you have the 12 word seed (passphrase) that was generated when you first used LoafWallet? With that 12 word seed, you can restore your wallet.
If you are able to get someone to replace your iPhone's processor and unlock your phone without restoring, then there would be a somewhat high chance that you are able to recover your wallet. At any point, if you end up restoring your device, you will lose all of your coins, as LoafWallet does not back up your seed/private keys to iCloud (for obvious reasons).
If you do end up recovering your wallet, please make sure that you go into settings and then copy down your 12 word seed, just in case anything like this happens in the future.
Hi losh11,
Thanks for the quick reply.
I'd assumed that the iCloud backups would work in a similar way to Bread wallet's method (as stated here).
|
2025-04-01T06:39:26.407260
| 2023-04-09T09:08:59
|
1659807451
|
{
"authors": [
"josikie",
"kcw-grunt"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7900",
"repo": "litecoin-foundation/litewallet-android",
"url": "https://github.com/litecoin-foundation/litewallet-android/pull/138"
}
|
gharchive/pull-request
|
🚀[ Release v.2.8.4] Merge into Main
Overview
This is the last major release prior to work on Newborn, the refactored Litewallet Android. While there are many requests to improve the current codebase, it is actually 7 years of patching and rework and the cost / time of maintenance is no longer worth it.
We looked at the most important features needed and addressed them in this release.
They are:
Bech32 support for sending to ltc1 addresses
Allow user to see their 12 words / seed phrase
User preferences for sync vs anonymity (false positives rate)
Clips
Note some views
Show seed phrase
Add user preference False Positives
Wow, looks nice! @kcw-grunt Why is the body of onClick empty in the code below?
if (BuildConfig.VERSION_NAME == "v2.8.4") {
Snackbar.make(parentLayout,
R.string.release_notes,
Snackbar.LENGTH_INDEFINITE).setAction(R.string.Webview_dismiss, new View.OnClickListener() {
@Override
public void onClick(View view) {
}
})
.setActionTextColor(getResources().getColor(android.R.color.holo_red_light ))
.show();
}
The comment said to show: what does 'false' do in items.add on code lines 130-134 of file SettingsActivity.java?
Why do we need to clear the DB table to enable Bech32 features?
Thanks @josikie ...you are too kind.
Why do we need to clear DB table to enable Bech32 features?:
The legacy db used a different schema for ltc addresses. So, adding new addresses (ltc1) would fail in that old db. One of the steps @vsima added was to have the device wipe the existing db and add transactions to the new schema, so now sending to L, M and ltc1 addresses is readable by the new schema.
The comment said to show, what 'false' does on items.add on code line 130-134 file SettingsActivity.java?
This is just an implementation detail in Android/Java settings tables. So, it distinguishes a table item (section: false) from a table section (section: true). Truth is I just used the existing design and added the item for Show my seed.
Thank you for the explanation! @kcw-grunt
|
2025-04-01T06:39:26.409412
| 2020-04-20T09:54:19
|
603103815
|
{
"authors": [
"antho1404"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7901",
"repo": "liteflow-labs/liteflow-js",
"url": "https://github.com/liteflow-labs/liteflow-js/pull/39"
}
|
gharchive/pull-request
|
Replace deploy:service/deploy:process command with deploy
Dependency: https://github.com/liteflow-labs/liteflow-js/pull/38
Add liteflow deploy command that deploys all processes in a directory based on the liteflow framework structure.
All process-related services will automatically be deployed and started
Closing in favor of #42 that already includes these changes
|
2025-04-01T06:39:26.434148
| 2015-09-09T19:52:06
|
105671703
|
{
"authors": [
"jbardin"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7902",
"repo": "litl/galaxy",
"url": "https://github.com/litl/galaxy/issues/267"
}
|
gharchive/issue
|
Signal AWS when applications are deployed
We currently can't do rolling upgrades of the galaxy image because the apps often take longer to deploy than the instances.
Galaxy needs to look up all applications that should be running on a host, and notify the ASG when deployment is complete.
The ASG can have an UpdatePolicy, or the stack can contain a CreationPolicy to define when an instance is ready. Galaxy can use the API or the cfn-signal script for notification.
Since we've removed the cloudformation dependency from galaxy, this should be implemented in such a way that it's not coupled to AWS.
A callback command when the host is up should suffice.
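The callback approach above could be sketched like this. This is a minimal, AWS-agnostic illustration, not Galaxy's actual implementation (Galaxy is written in Go): `wait_and_signal` and `check_apps_running` are hypothetical names, and the callback is just a shell command, which on AWS could be `cfn-signal` but need not be.

```python
import subprocess
import time


def wait_and_signal(check_apps_running, callback_cmd, timeout=600, interval=5):
    """Poll until all expected apps report running, then invoke the
    user-configured callback command.

    `check_apps_running` is any zero-argument callable that returns
    True once every expected application on the host is up.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if check_apps_running():
            # Decoupled from AWS: readiness notification is just a
            # shell command supplied by configuration.
            return subprocess.run(callback_cmd, shell=True, check=True)
        time.sleep(interval)
    raise TimeoutError("applications did not become ready in time")


# Example: pretend the apps come up on the second poll.
state = {"polls": 0}

def fake_check():
    state["polls"] += 1
    return state["polls"] >= 2

result = wait_and_signal(fake_check, "true", timeout=30, interval=0)
print(result.returncode)
```

On AWS, `callback_cmd` would be something like a `cfn-signal` invocation tied to the stack's CreationPolicy; elsewhere it can be any script.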
|
2025-04-01T06:39:26.462375
| 2024-08-10T21:07:09
|
2459290571
|
{
"authors": [
"luisdavim"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7904",
"repo": "liuchengxu/vim-clap",
"url": "https://github.com/liuchengxu/vim-clap/issues/1088"
}
|
gharchive/issue
|
Compile error on Termux (Android)
OS: Android (Termux)
vim-clap version: Latest from main.
Describe the bug
I get a compilation error when trying to upgrade:
error[E0282]: type annotations needed for `Box<_>`
--> /data/data/com.termux/files/home/.cargo/registry/src/index.crates.io-6f17d22bba15001f/time-0.3.34/src/format_description/parse/mod.rs:83:9
|
83 | let items = format_items
| ^^^^^
...
86 | Ok(items.into())
| ---- type must be known at this point
|
help: consider giving `items` an explicit type, where the placeholders `_` are specified
|
83 | let items: Box<_> = format_items
| ++++++++
Compiling utils v0.1.54 (/data/data/com.termux/files/home/.vim/bundle/vim-clap/crates/utils)
For more information about this error, try `rustc --explain E0282`.
error: could not compile `time` (lib) due to 1 previous error
warning: build failed, waiting for other jobs to finish...
To Reproduce
Steps to reproduce the behavior:
Just ran Plugupdate, also tried cargo build --release --target aarch64-linux-android
running cargo update -p time seems to solve the issue.
|
2025-04-01T06:39:26.480073
| 2023-04-04T03:46:56
|
1653122452
|
{
"authors": [
"AIhasArrived",
"Gillwindy",
"NijiharaTsubasa",
"chenxvb",
"gak123",
"liujing04"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7905",
"repo": "liujing04/Retrieval-based-Voice-Conversion-WebUI",
"url": "https://github.com/liujing04/Retrieval-based-Voice-Conversion-WebUI/issues/9"
}
|
gharchive/issue
|
pip install -r requirements.txt errors out
Is this intentional or accidental? ←_← Why is googleads in there?
Collecting googleads==3.8.0
Using cached https://mirrors.aliyun.com/pypi/packages/fa/f8/f84ad483afaa29bfc807ab6e8a06b6712ee494a2aad7db545865655bdf99/googleads-3.8.0.tar.gz (23 kB)
Preparing metadata (setup.py) ... error
error: subprocess-exited-with-error
× python setup.py egg_info did not run successfully.
│ exit code: 1
╰─> [1 lines of output]
error in googleads setup command: use_2to3 is invalid.
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
error: metadata-generation-failed
× Encountered error while generating package metadata.
╰─> See above for output.
note: This is an issue with the package mentioned above, not pip.
hint: See above for details.
After removing googleads:
Collecting ruamel-yaml-conda
Using cached https://mirrors.aliyun.com/pypi/packages/94/ef/31bfa8456e01ff1 Preparing metadata (setup.py) ... error error: subprocess-exited-with-error
× python setup.py egg_info did not run successfully.
│ exit code: 1
╰─> [19 lines of output]
Traceback (most recent call last):
File "/tmp/pip-install-26tjscuc/ruamel-yaml-conda_ca1bc85899634c92a2eb8c802d6396b0/ruamel_yaml/__init__.py", line 21, in <module>
from .main import * # NOQA File "/tmp/pip-install-26tjscuc/ruamel-yaml-conda_ca1bc85899634c92a2eb8c802d6396b0/ruamel_yaml/main.py", line 12, in <module>
import ruamel.yaml ModuleNotFoundError: No module named 'ruamel'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<string>", line 2, in <module>
File "<pip-setuptools-caller>", line 34, in <module>
File "/tmp/pip-install-26tjscuc/ruamel-yaml-conda_ca1bc85899634c92a2eb8c802d6396b0/setup.py", line 14, in <module>
import ruamel_yaml # NOQA File "/tmp/pip-install-26tjscuc/ruamel-yaml-conda_ca1bc85899634c92a2eb8c802d6396b0/ruamel_yaml/__init__.py", line 23, in <module>
from ruamel_yaml.main import * # NOQA File "/tmp/pip-install-26tjscuc/ruamel-yaml-conda_ca1bc85899634c92a2eb8c802d6396b0/ruamel_yaml/main.py", line 12, in <module> import ruamel.yaml ModuleNotFoundError: No module named 'ruamel'
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
error: metadata-generation-failed
× Encountered error while generating package metadata.
╰─> See above for output.
note: This is an issue with the package mentioned above, not pip.
hint: See above for details.
After manually running pip install ruamel.yaml and conda install ruamel.yaml (both commands found via a Google search):
I can't figure it out, help!
Collecting ruamel-yaml-conda Using cached https://mirrors.aliyun.com/pypi/packages/94/ef/31bfa8456e01ff13d8d98bdbc80ab2e592c830e52ccaff62c35d5f890357/ruamel_yaml_conda-0.15.80.tar.gz (202 kB)
Preparing metadata (setup.py) ... error error: subprocess-exited-with-error
× python setup.py egg_info did not run successfully.
│ exit code: 1
╰─> [12 lines of output]
Traceback (most recent call last):
File "<string>", line 2, in <module>
File "<pip-setuptools-caller>", line 34, in <module>
File "/tmp/pip-install-nsnxrkh6/ruamel-yaml-conda_cfce9d13e9ae4a85b8c11d278dffd58b/setup.py", line 35, in <module>
ext_modules=cythonize(extensions), File "/usr/local/lib/python3.8/dist-packages/Cython/Build/Dependencies.py", line 970, in cythonize
module_list, module_metadata = create_extension_list( File "/usr/local/lib/python3.8/dist-packages/Cython/Build/Dependencies.py", line 816, in create_extension_list
for file in nonempty(sorted(extended_iglob(filepattern)), "'%s' doesn't match any files" % filepattern):
File "/usr/local/lib/python3.8/dist-packages/Cython/Build/Dependencies.py", line 114, in nonempty
raise ValueError(error_msg) ValueError: 'ruamel_yaml/ext/_ruamel_yaml.pyx' doesn't match any files
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
error: metadata-generation-failed
× Encountered error while generating package metadata.
╰─> See above for output.
note: This is an issue with the package mentioned above, not pip.
hint: See above for details.
Anyway, I hope there can be instructions for setting up the environment manually, or at least a clear statement of the required Python version and the like. Not all users are programmers, and not all programmers understand AI (like me...)
For ruamel yaml in conda, you can try: conda install -c conda-forge ruamel_yaml
I skipped installing all of the dependencies that errored, and in the end it reported that torch 2.0.0 could not be found. I have no idea anymore which versions in my environment are wrong (I rented a cloud server, since I have no GPU locally).
I'll make do with the one-click Colab package for now; Colab 404s and could be cut off at any time, so it's still inconvenient.
For Windows you can follow the diff-svc deployment guide; that's what I used to deploy, and I simply removed the packages that wouldn't install.
https://diff-svc.gitbook.io/the-beginners-guide-to-diff-svc/setting-up/setting-up-the-environment
Linux...
My fault: requirements.txt was exported from a Python 3.8 environment, so there are some odd dependency errors.
Later I'll try setting it up again under 3.9, based on the Windows one-click package.
I tried pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118 to install torch 2.0. After that it can start, but it keeps erroring with FileNotFoundError: [Errno 2] No such file or directory: 'weights/[]'; everything else is fixed.
Hello @NijiharaTsubasa @Gillwindy @chenxvb @ricecakey06
I have spent 3 entire weeks trying to find a way to clone voices correctly and I still did not get good results; I am so tired of it. I am contacting you because I saw you had old comments under old issues; you have probably found better ways since then? Could you save me from my misery and direct me towards a method, a repo, a tutorial or anything that helps get to the point where I can actually clone a voice that sounds similar to the original voice, please? Help my soul lol. Really.
|
2025-04-01T06:39:26.483106
| 2023-03-03T16:05:56
|
1608895351
|
{
"authors": [
"brappier",
"liuliu"
],
"license": "BSD-3-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7906",
"repo": "liuliu/s4nnc",
"url": "https://github.com/liuliu/s4nnc/issues/7"
}
|
gharchive/issue
|
An easy interface to add custom accelerators / backends?
There should be some easy framework, so that I can easily add my ops for a custom accelerator / framework.
I wanted to see if I can easily port it to a proprietary chip aiming to outcompete the M1 GPU.
I really like TF's framework for adding custom backends, but it's too big.
If you can sign NDAs and stuff, I can share more details.
Thanks for the offering! Sorry, I don't sign NDAs without knowing more details. (i.e. if the NDA is about custom accelerators alone, we can discuss more in private channels).
Also, you can check out tinygrad: https://github.com/geohot/tinygrad which supposedly should be easy to add custom backends.
|
2025-04-01T06:39:26.557554
| 2021-08-11T21:34:25
|
967502682
|
{
"authors": [
"Deathklok-97"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7908",
"repo": "livehelpnow/tds",
"url": "https://github.com/livehelpnow/tds/pull/125"
}
|
gharchive/pull-request
|
Working version of tvp to stored proc
Updated dependencies, merged with dizzy:masters and confirmed working.
Confirmed working against SQL Server 2019 in a Windows environment.
Updated version of #49
|
2025-04-01T06:39:26.558350
| 2024-07-18T01:13:06
|
2414940497
|
{
"authors": [
"davidzhao",
"zuyou-alt"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7909",
"repo": "livekit/livekit",
"url": "https://github.com/livekit/livekit/issues/2875"
}
|
gharchive/issue
|
How to build RPM packages
Due to the requirements of our business scenario, we need to package this component as an RPM.
sorry, we do not currently offer official RPM distributions. You can use GoReleaser to build your own
|
2025-04-01T06:39:26.562191
| 2019-02-21T01:38:20
|
412708322
|
{
"authors": [
"adamsoffer",
"iameli"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7910",
"repo": "livepeer/livepeerjs",
"url": "https://github.com/livepeer/livepeerjs/issues/340"
}
|
gharchive/issue
|
Explorer round count sometimes shows up negative
Describe the bug (required)
The "rounds remaining" indicator displays incorrect data while the data is still loading from Infura. I believe I introduced this with https://github.com/livepeer/livepeerjs/pull/333, as that switched up the render logic on some of the GraphQL data.
Expected behavior (required)
This seems to show up after a few seconds.
To Reproduce (required)
Steps to reproduce the behavior:
Boot up the explorer.
Immediately click on the "round" thing in the upper left.
That'll show up.
Closing since the classic explorer was sunsetted.
|
2025-04-01T06:39:26.564716
| 2023-03-13T19:45:15
|
1622150254
|
{
"authors": [
"AZholtkevych",
"carson-katri"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7911",
"repo": "liveview-native/liveview-client-swiftui",
"url": "https://github.com/liveview-native/liveview-client-swiftui/issues/613"
}
|
gharchive/issue
|
Modifier -> SwiftUI -> Documents: renameAction(_:)
Doc:
https://developer.apple.com/documentation/swiftui/view/renameaction(_:)-6lghl
[x] Swift implementation
[x] Elixir implementation
https://developer.apple.com/documentation/swiftui/view/renameaction(_:)-324yw
[x] Swift implementation
[x] Elixir implementation
Implemented in #326
|
2025-04-01T06:39:26.566722
| 2021-08-17T12:36:55
|
972643073
|
{
"authors": [
"PhiloNL",
"sandy15d"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7912",
"repo": "livewire-ui/spotlight",
"url": "https://github.com/livewire-ui/spotlight/issues/39"
}
|
gharchive/issue
|
RegisterCommandIf Not working
public function boot()
{
    Spotlight::registerCommandIf(Auth::check() && Auth::user()->role == 'A', Logout::class);
}
@sandy15d please use the shouldBeShown method on the command when working with dependencies that need to be resolved:
public function shouldBeShown(Request $request): bool
{
return $request->user()->role == 'A';
}
More info: https://github.com/livewire-ui/spotlight#register-commands
|
2025-04-01T06:39:26.584795
| 2023-12-15T07:27:09
|
2043053523
|
{
"authors": [
"lixin4ever",
"youngfish42"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7913",
"repo": "lixin4ever/Conference-Acceptance-Rate",
"url": "https://github.com/lixin4ever/Conference-Acceptance-Rate/pull/81"
}
|
gharchive/pull-request
|
update ICASSP'24 info
Source of information on the number of accepted papers: https://cmsworkshops.com/ICASSP2024/papers/accepted_papers.php
Source of information on paper acceptance rate and number of valid submissions: official notification email.
The numbers from different channels are slightly different.
Thanks.
|
2025-04-01T06:39:26.589493
| 2017-02-25T08:42:19
|
210221537
|
{
"authors": [
"liyanlong"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7914",
"repo": "liyanlong/nuxt-egg",
"url": "https://github.com/liyanlong/nuxt-egg/issues/3"
}
|
gharchive/issue
|
NODE_ENV=production egg-bin dev Instead of nuxt build
Use NODE_ENV=production egg-bin dev to run the server.
It has two steps:
nuxt build
egg start
Done.
|
2025-04-01T06:39:26.636979
| 2016-01-21T11:15:07
|
127901726
|
{
"authors": [
"paween1980"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7916",
"repo": "lloeki/ex-mode",
"url": "https://github.com/lloeki/ex-mode/issues/126"
}
|
gharchive/issue
|
dw [delete word] at last word of line should not shift a line below
I faced a problem when I use the command 'dw' [delete word] on the last word of a line. The line below will be shifted onto the current line.
[example document]
line1,col2,col3
line2,col2,col3
line3,col2,col3
[after dw at 'col3' on line1]
line1,col2,line2,col2,col3
line3,col2,col3
[expected result]
line1,col2,
line2,col2,col3
line3,col2,col3
Oh sorry. I think I should post this issue to vim-mode.
|
2025-04-01T06:39:26.822633
| 2016-05-19T05:56:22
|
155659716
|
{
"authors": [
"lapin-b",
"lmatteis"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7917",
"repo": "lmatteis/peer-tweet",
"url": "https://github.com/lmatteis/peer-tweet/issues/11"
}
|
gharchive/issue
|
Cannot get Peer-Tweet running
Everything's in the title.
I followed the instructions on installing. I ran
npm install
npm install --save-dev electron-rebuild
./node_modules/.bin/electron-rebuild
After launching with 2 separate instances of terminal with these commands
npm run hot-server
npm run start-hot
When I start the second command line, I get this output
><EMAIL_ADDRESS>start-hot /home/l4p1n/peer-tweet
> cross-env HOT=1 NODE_ENV=development electron ./
(electron) companyName is now a required option to crashReporter.start
Error opening app
The app provided is not a valid Electron app, please read the docs on how to write one:
https://github.com/atom/electron/tree/v0.36.12/docs
Error: Cannot find module 'electron-debug'
npm ERR! Linux 4.2.0-36-generic
npm ERR! argv "/usr/bin/nodejs" "/usr/bin/npm" "run" "start-hot"
npm ERR! node v4.4.4
npm ERR! npm v3.8.9
npm ERR! code ELIFECYCLE
npm ERR<EMAIL_ADDRESS>start-hot: `cross-env HOT=1 NODE_ENV=development electron ./`
npm ERR! Exit status 1
npm ERR!
npm ERR! Failed at the<EMAIL_ADDRESS>start-hot script 'cross-env HOT=1 NODE_ENV=development electron ./'.
npm ERR! Make sure you have the latest version of node.js and npm installed.
npm ERR! If you do, this is most likely a problem with the PeerTweet package,
npm ERR! not with npm itself.
npm ERR! Tell the author that this fails on your system:
npm ERR! cross-env HOT=1 NODE_ENV=development electron ./
npm ERR! You can get information on how to open an issue for this project with:
npm ERR! npm bugs PeerTweet
npm ERR! Or if that isn't available, you can get their info via:
npm ERR! npm owner ls PeerTweet
npm ERR! There is likely additional logging output above.
npm ERR! Please include the following file with any support request:
npm ERR! /home/l4p1n/peer-tweet/npm-debug.log
And here is the content of /home/l4p1n/peer-tweet/npm-debug.log
0 info it worked if it ends with ok
1 verbose cli [ '/usr/bin/nodejs', '/usr/bin/npm', 'run', 'start-hot' ]
2 info using<EMAIL_ADDRESS>3 info using<EMAIL_ADDRESS>4 verbose run-script [ 'prestart-hot', 'start-hot', 'poststart-hot' ]
5 info lifecycle<EMAIL_ADDRESS><EMAIL_ADDRESS>6 silly lifecycle<EMAIL_ADDRESS>no script for prestart-hot, continuing
7 info lifecycle<EMAIL_ADDRESS><EMAIL_ADDRESS>8 verbose lifecycle<EMAIL_ADDRESS>unsafe-perm in lifecycle true
9 verbose lifecycle<EMAIL_ADDRESS>PATH: /usr/lib/node_modules/npm/bin/node-gyp-bin:/home/l4p1n/peer-tweet/node_modules/.bin:/usr/bin:/home/l4p1n/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games
10 verbose lifecycle<EMAIL_ADDRESS>CWD: /home/l4p1n/peer-tweet
11 silly lifecycle<EMAIL_ADDRESS>Args: [ '-c', 'cross-env HOT=1 NODE_ENV=development electron ./' ]
12 silly lifecycle<EMAIL_ADDRESS>Returned: code: 1 signal: null
13 info lifecycle<EMAIL_ADDRESS>Failed to exec start-hot script
14 verbose stack Error<EMAIL_ADDRESS>start-hot: `cross-env HOT=1 NODE_ENV=development electron ./`
14 verbose stack Exit status 1
14 verbose stack at EventEmitter.<anonymous> (/usr/lib/node_modules/npm/lib/utils/lifecycle.js:245:16)
14 verbose stack at emitTwo (events.js:87:13)
14 verbose stack at EventEmitter.emit (events.js:172:7)
14 verbose stack at ChildProcess.<anonymous> (/usr/lib/node_modules/npm/lib/utils/spawn.js:24:14)
14 verbose stack at emitTwo (events.js:87:13)
14 verbose stack at ChildProcess.emit (events.js:172:7)
14 verbose stack at maybeClose (internal/child_process.js:827:16)
14 verbose stack at Process.ChildProcess._handle.onexit (internal/child_process.js:211:5)
15 verbose pkgid<EMAIL_ADDRESS>16 verbose cwd /home/l4p1n/peer-tweet
17 error Linux 4.2.0-36-generic
18 error argv "/usr/bin/nodejs" "/usr/bin/npm" "run" "start-hot"
19 error node v4.4.4
20 error npm v3.8.9
21 error code ELIFECYCLE
22 error<EMAIL_ADDRESS>start-hot: `cross-env HOT=1 NODE_ENV=development electron ./`
22 error Exit status 1
23 error Failed at the<EMAIL_ADDRESS>start-hot script 'cross-env HOT=1 NODE_ENV=development electron ./'.
23 error Make sure you have the latest version of node.js and npm installed.
23 error If you do, this is most likely a problem with the PeerTweet package,
23 error not with npm itself.
23 error Tell the author that this fails on your system:
23 error cross-env HOT=1 NODE_ENV=development electron ./
23 error You can get information on how to open an issue for this project with:
23 error npm bugs PeerTweet
23 error Or if that isn't available, you can get their info via:
23 error npm owner ls PeerTweet
23 error There is likely additional logging output above.
24 verbose exit [ 1, true ]
I guess I have to run npm install --dev but I prefer to ask to make sure.
Did it work with --dev? I don't have a linux distro to test this on.
So. I cloned a fresh copy of the repo, ran
npm install --dev
npm install electron-debug
Then I started the server with npm run hot-server and the client with npm run start-hot. Up to that point, everything was fine.
I've got another problem with the client saying in the devtools
Error: Module version mismatch. Expected 47, got 46.
I've got no idea what's going on.
The problem is that you need to install the native modules: https://github.com/lmatteis/peer-tweet#installing-native-modules
I'm not sure how to do that in linux.
Everything works. It works better if I read the README.md properly :joy:
|
2025-04-01T06:39:26.839311
| 2023-09-05T13:05:51
|
1881944269
|
{
"authors": [
"DoktorShift",
"arbadacarbaYK",
"bitkarrot",
"dni",
"talvasconcelos"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7918",
"repo": "lnbits/lnbits",
"url": "https://github.com/lnbits/lnbits/issues/1912"
}
|
gharchive/issue
|
Formatting the LNBits Frontpage Description
Issue:
When entering the LNBits Frontpage there is a description like you can see in the following Screenshot:
These lines have no formatting rules, so they are just set next to each other. It would be much more readable if the information were presented in a well-formatted layout.
Solution:
My solution would be a Rich Text Editor or WYSIWYG editor (What You See Is What You Get). That would allow even the least technical user to edit the text.
Output:
A well arranged/readable start page with important information about the server
this can be set in the manage server section
In the manage server section I'm able to edit the text, but it's not displayed like that on the LNBits frontpage.
see screenshots:
Try it with :
<h2>Family Bank</h2>
<h4>Playground</h4>
<p>For testing extensions and other stuff</p>
<p>**************************************</p>
<p>Do not store large amounts here</p>
Ok, this is not nice :) Could it also allow paragraphs ?
Family Bank
Playground
For testing extensions and other stuff
**************************************
Do not store large amounts here
If it worked this way, it would be a solution for me and a bunch of people.
On the other hand, everybody is familiar with simple formatting rules from common programs like Gmail, Telegram, and also here on GitHub. In my opinion, the aim should be to remove every unneeded hurdle to presenting a well-arranged, readable start page with important information about the server.
It's a simple HTML syntax! Github uses Markdown...
Maybe adding Markdown support in the future is an option
https://markdoc.dev/
I did it here: https://github.com/lnbits/events/pull/10
@arcbtc worth doing it for the frontpage description also?
I think it's worth it; it's going to be useful for other extensions as well.
Really ? People need to know markdown or html to put in a description ?
Thanks for your work. It's now working with ease.
Please close this issue.
|
2025-04-01T06:39:26.843067
| 2023-02-22T11:09:31
|
1594910592
|
{
"authors": [
"dioptre",
"lni",
"ultperf"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7919",
"repo": "lni/dragonboat",
"url": "https://github.com/lni/dragonboat/issues/273"
}
|
gharchive/issue
|
possible to support https://github.com/bytedance/terarkdb
Please support this as a storage backend:
https://github.com/bytedance/terarkdb
No, it will not be supported.
First, it is a RocksDB based KV store, storing Raft logs in KV stores is always wasteful. We have a much better storage engine for the logs, it is available at -
https://github.com/lni/dragonboat/tree/master/internal/tan
This engine, called tan, doesn't force you to have keys — why would you need to touch or construct trillions of keys when the leader just wants to stream continuous entries to followers? It doesn't do compactions, as the log entries in raft are mostly append only. You also avoid some write amplification when you stop writing logs twice - you don't need to log your log. Its memtable is another redundant component when we already have an in-memory log storage inside the raft implementation - inserting into that skiplist based memtable eats a huge chunk of your CPU cycles when you have millions of entries per second.
I'd be willing to bet that tan is at least 20-30% faster than your suggested library when used for storing raft logs.
Secondly, that suggested library is C++ based.
Think @kolinfluence you should add it.
|
2025-04-01T06:39:26.854034
| 2023-09-12T03:56:43
|
1891620789
|
{
"authors": [
"codecov-commenter",
"lni",
"tylerwilliams"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7920",
"repo": "lni/dragonboat",
"url": "https://github.com/lni/dragonboat/pull/327"
}
|
gharchive/pull-request
|
Support external node registry functions
This PR allows clients to provide a NodeRegistryFactory function in the Expert config section which will be used to resolve nodes.
This is useful for clients who want to create and manage a node discovery service externally (so it can be used for other things) but still have the dragonboat library use it for dynamic node discovery.
Also adds a test for this new functionality.
Fixes: https://github.com/lni/dragonboat/issues/326
Codecov Report
Patch coverage is 50.00% of modified lines.
:exclamation: Your organization needs to install the Codecov GitHub app to enable full functionality.
Files Changed
Coverage
node.go
ø
nodehost.go
50.00%
:loudspeaker: Thoughts on this report? Let us know!.
Thanks for the PR.
Could you please have a look at the review comments above. There is also some data race errors when running the new test, log pasted below.
=== RUN TestExternalNodeRegistryFunction
2023-09-19 08:31:38.652923 I | dragonboat: go version: go1.19.13, linux/amd64
2023-09-19 08:31:38.652958 I | dragonboat: dragonboat version: 4.0.0 (Dev)
2023-09-19 08:31:38.653001 W | config: mutual TLS disabled, communication is insecure
2023-09-19 08:31:38.653134 I | config: using default EngineConfig
2023-09-19 08:31:38.653166 I | config: using default LogDBConfig
2023-09-19 08:31:38.653248 I | dragonboat: DeploymentID set to 1
2023-09-19 08:31:38.660302 I | dragonboat: LogDB info received, shard 0, busy false
2023-09-19 08:31:38.665674 I | dragonboat: LogDB info received, shard 1, busy false
2023-09-19 08:31:38.669765 I | dragonboat: LogDB info received, shard 2, busy false
2023-09-19 08:31:38.674620 I | dragonboat: LogDB info received, shard 3, busy false
2023-09-19 08:31:38.677094 W | gossip: memberlist: Was able to connect to 123e4567-e89b-12d3-a456-426614174000 but other probes failed, network may be misconfigured
2023-09-19 08:31:38.679718 I | dragonboat: LogDB info received, shard 4, busy false
2023-09-19 08:31:38.684090 I | dragonboat: LogDB info received, shard 5, busy false
2023-09-19 08:31:38.689280 I | dragonboat: LogDB info received, shard 6, busy false
2023-09-19 08:31:38.693791 I | dragonboat: LogDB info received, shard 7, busy false
2023-09-19 08:31:38.699416 I | dragonboat: LogDB info received, shard 8, busy false
2023-09-19 08:31:38.704158 I | dragonboat: LogDB info received, shard 9, busy false
2023-09-19 08:31:38.709267 I | dragonboat: LogDB info received, shard 10, busy false
2023-09-19 08:31:38.713627 I | dragonboat: LogDB info received, shard 11, busy false
2023-09-19 08:31:38.718071 I | dragonboat: LogDB info received, shard 12, busy false
2023-09-19 08:31:38.722903 I | dragonboat: LogDB info received, shard 13, busy false
2023-09-19 08:31:38.728906 I | dragonboat: LogDB info received, shard 14, busy false
2023-09-19 08:31:38.733055 I | dragonboat: LogDB info received, shard 15, busy false
2023-09-19 08:31:38.733422 I | logdb: using plain logdb
2023-09-19 08:31:38.734863 I | dragonboat: logdb memory limit: 8192 MBytes
2023-09-19 08:31:38.735371 I | dragonboat: NodeHost ID: 123e4567-e89b-12d3-a456-426614174000
2023-09-19 08:31:38.735401 I | dragonboat: Expert.NodeRegistryFactory was set: using custom registry
2023-09-19 08:31:38.735440 I | dragonboat: filesystem error injection mode enabled: false
2023-09-19 08:31:38.736034 I | transport: transport type: go-tcp-transport
2023-09-19 08:31:38.737214 I | dragonboat: transport type: go-tcp-transport
2023-09-19 08:31:38.737253 I | dragonboat: logdb type: sharded-pebble
2023-09-19 08:31:38.737296 I | dragonboat: nodehost address: localhost:26001
2023-09-19 08:31:38.737322 I | dragonboat: go version: go1.19.13, linux/amd64
2023-09-19 08:31:38.737372 I | dragonboat: dragonboat version: 4.0.0 (Dev)
2023-09-19 08:31:38.737395 W | config: mutual TLS disabled, communication is insecure
2023-09-19 08:31:38.737490 I | config: using default EngineConfig
2023-09-19 08:31:38.737533 I | config: using default LogDBConfig
2023-09-19 08:31:38.737617 I | dragonboat: DeploymentID set to 1
2023-09-19 08:31:38.743690 I | dragonboat: LogDB info received, shard 0, busy false
2023-09-19 08:31:38.748062 I | dragonboat: LogDB info received, shard 1, busy false
2023-09-19 08:31:38.752225 I | dragonboat: LogDB info received, shard 2, busy false
2023-09-19 08:31:38.757578 I | dragonboat: LogDB info received, shard 3, busy false
2023-09-19 08:31:38.762896 I | dragonboat: LogDB info received, shard 4, busy false
2023-09-19 08:31:38.767749 I | dragonboat: LogDB info received, shard 5, busy false
2023-09-19 08:31:38.772257 I | dragonboat: LogDB info received, shard 6, busy false
2023-09-19 08:31:38.777120 I | dragonboat: LogDB info received, shard 7, busy false
2023-09-19 08:31:38.784119 I | dragonboat: LogDB info received, shard 8, busy false
2023-09-19 08:31:38.788517 I | dragonboat: LogDB info received, shard 9, busy false
2023-09-19 08:31:38.793300 I | dragonboat: LogDB info received, shard 10, busy false
2023-09-19 08:31:38.799423 I | dragonboat: LogDB info received, shard 11, busy false
2023-09-19 08:31:38.803587 I | dragonboat: LogDB info received, shard 12, busy false
2023-09-19 08:31:38.808046 I | dragonboat: LogDB info received, shard 13, busy false
2023-09-19 08:31:38.812889 I | dragonboat: LogDB info received, shard 14, busy false
2023-09-19 08:31:38.818779 I | dragonboat: LogDB info received, shard 15, busy false
2023-09-19 08:31:38.819076 I | logdb: using plain logdb
2023-09-19 08:31:38.820205 I | dragonboat: logdb memory limit: 8192 MBytes
2023-09-19 08:31:38.820907 I | dragonboat: NodeHost ID: 123e4567-e89b-12d3-a456-426614174001
2023-09-19 08:31:38.820938 I | dragonboat: Expert.NodeRegistryFactory was set: using custom registry
2023-09-19 08:31:38.820980 I | dragonboat: filesystem error injection mode enabled: false
2023-09-19 08:31:38.821880 I | transport: transport type: go-tcp-transport
2023-09-19 08:31:38.822814 I | dragonboat: transport type: go-tcp-transport
2023-09-19 08:31:38.822883 I | dragonboat: logdb type: sharded-pebble
2023-09-19 08:31:38.822919 I | dragonboat: nodehost address: localhost:26002
2023-09-19 08:31:38.826387 I | dragonboat: [00001:00001] replaying raft logs
2023-09-19 08:31:38.826569 I | raft: [00001:00001] created, initial: true, new: true
2023-09-19 08:31:38.826615 W | config: ElectionRTT is not a magnitude larger than HeartbeatRTT
2023-09-19 08:31:38.826656 I | raft: [00001:00001] raft log rate limit enabled: false, 0
2023-09-19 08:31:38.826715 I | raft: [f:1,l:0,t:0,c:0,a:0] [00001:00001] t0 became follower
2023-09-19 08:31:38.826801 I | raft: [f:1,l:0,t:0,c:0,a:0] [00001:00001] t1 became follower
2023-09-19 08:31:38.826860 I | raft: [f:1,l:0,t:0,c:0,a:0] [00001:00001] t1 added bootstrap ConfigChangeAddNode, 1, 123e4567-e89b-12d3-a456-426614174000
2023-09-19 08:31:38.826919 I | raft: [f:1,l:0,t:0,c:0,a:0] [00001:00001] t1 added bootstrap ConfigChangeAddNode, 2, 123e4567-e89b-12d3-a456-426614174001
2023-09-19 08:31:38.827428 I | rsm: [00001:00001] no snapshot available during launch
2023-09-19 08:31:38.827563 I | dragonboat: [00001:00001] initialized using <00001:00001:0>
2023-09-19 08:31:38.827605 I | dragonboat: [00001:00001] initial index set to 0
2023-09-19 08:31:38.830797 I | dragonboat: [00001:00002] replaying raft logs
2023-09-19 08:31:38.831038 I | raft: [00001:00002] created, initial: true, new: true
2023-09-19 08:31:38.831088 W | config: ElectionRTT is not a magnitude larger than HeartbeatRTT
2023-09-19 08:31:38.831138 I | raft: [00001:00002] raft log rate limit enabled: false, 0
2023-09-19 08:31:38.831323 I | raft: [f:1,l:0,t:0,c:0,a:0] [00001:00002] t0 became follower
2023-09-19 08:31:38.831408 I | raft: [f:1,l:0,t:0,c:0,a:0] [00001:00002] t1 became follower
2023-09-19 08:31:38.831480 I | raft: [f:1,l:0,t:0,c:0,a:0] [00001:00002] t1 added bootstrap ConfigChangeAddNode, 1, 123e4567-e89b-12d3-a456-426614174000
2023-09-19 08:31:38.831546 I | raft: [f:1,l:0,t:0,c:0,a:0] [00001:00002] t1 added bootstrap ConfigChangeAddNode, 2, 123e4567-e89b-12d3-a456-426614174001
2023-09-19 08:31:38.833185 I | rsm: [00001:00002] no snapshot available during launch
2023-09-19 08:31:38.833398 I | dragonboat: [00001:00002] initialized using <00001:00002:0>
2023-09-19 08:31:38.833461 I | dragonboat: [00001:00002] initial index set to 0
2023-09-19 08:31:38.834893 I | rsm: [00001:00002] applied ADD ccid 0 (1), n00001 (123e4567-e89b-12d3-a456-426614174000)
2023-09-19 08:31:38.835034 I | rsm: [00001:00002] applied ADD ccid 0 (2), n00002 (123e4567-e89b-12d3-a456-426614174001)
2023-09-19 08:31:38.837618 W | dragonboat: [00001:00001] had 2 LocalTick msgs in one batch
2023-09-19 08:31:38.838604 I | rsm: [00001:00001] applied ADD ccid 0 (1), n00001 (123e4567-e89b-12d3-a456-426614174000)
2023-09-19 08:31:38.838684 I | rsm: [00001:00001] applied ADD ccid 0 (2), n00002 (123e4567-e89b-12d3-a456-426614174001)
2023-09-19 08:31:38.853533 W | raft: [f:1,l:2,t:1,c:2,a:2] [00001:00002] t2 became candidate
2023-09-19 08:31:38.853619 W | raft: [f:1,l:2,t:1,c:2,a:2] [00001:00002] t2 received RequestVoteResp from n00002
2023-09-19 08:31:38.853673 W | raft: [f:1,l:2,t:1,c:2,a:2] [00001:00002] t2 sent RequestVote to n00001
2023-09-19 08:31:38.857429 W | raft: [f:1,l:2,t:1,c:2,a:2] [00001:00001] t1 received RequestVote with higher term (2) from n00002
2023-09-19 08:31:38.857485 W | raft: [f:1,l:2,t:1,c:2,a:2] [00001:00001] t1 become follower after receiving higher term from n00002
2023-09-19 08:31:38.857671 I | raft: [f:1,l:2,t:1,c:2,a:2] [00001:00001] t2 became follower
2023-09-19 08:31:38.857779 W | raft: [f:1,l:2,t:1,c:2,a:2] [00001:00001] t2 cast vote from n00002 index 2 term 2, log term: 1
2023-09-19 08:31:38.860333 W | raft: [f:1,l:2,t:1,c:2,a:2] [00001:00002] t2 received RequestVoteResp from n00001
2023-09-19 08:31:38.860407 W | raft: [f:1,l:2,t:1,c:2,a:2] [00001:00002] t2 received 2 votes and 0 rejections, quorum is 2
2023-09-19 08:31:38.860478 I | raft: [f:1,l:2,t:1,c:2,a:2] [00001:00002] t2 became leader
2023-09-19 08:31:38.935478 E | transport: send batch failed, target localhost:26002 (write tcp <IP_ADDRESS>:37262-><IP_ADDRESS>:26002: write: connection reset by peer), 2
2023-09-19 08:31:38.935607 W | transport: breaker 123e4567-e89b-12d3-a456-426614174000 to localhost:26002 failed, connect and process failed: write tcp <IP_ADDRESS>:37262-><IP_ADDRESS>:26002: write: connection reset by peer
2023-09-19 08:31:38.935682 W | transport: localhost:26002 became unreachable, affected 1 nodes
==================
WARNING: DATA RACE
Write at 0x00c0000eb110 by goroutine 6651:
runtime.mapassign_faststr()
/opt/hostedtoolcache/go/1.19.13/x64/src/runtime/map_faststr.go:203 +0x0
github.com/lni/dragonboat/v4.TestExternalNodeRegistryFunction()
/home/runner/work/dragonboat/dragonboat/nodehost_test.go:1320 +0xef7
testing.tRunner()
/opt/hostedtoolcache/go/1.19.13/x64/src/testing/testing.go:1446 +0x216
testing.(*T).Run.func1()
/opt/hostedtoolcache/go/1.19.13/x64/src/testing/testing.go:1493 +0x47
Previous read at 0x00c0000eb110 by goroutine 6961:
runtime.mapaccess1_faststr()
/opt/hostedtoolcache/go/1.19.13/x64/src/runtime/map_faststr.go:13 +0x0
github.com/lni/dragonboat/v4.(*testRegistry).Resolve()
/home/runner/work/dragonboat/dragonboat/nodehost_test.go:1208 +0xe4
github.com/lni/dragonboat/v4/internal/transport.(*Transport).send()
/home/runner/work/dragonboat/dragonboat/internal/transport/transport.go:361 +0xba
github.com/lni/dragonboat/v4/internal/transport.(*Transport).Send()
/home/runner/work/dragonboat/dragonboat/internal/transport/transport.go:347 +0x68
github.com/lni/dragonboat/v4.(*NodeHost).sendMessage()
/home/runner/work/dragonboat/dragonboat/nodehost.go:1881 +0xf4
github.com/lni/dragonboat/v4.(*NodeHost).sendMessage-fm()
<autogenerated>:1 +0x84
github.com/lni/dragonboat/v4.(*node).sendMessages()
/home/runner/work/dragonboat/dragonboat/node.go:1011 +0x1b6
github.com/lni/dragonboat/v4.(*node).processRaftUpdate()
/home/runner/work/dragonboat/dragonboat/node.go:1108 +0xb3
github.com/lni/dragonboat/v4.(*engine).processSteps()
/home/runner/work/dragonboat/dragonboat/engine.go:1353 +0x804
github.com/lni/dragonboat/v4.(*engine).stepWorkerMain()
/home/runner/work/dragonboat/dragonboat/engine.go:1254 +0x5e6
github.com/lni/dragonboat/v4.newExecEngine.func1()
/home/runner/work/dragonboat/dragonboat/engine.go:1047 +0x98
github.com/lni/goutils/syncutil.(*Stopper).runWorker.func1()
/home/runner/go/pkg/mod/github.com/lni/goutils@v1.3.1-0.20220604063047-388d67b4dbc4/syncutil/stopper.go:79 +0x12e
Goroutine 6651 (running) created at:
testing.(*T).Run()
/opt/hostedtoolcache/go/1.19.13/x64/src/testing/testing.go:1493 +0x75d
testing.runTests.func1()
/opt/hostedtoolcache/go/1.19.13/x64/src/testing/testing.go:1846 +0x99
testing.tRunner()
/opt/hostedtoolcache/go/1.19.13/x64/src/testing/testing.go:1446 +0x216
testing.runTests()
/opt/hostedtoolcache/go/1.19.13/x64/src/testing/testing.go:1844 +0x7ec
testing.(*M).Run()
/opt/hostedtoolcache/go/1.19.13/x64/src/testing/testing.go:1726 +0xa84
main.main()
_testmain.go:675 +0x2e9
Goroutine 6961 (running) created at:
github.com/lni/goutils/syncutil.(*Stopper).runWorker()
/home/runner/go/pkg/mod/github.com/lni/goutils@v1.3.1-0.20220604063047-388d67b4dbc4/syncutil/stopper.go:74 +0x19a
github.com/lni/goutils/syncutil.(*Stopper).RunWorker()
/home/runner/go/pkg/mod/github.com/lni/goutils@v1.3.1-0.20220604063047-388d67b4dbc4/syncutil/stopper.go:68 +0xef
github.com/lni/dragonboat/v4.newExecEngine()
/home/runner/work/dragonboat/dragonboat/engine.go:1037 +0xa19
github.com/lni/dragonboat/v4.NewNodeHost()
/home/runner/work/dragonboat/dragonboat/nodehost.go:366 +0x1486
github.com/lni/dragonboat/v4.TestExternalNodeRegistryFunction()
/home/runner/work/dragonboat/dragonboat/nodehost_test.go:1266 +0x8f7
testing.tRunner()
/opt/hostedtoolcache/go/1.19.13/x64/src/testing/testing.go:1446 +0x216
testing.(*T).Run.func1()
/opt/hostedtoolcache/go/1.19.13/x64/src/testing/testing.go:1493 +0x47
==================
==================
WARNING: DATA RACE
Write at 0x00c00048d178 by goroutine 6651:
github.com/lni/dragonboat/v4.TestExternalNodeRegistryFunction()
/home/runner/work/dragonboat/dragonboat/nodehost_test.go:1320 +0xf38
testing.tRunner()
/opt/hostedtoolcache/go/1.19.13/x64/src/testing/testing.go:1446 +0x216
testing.(*T).Run.func1()
/opt/hostedtoolcache/go/1.19.13/x64/src/testing/testing.go:1493 +0x47
Previous read at 0x00c00048d178 by goroutine 6961:
github.com/lni/dragonboat/v4.(*testRegistry).Resolve()
/home/runner/work/dragonboat/dragonboat/nodehost_test.go:1208 +0xee
github.com/lni/dragonboat/v4/internal/transport.(*Transport).send()
/home/runner/work/dragonboat/dragonboat/internal/transport/transport.go:361 +0xba
github.com/lni/dragonboat/v4/internal/transport.(*Transport).Send()
/home/runner/work/dragonboat/dragonboat/internal/transport/transport.go:347 +0x68
github.com/lni/dragonboat/v4.(*NodeHost).sendMessage()
/home/runner/work/dragonboat/dragonboat/nodehost.go:1881 +0xf4
github.com/lni/dragonboat/v4.(*NodeHost).sendMessage-fm()
<autogenerated>:1 +0x84
github.com/lni/dragonboat/v4.(*node).sendMessages()
/home/runner/work/dragonboat/dragonboat/node.go:1011 +0x1b6
github.com/lni/dragonboat/v4.(*node).processRaftUpdate()
/home/runner/work/dragonboat/dragonboat/node.go:1108 +0xb3
github.com/lni/dragonboat/v4.(*engine).processSteps()
/home/runner/work/dragonboat/dragonboat/engine.go:1353 +0x804
github.com/lni/dragonboat/v4.(*engine).stepWorkerMain()
/home/runner/work/dragonboat/dragonboat/engine.go:1254 +0x5e6
github.com/lni/dragonboat/v4.newExecEngine.func1()
/home/runner/work/dragonboat/dragonboat/engine.go:1047 +0x98
github.com/lni/goutils/syncutil.(*Stopper).runWorker.func1()
/home/runner/go/pkg/mod/github.com/lni/goutils@v1.3.1-0.20220604063047-388d67b4dbc4/syncutil/stopper.go:79 +0x12e
Goroutine 6651 (running) created at:
testing.(*T).Run()
/opt/hostedtoolcache/go/1.19.13/x64/src/testing/testing.go:1493 +0x75d
testing.runTests.func1()
/opt/hostedtoolcache/go/1.19.13/x64/src/testing/testing.go:1846 +0x99
testing.tRunner()
/opt/hostedtoolcache/go/1.19.13/x64/src/testing/testing.go:1446 +0x216
testing.runTests()
/opt/hostedtoolcache/go/1.19.13/x64/src/testing/testing.go:1844 +0x7ec
testing.(*M).Run()
/opt/hostedtoolcache/go/1.19.13/x64/src/testing/testing.go:1726 +0xa84
main.main()
_testmain.go:675 +0x2e9
Goroutine 6961 (running) created at:
github.com/lni/goutils/syncutil.(*Stopper).runWorker()
/home/runner/go/pkg/mod/github.com/lni/goutils@v1.3.1-0.20220604063047-388d67b4dbc4/syncutil/stopper.go:74 +0x19a
github.com/lni/goutils/syncutil.(*Stopper).RunWorker()
/home/runner/go/pkg/mod/github.com/lni/goutils@v1.3.1-0.20220604063047-388d67b4dbc4/syncutil/stopper.go:68 +0xef
github.com/lni/dragonboat/v4.newExecEngine()
/home/runner/work/dragonboat/dragonboat/engine.go:1037 +0xa19
github.com/lni/dragonboat/v4.NewNodeHost()
/home/runner/work/dragonboat/dragonboat/nodehost.go:366 +0x1486
github.com/lni/dragonboat/v4.TestExternalNodeRegistryFunction()
/home/runner/work/dragonboat/dragonboat/nodehost_test.go:1266 +0x8f7
testing.tRunner()
/opt/hostedtoolcache/go/1.19.13/x64/src/testing/testing.go:1446 +0x216
testing.(*T).Run.func1()
/opt/hostedtoolcache/go/1.19.13/x64/src/testing/testing.go:1493 +0x47
==================
Oh missed the data race one -- taking a look at that now.
OK, fixed the data race too.
Cool, thanks.
|
2025-04-01T06:39:26.856905
| 2016-07-22T23:22:03
|
167150525
|
{
"authors": [
"emersion"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7921",
"repo": "lnicola/certbot-systemd-nginx",
"url": "https://github.com/lnicola/certbot-systemd-nginx/issues/3"
}
|
gharchive/issue
|
Is --keep-until-expiring needed?
The renew command only generates new certificates if they are near expiry. See https://certbot.eff.org/docs/using.html#command-line-options
Thanks for all your fixes! :D
|
2025-04-01T06:39:26.863865
| 2016-01-27T15:43:29
|
129181205
|
{
"authors": [
"clarkie",
"simonmcmanus"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7922",
"repo": "lnug/lnug.github.io",
"url": "https://github.com/lnug/lnug.github.io/issues/103"
}
|
gharchive/issue
|
handle multiple events
the site is currently showing info for the Feb event, which is cool but people probably still want to know what is going to be happening tonight at the January event.
Can we switch it back to Jan? I've checked my phone so many times on the way to the event to remind myself what the talks are going to be (and where it is!)
https://github.com/lnug/lnug.github.io/pull/104/files
On 27 January 2016 at 16:19, lnugbot<EMAIL_ADDRESS>wrote:
Yerp,
Agreed. Will take a look ASAP
I'm on a train into London atm. Will take a look when I can find somewhere
dry.
Sent from my iPhone
On 27 Jan 2016, at 15:57, Clarkie<EMAIL_ADDRESS>wrote:
Can we switch it back to Jan? I've checked my phone so many times on the
way to the event to remind myself what the talks are going to be (and where
it is!)
—
Reply to this email directly or view it on GitHub.
|
2025-04-01T06:39:26.903321
| 2016-10-18T10:02:32
|
183642051
|
{
"authors": [
"akashdeep-singh",
"robinjoseph08"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7923",
"repo": "lob/generate-changelog",
"url": "https://github.com/lob/generate-changelog/issues/8"
}
|
gharchive/issue
|
[Error: no commits found]
I'm getting the following error on a mac system:
[Error: no commits found]
There were 3 commits in the repo when I first saw the issue, and I added two more commits to test, but it still wouldn't work.
Hmm that's odd. Could you let me know what version of git you're using? Also, when you run the following command, what is outputted?
git log -E --format=%H%n%s%n%b%n===END===
I realized the issue was with that particular developer not pushing with --tags, causing local tags to not be pushed into remote.
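For anyone hitting the same thing: changelog tools like this one derive the commit range from the most recent tag, and tags that exist only locally are never transferred by a plain `git push`, so a clone on another machine sees no tags and no range to walk. A quick illustration in a throwaway repo (repo name and commit messages are made up):

```shell
# Demonstrate that tags gate the commit range a changelog tool walks.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email "demo@example.com"
git config user.name "Demo"
git commit -q --allow-empty -m "chore: initial commit"
git tag v1.0.0                      # tags are local-only until pushed
git commit -q --allow-empty -m "fix: something"
# Commits since the last tag -- this is the range generate-changelog uses:
git log v1.0.0..HEAD --format=%s
# A plain `git push` would NOT transfer v1.0.0; it needs an explicit:
# git push origin --tags
```

So after tagging a release, push with `--tags` (or `--follow-tags`) so clones and CI see the same history.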
|
2025-04-01T06:39:26.915183
| 2024-06-17T12:29:21
|
2357228338
|
{
"authors": [
"NateWaldschmidt",
"shannamurry"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7924",
"repo": "lob/ui-components",
"url": "https://github.com/lob/ui-components/pull/517"
}
|
gharchive/pull-request
|
SELF-302: IconButton Variant
JIRA
SELF-343
Description
Adds IconButton text variant
Add Skeleton component
Screenshots
Reviewer Checklist
This section is to be filled out by reviewers
Testing
[ ] This code was tested by somebody other than the developer. Do not merge until this has been done.
Hey Nate! I'd like to make one suggestion - can we call this something more generic since we plan to build on this? Maybe like stylized button or something like that? This saves us having to change it in the dashboard later
Hey!
Are you referring to the name IconButton?
|
2025-04-01T06:39:26.941428
| 2021-11-17T13:29:32
|
1056133558
|
{
"authors": [
"lakkeger",
"whummer",
"wojciechszymski"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7925",
"repo": "localstack/localstack-java-utils",
"url": "https://github.com/localstack/localstack-java-utils/issues/81"
}
|
gharchive/issue
|
Does the isRunning method work correctly?
I'm trying to write a simple e2e test (Java + Spring Framework) which checks our API by stopping the Localstack instance, sending a message to the broker instance, and finally asserting the HTTP error response code. This test is part of a bigger test suite with the DirtiesContext annotation (in after-each-method mode).
Our Localstack bean is customized. In the Spring configuration we defined a bean with custom init and destroy methods. The init method is posted below; the destroy method just sends purge requests to all queues. We don't want to stop the Localstack instance between tests, as a time optimization.
Init method:
if (!localstack.isRunning()) {
localstack.startup(LOCALSTACK_CONFIGURATION);
Runtime.getRuntime().addShutdownHook(new Thread(localstack::stop));
}
After localstack.stop(); our init method will never work, because the isRunning method always returns true even when Docker has no running containers (docker ps returns an empty list).
If the Localstack object (unfortunately a static object) has a non-null localStackContainer instance, the isRunning method returns true (with an empty list of available ports underneath). It seems the stop method does not unset the localStackContainer field?
Container.isRunning method:
try {
new PortCommand(containerId).execute();
return true;
} catch(Exception e) {
return false;
}
Could you allow unsetting the localStackContainer field, or just unset it inside the stop method? We just want to find out (using the isRunning method) whether the Docker container is running or not, to avoid unnecessary Localstack restarts between single tests (using the DirtiesContext annotation).
This would be the unit test for this fix:
localstack.start();
localstack.stop();
assertFalse(localstack.isRunning());
Could you apply the following changes:
In cloud.localstack.Localstack:
public void stop() {
if (localStackContainer != null) {
localStackContainer.stop();
localStackContainer = null;
}
locked = false;
}
Unit test in cloud.localstack.docker.LocalstackDockerTest:
@Test
public void restart() {
Localstack.INSTANCE.startup(DOCKER_CONFIG);
Localstack.INSTANCE.stop();
assertFalse(Localstack.INSTANCE.isRunning());
}
@whummer
Thanks for reporting @wojciechszymski , and apologies for the long delay. This is potentially related to #82 . We believe that this should be fixed in the meantime - a new version 0.2.20 has been pushed to Maven Central. Can you please give it a try with that version? Please keep us posted if the problem persists.. Thanks!
Hi! We just wanted to follow up on our last message to see whether your issue has been resolved. Were you able to get it working with the latest version of LocalStack? We would appreciate your feedback!
|
2025-04-01T06:39:26.964877
| 2018-09-05T12:53:32
|
357215137
|
{
"authors": [
"elahrvivaz"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7926",
"repo": "locationtech/geomesa",
"url": "https://github.com/locationtech/geomesa/pull/2052"
}
|
gharchive/pull-request
|
GEOMESA-2386 Adding WPS module to FileSystem gs-plugin
Signed-off-by: Emilio Lahr-Vivaz<EMAIL_ADDRESS>
I haven't been able to re-create the original issue, but this bundles the jars needed for WPS with the FSDS gs-plugin (where previously you also had to install e.g. the accumulo gs-plugin, or manually copy the correct jars)
|
2025-04-01T06:39:26.977819
| 2024-10-31T18:09:47
|
2627522250
|
{
"authors": [
"Parth",
"tvanderstad"
],
"license": "Unlicense",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7927",
"repo": "lockbook/lockbook",
"url": "https://github.com/lockbook/lockbook/issues/3050"
}
|
gharchive/issue
|
link opening without clicking it
Here I'm trying to append to the bullet point "door opening procedure" I want to press enter and ultimately write "fixed". So I click on the blank space after, we can discuss the capture behavior here but opening this link is def not my intention and I didn't tap the link.
https://github.com/user-attachments/assets/6a0e8b30-cbf7-48ef-b4a6-6e447496065e
@Parth are you still able to produce this? I'm unable to reproduce so far
|
2025-04-01T06:39:26.982048
| 2023-06-22T17:28:45
|
1770116368
|
{
"authors": [
"CoffeeVampir3",
"Eric-mingjie"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7928",
"repo": "locuslab/wanda",
"url": "https://github.com/locuslab/wanda/pull/3"
}
|
gharchive/pull-request
|
Added model saving
It looks like the code to save the models was missing; I've added these three lines to main.py:
if args.save:
    model.save_pretrained(args.save)
    tokenizer.save_pretrained(args.save)
Thanks for the interest in our work! I have updated the repository to add support for this feature, which uses a separate argument --save_model to allow saving pruned models on demand.
|
2025-04-01T06:39:26.985573
| 2023-11-10T11:26:23
|
1987443906
|
{
"authors": [
"cyberw",
"luis-allan"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7929",
"repo": "locustio/locust",
"url": "https://github.com/locustio/locust/issues/2457"
}
|
gharchive/issue
|
SocketIOUser not support send BINARY data
Prerequisites
[X] I am using the latest version of Locust
[X] I am suggesting a new feature, not asking a question
Description
In this function, the user is prevented from sending data with OPCODE_BINARY, so I suggest exposing the websocket 'opcode' parameter to the user, as below.
# def send(self, body, name=None, context={}, opcode=websocket.ABNF.OPCODE_TEXT)
def send(self, body, name=None, context={}):
    if not name:
        if body == "2":
            name = "2 heartbeat"
        else:
            # hoping this is a subscribe type message, try to detect name
            m = re.search(r'(\d*)\["([a-z]*)"', body)
            assert m is not None
            code = m.group(1)
            action = m.group(2)
            url_part = re.search(r'"url": *"([^"]*)"', body)
            assert url_part is not None
            url = re.sub(r"/[0-9_]*/", "/:id/", url_part.group(1))
            name = f"{code} {action} url: {url}"
    self.environment.events.request.fire(
        request_type="WSS",
        name=name,
        response_time=None,
        response_length=len(body),
        exception=None,
        context={**self.context(), **context},
    )
    logging.debug(f"WSS: {body}")
    # self.ws.send(body, opcode)
    self.ws.send(body)
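The name-detection branch of the send method quoted above can be exercised on its own; a minimal sketch (the sample payload and URL below are illustrative assumptions, not taken from a real API):

```python
import re

def detect_name(body: str) -> str:
    """Mirrors the name-detection logic of the quoted send method."""
    if body == "2":
        return "2 heartbeat"
    # hoping this is a subscribe-type message, try to detect the name
    m = re.search(r'(\d*)\["([a-z]*)"', body)
    assert m is not None
    code, action = m.group(1), m.group(2)
    url_part = re.search(r'"url": *"([^"]*)"', body)
    assert url_part is not None
    url = re.sub(r"/[0-9_]*/", "/:id/", url_part.group(1))
    return f"{code} {action} url: {url}"

# Illustrative payload (made up for this sketch):
print(detect_name('42["subscribe",{"url": "/rooms/12345/messages"}]'))
# -> 42 subscribe url: /rooms/:id/messages
```

Isolating the logic like this makes it easy to unit-test the naming behavior separately from any websocket plumbing, e.g. when adding an opcode parameter as proposed.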
👍 PR welcome!
👍 PR welcome! (technically this issue should be in locust-plugins but its ok :)
A PR has been created in locust-plugins: pr-151.
Merged!
|
2025-04-01T06:39:26.988371
| 2021-01-19T19:32:14
|
789298501
|
{
"authors": [
"aek",
"cyberw"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7930",
"repo": "locustio/locust",
"url": "https://github.com/locustio/locust/pull/1678"
}
|
gharchive/pull-request
|
Feature chart sync
This PR improves the charts of the index and the report page with the following changes:
Stops reporting stats to the stats history when the runner is stopped
Uses a shared template to build the stats_history data that is reloaded into the chart data in the index.html and report.html templates
Fixes the user count values in the report charts' tooltips
Fixes #1677
Awesome!
@cyberw Sorry, I need to add a new way to pass the data from python to the js without need to generate the instructions. I will make another PR
|
2025-04-01T06:39:26.992864
| 2024-11-12T23:12:21
|
2653588969
|
{
"authors": [
"DrJekyllH",
"lofi-enjoyer"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7931",
"repo": "lofi-enjoyer/NubladaTowns",
"url": "https://github.com/lofi-enjoyer/NubladaTowns/issues/51"
}
|
gharchive/issue
|
[BUG] Can't add new role
After adding the first role, when a second role is added, role-already-exists is displayed in the chat box, and the role cannot be added.
Incidentally, when checking the lectern at this point, the first role name is blank.
After restarting the server, the roles “permissions” and “players” appear on this screen.
In this state, after allowing role-editor-edit-manage-roles in permissions and assigning a player, the same operation cannot add a role.
I cannot reproduce it. Could you explain more specifically the steps to do it? Thanks!
What's the name of the role you want to create?
for example, I tried to name three times: "test", "税務官", and "aaaa".
On all three times, the server had been started by deleting the plugin data and installing the plugin each time.
Still cannot reproduce it. Both "test" and "aaaa" work fine, and "税務官" just gives the only-alphanumeric error and the role is not created. I'll take a deeper look into it and see if I find the issue.
In my environment, both CJK and alphabetic characters are logged as shown in the image.
Nothing is displayed in the console.
Oh, I found a conflict with a certain chat related plugin. It appears that the plugin is preventing the chat data from being passed to this plugin.
I will ask the author of the plugin that caused the problem.
|
2025-04-01T06:39:26.993882
| 2022-07-31T05:59:20
|
1323370718
|
{
"authors": [
"MoneyRBK",
"Random-User-34"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7932",
"repo": "lofi-enjoyer/TownyElections",
"url": "https://github.com/lofi-enjoyer/TownyElections/issues/6"
}
|
gharchive/issue
|
No way to disband parties?
I cannot seem to find a way to disband parties, as you cannot leave a party if you are the leader and there doesn't appear to be a party disband command
I am also having this issue.
|
2025-04-01T06:39:27.005274
| 2021-04-06T09:53:08
|
851257259
|
{
"authors": [
"jorgebay"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7933",
"repo": "logdna/logdna-agent-v2",
"url": "https://github.com/logdna/logdna-agent-v2/pull/129"
}
|
gharchive/pull-request
|
Use stable rbac.authorization.k8s.io/v1 API
RBAC mode is stable since k8s v1.8.
We should also update the helm charts.
oh, new helm charts already use rbac.authorization.k8s.io/v1 💪
|
2025-04-01T06:39:27.014617
| 2023-07-11T09:52:51
|
1798560871
|
{
"authors": [
"agazso"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7934",
"repo": "logos-innovation-lab/waku-objects-playground",
"url": "https://github.com/logos-innovation-lab/waku-objects-playground/issues/178"
}
|
gharchive/issue
|
Reloading standalone page fails with 500 error
When reloading the standalone page (e.g. /chat/{address}/object/{objectId}/new), a 500 Internal Error is reported. The console log shows the message 'no wallet'.
Since this PR, the problem can be traced back to the check for a defined wallet in src/lib/objects/ui.svelte. The problem is that when the app is restarted, the state stores are reinitialized, and while they are in their initial phase (e.g. loading = true) their content is not available.
It would be better to create a generic mechanism for waiting for all the stores to be loaded; otherwise every page has to implement a check for its dependent stores and display a loading screen, which makes it fragile against introducing bugs on reload.
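A framework-agnostic sketch of such a generic wait-for-stores mechanism; this uses a minimal subscribe-based store stand-in rather than the real Svelte store API, so all names here are illustrative:

```javascript
// Minimal store stand-in exposing a `loading` flag, mimicking the pattern described.
function makeStore(initial) {
  let value = initial;
  const subs = new Set();
  return {
    subscribe(fn) { subs.add(fn); fn(value); return () => subs.delete(fn); },
    set(v) { value = v; subs.forEach((fn) => fn(value)); },
  };
}

// Resolves once every store has reported { loading: false }.
function whenLoaded(stores) {
  return Promise.all(
    stores.map(
      (store) =>
        new Promise((resolve) => {
          const unsub = store.subscribe((v) => {
            if (!v.loading) {
              resolve(v);
              queueMicrotask(() => unsub()); // defer so `unsub` is assigned
            }
          });
        })
    )
  );
}
```

A page component could then await whenLoaded([...]) for its dependent stores before rendering, instead of each page re-implementing its own check.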
|
2025-04-01T06:39:27.070169
| 2019-03-04T06:47:46
|
416655185
|
{
"authors": [
"sachaaaaa"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7935",
"repo": "loki-project/loki-storage-server",
"url": "https://github.com/loki-project/loki-storage-server/pull/25"
}
|
gharchive/pull-request
|
Add --log-level flag and use boost trivial logger
Allows specifying --log-level trace/debug/info/warning/error/fatal (default: info).
I ran clang-format, so there are some noisy styling changes; sorry.
Please review the log level used for each message and if I could add more messages.
Resolves #22
|
2025-04-01T06:39:27.112524
| 2022-12-19T17:35:53
|
1503320114
|
{
"authors": [
"Jakub-CZ",
"brogel"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7936",
"repo": "lolokraus/DegiroAPI",
"url": "https://github.com/lolokraus/DegiroAPI/issues/60"
}
|
gharchive/issue
|
not able to login anymore
I noticed today that I'm not able to login anymore. I get the error "You are not supposed to be here".
I tried logging in on the website manually with the same login/password; it works without a problem.
Is this something on my side or did DeGiro change something in the API?
Yeah, I guess 4 issues opened on that wasn't enough. We need more.
See #56
OMG, I feel stupid!
I'm new to GitHub and I searched only the 'open' issues. Thanks for this quick heads up.
|
2025-04-01T06:39:27.114961
| 2017-01-24T20:31:19
|
202931784
|
{
"authors": [
"jiminhsieh",
"lomigmegard"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7937",
"repo": "lomigmegard/akka-http-cors",
"url": "https://github.com/lomigmegard/akka-http-cors/issues/13"
}
|
gharchive/issue
|
Read CORS settings from configuration file
Use the same pattern as in akka.http.impl.settings.ServerSettingsImpl to load the settings from a .conf file.
Provide a reference.conf with default values.
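For reference, such a reference.conf might look like the sketch below; the key names and defaults are illustrative assumptions, not the library's final settings:

```hocon
# Illustrative sketch only; key names and defaults are assumptions
akka-http-cors {
  allow-generic-http-requests = yes
  allow-credentials = yes
  allowed-origins = "*"
  allowed-headers = "*"
  allowed-methods = ["GET", "POST", "HEAD", "OPTIONS"]
  exposed-headers = []
  max-age = 1800 seconds
}
```

The loading code could then follow the same pattern as akka.http.impl.settings.ServerSettingsImpl, as the issue suggests.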
Have you been working on this? I think I could try to implement it. :)
@jiminhsieh I already started working on this, but got caught on something else.
I will clean the code a bit and push it to a feature branch this way we can discuss how to finish it.
I just thought that if you have not been working on this, I could try to help. :)
Released version 0.3.0 with this improvement.
|
2025-04-01T06:39:27.118691
| 2015-11-20T16:11:51
|
118072943
|
{
"authors": [
"JoeShep",
"edubkendo"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7938",
"repo": "lonelyplanet/rizzo-next",
"url": "https://github.com/lonelyplanet/rizzo-next/pull/221"
}
|
gharchive/pull-request
|
Destinations beta opt-out banner
Adds a banner to top of page that allows Destinations Next beta users to opt out and return to the original experience. Also has a 'close' button that will hide the banner and set a cookie to prevent showing the banner again.
|
2025-04-01T06:39:27.141414
| 2021-07-30T00:07:30
|
956291350
|
{
"authors": [
"joshimoo"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7943",
"repo": "longhorn/longhorn-engine",
"url": "https://github.com/longhorn/longhorn-engine/pull/646"
}
|
gharchive/pull-request
|
2818 further socket usage optimizations
Lazily initialize the replica clients for replica/sync service
This is to optimize the connection count for the longhorn-manager engine
monitor loop; in a single loop the monitor executes the engine binary 8
times, which leads to a total connection count of f() = 8e + 6 * 2r = 20
for a volume with 1 replica.
Since all the calls the monitor makes only require either the replica or
the sync client, we can reduce this further with this optimization to
g() = 8e + 6r = 14 per monitor loop.
The monitor loop executes 60/5 = 12 times per minute, so our total
connections per minute are reduced from 240 to 168. For a volume with 3
replicas we end up with 312 connections per minute.
No further optimization on this end is possible, the next required task
is removal of the direct engine binary invocations by the longhorn-manager.
This will reduce the connection counts to h() = 1e + 2r, which as one
can see leads to the desired behavior for each volume: 1 engine connection
and 2 connections per replica (replica / sync).
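The connection arithmetic above can be sanity-checked directly (assuming e = 1 engine and r replicas, as in the quoted formulas):

```python
# Connection-count formulas quoted in the PR description (e = 1 engine).
def before(r):
    # f() = 8e + 6 * 2r: 8 engine executions plus 2 clients per replica call
    return 8 * 1 + 6 * 2 * r

def after(r):
    # g() = 8e + 6r: lazily initialized replica clients halve the replica cost
    return 8 * 1 + 6 * r

LOOPS_PER_MINUTE = 60 // 5  # the monitor loop runs every 5 seconds

assert before(1) == 20
assert after(1) == 14
assert LOOPS_PER_MINUTE * before(1) == 240
assert LOOPS_PER_MINUTE * after(1) == 168
assert LOOPS_PER_MINUTE * after(3) == 312  # 3-replica volume
print("connection counts match the PR description")
```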
longhorn/longhorn#2818
Signed-off-by: Joshua Moody<EMAIL_ADDRESS>
Good to review, let me merge afterwards.
|
2025-04-01T06:39:27.146622
| 2024-12-06T00:06:55
|
2721724650
|
{
"authors": [
"c3y1huang"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7944",
"repo": "longhorn/longhorn-tests",
"url": "https://github.com/longhorn/longhorn-tests/pull/2178"
}
|
gharchive/pull-request
|
chore(robot): test pvc expand more than storage maximum size
Which issue(s) this PR fixes:
Issue longhorn/longhorn#6633
What this PR does / why we need it:
Add a robot test case to verify that a PVC cannot be expanded beyond the storage maximum size.
Special notes for your reviewer:
None
Additional documentation or context
None
Summary by CodeRabbit
Release Notes
New Features
Enhanced persistent volume claim creation with flexible configuration options.
New keyword for verifying persistent volume claim requested size over time.
New methods for volume size retrieval and maximum disk storage checks.
Added functionality for expanding workloads and persistent volume claims with improved size management.
Bug Fixes
Improved error handling and logging in backup and AWS operations.
Tests
Introduced a new test suite for validating persistent volume claim behavior, including checks for expansion limits.
@coderabbitai review
|
2025-04-01T06:39:27.154136
| 2020-05-05T04:37:33
|
612331611
|
{
"authors": [
"boknowswiki",
"meldafrawi"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7945",
"repo": "longhorn/longhorn",
"url": "https://github.com/longhorn/longhorn/issues/1293"
}
|
gharchive/issue
|
test: Add test case for multiple paths on the same filesystem in the node default disk config annotation
This test case covers the case where two or more disk paths in the node's default disk config annotation are on the same filesystem.
Test steps:
Make a clean condition: no disk, no tag, no default disk related annotation.
Create the default disk annotation with two disks that have different paths but are on the same filesystem.
Enable "Setting/General/Create Default Disk on Labeled Nodes".
Wait for the node update, and check that no disk and no tag are created.
Cleanup test environment: remove default disk related annotation
Verified the test case, the got the expected error message:
[longhorn-manager-t87z5] time="2020-05-05T01:56:25Z" level=warning msg=" [{"path":"/root","allowScheduling":false,"storageReserved":1024,"name":"root-name"},{"path":"/var/lib/longhorn/","allowScheduling":false,"storageReserved": 1024,"name":"default-name"}]"
[longhorn-manager-t87z5] time="2020-05-05T01:56:25Z" level=warning msg="Kubernetes node: invalid annotation node.longhorn.io/default-disks-config: config: the disk /var/lib/longhorn/ is the samefile system with /root, fsid 58fe937c58377e45"
Verified with local build longhorn-manager:
[longhorn-manager-t87z5] time="2020-05-05T16:56:21Z" level=warning msg="[{"path":"/root","allowScheduling":false,"storageReserved":1024,"name":"root-name"},{"path":"/var/lib/longhorn/","allowScheduling":false,"storageReserved": 1024,"name":"default-name"}]"
[longhorn-manager-t87z5] time="2020-05-05T16:56:21Z" level=warning msg="Kubernetes node: invalid annotation node.longhorn.io/default-disks-config: config: the disk /var/lib/longhorn/ is the samefile system with /root, fsid 58fe937c58377e45"
test_node_config_annotation_invalid passed for two consecutive runs. longhorn-tests/421 & longhorn-tests/422
|
2025-04-01T06:39:27.155743
| 2019-11-18T17:53:00
|
524522179
|
{
"authors": [
"yasker"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7946",
"repo": "longhorn/longhorn",
"url": "https://github.com/longhorn/longhorn/issues/898"
}
|
gharchive/issue
|
Update the CSI driver list
https://kubernetes-csi.github.io/docs/drivers.html
Docs PR at https://github.com/kubernetes-csi/docs/pull/228
PR merged. Done.
|
2025-04-01T06:39:27.173234
| 2024-03-28T04:40:44
|
2212335133
|
{
"authors": [
"coveralls",
"lonnieezell"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7947",
"repo": "lonnieezell/forum-example",
"url": "https://github.com/lonnieezell/forum-example/pull/317"
}
|
gharchive/pull-request
|
feat(app): Implement the trust level restrictions on starting a new discussion
Added policy for creating a discussion
check the policy around the Start a Discussion button
Pull Request Test Coverage Report for Build<PHONE_NUMBER>
Details
4 of 4 (100.0%) changed or added relevant lines in 1 file are covered.
No unchanged relevant lines lost coverage.
Overall coverage increased (+0.09%) to 79.15%
Totals
Change from base Build<PHONE_NUMBER>:
0.09%
Covered Lines:
2532
Relevant Lines:
3199
💛 - Coveralls
|
2025-04-01T06:39:27.199951
| 2021-11-15T09:15:19
|
1053395581
|
{
"authors": [
"neuhausj",
"titulebolide"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7948",
"repo": "lorcalhost/BTB-manager-telegram",
"url": "https://github.com/lorcalhost/BTB-manager-telegram/pull/149"
}
|
gharchive/pull-request
|
Delete paper_wallet when removing DB
Backup the paper_wallet + remove the file if it exists.
@all-contributors please add @neuhausj for code
|
2025-04-01T06:39:27.232741
| 2023-07-30T16:39:50
|
1827996815
|
{
"authors": [
"akiozihao",
"yanxiaoqi932"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7949",
"repo": "lotusdblabs/lotusdb",
"url": "https://github.com/lotusdblabs/lotusdb/pull/87"
}
|
gharchive/pull-request
|
Fix #85
The issues here are related to those discussed in the previous meeting. If deletion is triggered before synchronization with the index occurs, the deletion will be lost. Would it be better to directly add a deletion record, without checking whether the key exists in the index, in this case?
Yes, I think you are right. There are mainly the following two points:
The main problem is that when the data to be deleted is stored in the memTable before flush, it is neither in batch.pendingWrites nor in db.index, and the deletion will be lost; adding a deletion record directly, without checking, guarantees that the deletion will not be lost;
We do not need to read the bptree before deleting each entry.
|
2025-04-01T06:39:27.249888
| 2016-04-12T12:00:22
|
147728416
|
{
"authors": [
"crstffr",
"louischatriot",
"simon-p-r",
"zevero"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7950",
"repo": "louischatriot/nedb",
"url": "https://github.com/louischatriot/nedb/issues/406"
}
|
gharchive/issue
|
Bugreport
Hi
I have encountered a problem with nedb creating duplicate records using the same _id; this is a gist I have created to show the bug.
https://gist.github.com/simon-p-r/f043d8885115549a90d327a34c87cf1a
OS is Windows 8.1
Node version is 5.10.1
Thanks
Simon
Works fine on my machine, and I don't see any problem with the code ... What does your script output? Thanks for the nice bug report format in any case!
I too am seeing duplicate entries in the database when doing update/patch calls. Though my setup is quite a bit more complex (using FeathersJS), my results are nearly identical to Simon's.
OSX v10.10.5
Node v4.4.3
NeDB v1.8.0
As per the readme and numerous issues prior to this, this is the expected behavior, as nedb persistence uses an append-only file for performance purposes. Thanks for the nice bug report format though!
Thanks for the quick response. I see now that it is intentional behavior. Setting db.persistence.setAutocompactionInterval(interval) did the trick for me.
Cheers,
Chris
Normally there is no need for setAutocompactionInterval;
duplicates are intentionally ignored and compacted away at the next start.
|
2025-04-01T06:39:27.255877
| 2023-08-12T16:25:25
|
1848105240
|
{
"authors": [
"CommanderStorm",
"ale82x"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7951",
"repo": "louislam/uptime-kuma",
"url": "https://github.com/louislam/uptime-kuma/issues/3567"
}
|
gharchive/issue
|
sort monitor before DOWN then alphabetical order
⚠️ Please verify that this feature request has NOT been suggested before.
[X] I checked and didn't find similar feature request
🏷️ Feature Request Type
UI Feature
🔖 Feature description
For me, the interface is cleaner if a monitor that is down is displayed first.
I have several monitors, and sometimes a monitor stays down for days; it is cleaner if it is displayed first.
✔️ Solution
Create an option to change the sorting...
❓ Alternatives
An alternative is to show only DOWN monitors in the list...
📝 Additional Context
thank you
This issue is likely resolved in 1.23.0-beta.1 as https://github.com/louislam/uptime-kuma/pull/3312 and https://github.com/louislam/uptime-kuma/pull/3469 were merged.
Please refer to the beta at https://github.com/louislam/uptime-kuma/releases/tag/1.23.0-beta.1.
⇒ Could you close this issue as it is resolved or comment on why it is not? ^^
PS:
For the future, please do run a duplication search, as otherwise managing this number of issues is quite bad, see https://github.com/louislam/uptime-kuma/issues?q=is%3Aissue+sort+down ⇒ https://github.com/louislam/uptime-kuma/issues/1585 or other issues
sorry, i searched but probably not that deep...
thank you
@ale82x I think you forgot to close this issue, right?
Could you close this issue as it is resolved or comment on why it is not? ^^
|
2025-04-01T06:39:27.261918
| 2024-03-05T11:56:21
|
2169012919
|
{
"authors": [
"CommanderStorm",
"Vanieltk"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7952",
"repo": "louislam/uptime-kuma",
"url": "https://github.com/louislam/uptime-kuma/issues/4553"
}
|
gharchive/issue
|
monitoring giving socket hang up
⚠️ Please verify that this question has NOT been raised before.
[X] I checked and didn't find similar issue
🛡️ Security Policy
[X] I agree to have read this project Security Policy
📝 Describe your problem
Hello dear ones!
I have a problem with my kuma uptime, and I can't find a solution.
Just since yesterday, monitoring started giving this socket hang up error very frequently, across all monitors.
This error had been occurring for some time, but very sporadically and almost imperceptibly; yesterday it started notifying much more frequently and across all monitors, in an alternating way: it gives the warning, then the monitor is up, and then it notifies again.
I've done everything: changed the version, cleaned the database, stopped the container and brought it up again, but nothing resolved it. Note that there was no change of any kind on the environment side.
Please thank anyone who can help me.
📝 Error Message(s) or Log
🐻 Uptime-Kuma Version
1.23.11
💻 Operating System and Arch
Container uptime/kuma
🌐 Browser
Google Chrome
🖥️ Deployment Environment
Runtime: K8S, EKS 1.27
Database: sqlite
Filesystem used to store the database on: EBS GP2
number of monitors: 120
What are your database size and retention set to? (Just as a precaution; not likely related.)
Could you have a look at https://github.com/louislam/uptime-kuma/wiki/Troubleshooting and see if you can reproduce this in a shell to give more context?
I managed to solve the problem, it really was something within our network, which we ended up discovering.
|
2025-04-01T06:39:27.265940
| 2023-07-28T12:26:03
|
1826336801
|
{
"authors": [
"Zartexo",
"louisnw01"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7953",
"repo": "louisnw01/lightweight-charts-python",
"url": "https://github.com/louisnw01/lightweight-charts-python/issues/52"
}
|
gharchive/issue
|
Callbacks: accessing subcharts and setting switcher option
Hello,
I'm trying to implement show_async to have a more flexible chart. I am using the Callbacks example.
I'm wondering about two things.
1.) Accessing subcharts
When using the Callbacks example I want to create a chart including subcharts. I am struggling to access the subcharts (including their lines) after creating them in the main function. What is the best way to access them?
2.) Setting the switcher widget current option programmatically
In the example the topbar text is set
self.chart.topbar['symbol'].set(searched_string)
I am wondering if I can do the same for a switcher widget? I couldn't find any similiar way to change the current option programmatically
BR
Hey
In your API callback class, the attribute chart will be dynamically updated to the chart or subchart that was responsible for emitting the callback.
There is no way to programatically change a switcher after the chart has been loaded, however you can set the inital value of the switcher when defining it.
Thank you @louisnw01 for the reply. It makes sense to have it like that.
In my case I am using the switcher widget to move my "chart area" one day further. It's not working as intended because it can only be triggered once.
Ah I see. So I think you want to be able to click a switcher more than once?
Perhaps a seperate 'button' widget may be of benefit.
Yes, basicly I want to use it as a button. Another widget of that kind would be perfect.
|
2025-04-01T06:39:27.286777
| 2019-11-22T10:38:40
|
527118951
|
{
"authors": [
"lovell",
"templth"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7954",
"repo": "lovell/sharp-libvips",
"url": "https://github.com/lovell/sharp-libvips/issues/29"
}
|
gharchive/issue
|
Problem to use sharp / libvips on Docker / Dokku
Hello,
I'm trying to use the sharp module within a nodejs application and it works fine locally on Ubuntu. Now I want to deploy this application on our server as a dokku application. This application relies on a Dockerfile that initializes elements:
FROM node:8.9.0
# Create app directory
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
# Install app dependencies
COPY package.json /usr/src/app/
RUN npm install
# Bundle app source
COPY . /usr/src/app
EXPOSE 5000
CMD [ "npm", "start" ]
During the execution of the npm install task, I can see traces regarding sharp and things seem successful:
><EMAIL_ADDRESS>install /usr/src/app/node_modules/sharp
> (node install/libvips && node install/dll-copy && prebuild-install) || (node-gyp rebuild && node install/dll-copy)
info sharp Downloading https://github.com/lovell/sharp-libvips/releases/download/v8.8.1/libvips-8.8.1-linux-x64.tar.gz
However, when I'm trying to use the endpoint relying on sharp, the application crashes.
I guess that some required libraries are missing because it's a minimal distribution within the Docker image, but I can't find what to add.
Thanks for your help!
Thierry
Is /usr/src/app/node_modules being overwritten by the COPY . /usr/src/app command?
Thanks very much for your answer!
For testing, I changed the Dockerfil with this:
FROM node:8.9.0
# Create app directory
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
# Install app dependencies
COPY . /usr/src/app/
RUN npm install
EXPOSE 5000
CMD [ "npm", "start" ]
and I have the same problem!
I can see that the file libvips-8.8.1-linux-x64.tar.gz is still downloaded...
Does upgrading the version of Node.js help (v8.9.0 is over 2 years old)? What is the output of RUN npm install --verbose within the container?
This works with Node 10!
Thanks very much for your help!
|
2025-04-01T06:39:27.292547
| 2021-05-12T00:13:15
|
889258293
|
{
"authors": [
"cyrfer",
"lovell"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7955",
"repo": "lovell/sharp",
"url": "https://github.com/lovell/sharp/issues/2711"
}
|
gharchive/issue
|
optimal way to omit alpha channel in output, SVG to PNG
What are you trying to achieve?
I want to convert SVG to PNG without an alpha channel in the result, as well as get an optimized rendering of the text.
I am successfully converting SVG to PNG files with code shown below. The problem is the result contains 4 channels, RGBA, rather than what I expect, which is 3 channels RGB.
The SVG will always be 2 tones, white text on black background.
What is the best way to omit the alpha channel in the output?
Can I describe the SVG input in a way that avoids the transparency?
Is .resize() the best way to render desired sizes? What is the impact on the text aliasing?
Have you searched for similar questions?
Yes, it seems I can use operations like .removeAlpha() to strip the alpha channel from the output. Can this be optimized more?
Why is alpha added? Is the text causing the transparency?
Could I create a blank canvas of desired size and channels, and then composite the SVG onto it?
Are you able to provide a minimal, standalone code sample that demonstrates this question?
const fs = require('fs')
const sharp = require('sharp')

const metadata = {
  format: 'png',
  width: 1920,
  height: 1080,
}

const source = "<svg id=\"preview\" version=\"1.1\" baseProfile=\"full\" viewBox=\"0 0 1920 1080\" xmlns=\"http://www.w3.org/2000/svg\" xmlns:xlink=\"http://www.w3.org/1999/xlink\"> <style type=\"text/css\"> text { font-family: \"Roboto\", sans-serif } </style> <rect width=\"100%\" height=\"100%\" fill=\"black\" /> <text x=\"930\" y=\"370\" font-size=\"20\" text-anchor=\"end\" fill=\"white\">HELLO</text> <text x=\"990\" y=\"370\" font-size=\"20\" text-anchor=\"start\" fill=\"white\">WORLD</text> </svg>"

const bufferPromise = sharp(Buffer.from(source))
  //.removeAlpha()
  .resize(metadata.width, metadata.height)
  .toFormat(metadata.format)
  .toBuffer()

bufferPromise.then(buffer => {
  fs.writeFileSync(`./out.${metadata.format}`, buffer)
})
Are you able to provide a sample image that helps explain the question?
See attached.
I check the channels using magick identify -verbose out.png
SVG rendering is via librsvg; the result is always 4-channel RGBA, and you are correct to use removeAlpha to reduce this to RGB output.
(32bpp RGBA is usually considered more optimal than 24bpp RGB for image processing as it can allow for the use of memory-aligned SIMD instructions.)
In terms of the use of resize vs density / viewBox, it depends on the image so you'll probably need to experiment for a given set of inputs.
I hope this information helped. Please feel free to re-open with more details if further assistance is required.
|
2025-04-01T06:39:27.302414
| 2021-05-13T16:30:45
|
891187568
|
{
"authors": [
"MaxMls",
"lovell"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7956",
"repo": "lovell/sharp",
"url": "https://github.com/lovell/sharp/issues/2714"
}
|
gharchive/issue
|
After resizing animation, broken image returns
Are you using the latest version? Is the version currently in use as reported by npm ls sharp the same as the latest version as reported by npm view sharp dist-tags.latest?
yes
What is the expected behaviour?
correct output
Are you able to provide a minimal, standalone code sample, without other dependencies, that demonstrates this problem?
https://codesandbox.io/s/2-drk7e?file=/src/server.js
Are you able to provide a sample image that helps explain the problem?
any animated image (gif, webp, apng)
What is the output of running npx envinfo --binaries --system?
System:
OS: Linux 5.4 Debian GNU/Linux 10 (buster) 10 (buster)
CPU: (16) x64 Intel(R) Core(TM) i9-9900K CPU @ 3.60GHz
Memory: 7.61 GB / 62.73 GB
Container: Yes
Shell: 5.0.3 - /bin/bash
Binaries:
Node: 14.16.1 - ~/.nvm/versions/node/v14.16.1/bin/node
Yarn: 1.22.10 - ~/.nvm/versions/node/v14.16.1/bin/yarn
npm: 6.14.12 - ~/.nvm/versions/node/v14.16.1/bin/npm
As you've seen, you'll need to update the output with the new page height when resizing a multi-page image.
.webp({ pageHeight: ... })
https://sharp.pixelplumbing.com/api-output#webp
Please see #2275 for a future possible enhancement that relates to this.
@lovell, Thank you for your reply. I added this parameter, but it doesn't work in all cases. There are 2 examples in the sandbox where everything works and 2 examples where the image breaks. I am unable to establish the reason for this. The only thing I noticed is that the size of the output buffer is very different from the working example.
The output looks as-expected to me. Example 1 is using fit=fill, which ignores aspect ratio, and is reducing the width but keeping the height the same.
https://sharp.pixelplumbing.com/api-resize
@lovell, I have simplified the example for clarity. https://codesandbox.io/s/2-drk7e?file=/src/sharp/examples.js
You can see that at values of height from 47 pixels to 56 pixels, the image breaks, but before and after all the images are normal.
I used one code for all pictures.
const width = 100
const s = sharp(imageBuf, { animated: true });
const metadata = await s.metadata();
const height = pageHeight * metadata.pages; // calculate the sum of heights
const result = await s
.resize({ width, height, fit: "fill" })
.webp({ pageHeight })
.toBuffer();
Is this as it should be? Is it my mistake or a bug in the library?
How can I resize the animated picture to {height: 56, width: 100}, to keep the original aspect ratio?
Please can you try setting the fastShrinkOnLoad option to false
.resize({ width, height, fit: "fill", fastShrinkOnLoad: false })
https://sharp.pixelplumbing.com/api-resize
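For the aspect-ratio question above, the dimension bookkeeping can be sketched as follows (the helper name and the rounding choice are illustrative assumptions, not part of sharp's API):

```javascript
// Hypothetical helper: given the source frame size, the number of
// frames (pages) and a desired output width, compute the per-page
// height that preserves the aspect ratio and the total canvas height.
function animatedResizeDims(srcWidth, srcPageHeight, pages, targetWidth) {
  const pageHeight = Math.round(srcPageHeight * (targetWidth / srcWidth));
  return { pageHeight, totalHeight: pageHeight * pages };
}

// e.g. a 200×112 source with 10 frames, resized to width 100:
const dims = animatedResizeDims(200, 112, 10, 100);
// dims.pageHeight === 56, dims.totalHeight === 560
```

The totalHeight would then be passed to .resize() and pageHeight to .webp({ pageHeight }), together with fastShrinkOnLoad: false as suggested.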
@lovell, It works, thanks
Thanks for confirming. Commit https://github.com/lovell/sharp/commit/5bd5e5052ad53c67c89c930c4eacd1e5fa916280 makes this the default behaviour for animated WebP images. I'll re-open this issue until it's released.
v0.28.3 now available.
|
2025-04-01T06:39:27.306355
| 2024-02-13T01:23:20
|
2131298124
|
{
"authors": [
"GcodeG01",
"lovell"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7957",
"repo": "lovell/sharp",
"url": "https://github.com/lovell/sharp/issues/3991"
}
|
gharchive/issue
|
Why does Tile skip folder numbers the higher the zoom?
I'm trying to use Sharp to tile an uploaded image for leaflet maps. I'm pretty close, but I found that leaflet is trying to grab, say, image URL 4/0/0, and it's a failed GET request. Taking a look at the tiled images sharp created, I found that zoom folder 4 skips the y coordinate folders 0 and 1 and starts from 2. Zoom folder 5 skips 0 to 3 and starts from 4. Besides the GET request errors I'm getting, leaflet looks to work just fine. I'm just wondering if it's intentional? If so, I would like to know why, and if I can do anything about it to just get rid of the request errors?
await sharp(filePath)
.composite([{ input: Buffer.from('<svg><rect x="0" y="0" width="256" height="256" style="fill:rgb(0,0,0);fill-opacity:0.0;"></rect></svg>') }]) // add transparent background
.png() // convert to png so transparency is preserved
.tile({
size: 256,
background: { r: 255, g: 255, b: 255, alpha: 0 }, // channel values max out at 255
center: true,
basename: 'tiles',
container: 'fs',
layout: 'google'
})
.toFile(`uploads/${key}/map.png`)
Here's an example of the folders sharp creates
0/
1/
2/
3/
4/
  2/   (y folders 0 and 1 are missing)
  3/
  ...
5/
  4/   (y folders 0 to 3 are missing)
  5/
  ...
Looking more into it, it's because sharp automatically removes blank tiles, and that is why the first few (and, I now realize, some of the last) Y coordinate folders are not created (doubly so because I have the tile centered). I store my tiled files in AWS S3, so I'm not sure if it's better to just have GET errors than to store a bunch of empty tiles in S3, but that's a problem for me.
The "google" layout changes the default skipBlanks from -1 to 5 to match the behaviour of libvips.
https://www.libvips.org/API/current/VipsForeignSave.html#vips-dzsave
However this was undocumented in sharp, sorry, and I have just updated this via commit https://github.com/lovell/sharp/commit/bc95531f2dcd4e6eb2b207016a390fb066b4461a
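For anyone who would rather keep the blank tiles than handle the failed requests, a minimal sketch of the relevant options (assuming the tile API as documented):

```javascript
// skipBlanks: -1 disables blank-tile removal, so every x/y folder and
// tile is written and the map client never requests a missing file.
// (The "google" layout otherwise defaults skipBlanks to 5.)
const tileOptions = {
  size: 256,
  layout: 'google',
  skipBlanks: -1,
};
```

These options would be passed to .tile(tileOptions) in place of the options shown above; the trade-off, as noted, is storing many empty tiles.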
Okay, thanks!
|
2025-04-01T06:39:27.451735
| 2021-04-16T01:28:24
|
859376251
|
{
"authors": [
"msfschaffner"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7958",
"repo": "lowRISC/ibex",
"url": "https://github.com/lowRISC/ibex/pull/1341"
}
|
gharchive/pull-request
|
[lockstep] Introduce optimization barrier around lockstep Ibex
Certain synthesis tools like DC are very smart at optimizing away redundant logic.
Hence, we have to insert an optimization barrier at the IOs of the lockstep Ibex.
This is achieved by manually buffering each bit using prim_buf.
Our Xilinx and DC synthesis flows make sure that these buffers cannot be optimized
away using keep attributes (Vivado) and size_only constraints (DC).
Signed-off-by: Michael Schaffner<EMAIL_ADDRESS>
Could you integrate this and vendor it back into the OT repo? Thanks!
|
2025-04-01T06:39:27.453048
| 2022-02-25T06:33:49
|
1150095426
|
{
"authors": [
"weicaiyang"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7959",
"repo": "lowRISC/opentitan",
"url": "https://github.com/lowRISC/opentitan/pull/11103"
}
|
gharchive/pull-request
|
[dv/doc] Document security verification for memory integrity
I missed this one last time. Now add it.
Signed-off-by: Weicai Yang<EMAIL_ADDRESS>
@rswarbrick thank you so much for the grammar corrections. Fixed all of them.
|
2025-04-01T06:39:27.455854
| 2022-03-29T16:23:59
|
1185117495
|
{
"authors": [
"tjaychen",
"vogelpi"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7960",
"repo": "lowRISC/opentitan",
"url": "https://github.com/lowRISC/opentitan/pull/11771"
}
|
gharchive/pull-request
|
[aes] Fix clearing of data input registers without inferring combo loop
Previously, the write enable for the data input registers was set for two clock cycles when clearing the registers. This caused the data_in_qe_i signals used for status tracking to be high during the first clock cycle when back in IDLE. As a result, the AES unit would immediately start when running in automatic operation.
This is a second version of the fix that doesn't infer a combo loop by splitting the clearing operation into two distinct states: First CLEAR_I clears input registers such as Initial Key, IV and input data registers. Then CLEAR_CO waits for the cipher core, clears the trigger bits and, if selected, also clears the output data registers.
This is related to lowRISC/OpenTitan#11431 and lowRISC/OpenTitan#11758.
This fixes #11431.
@tjaychen would you mind running AscentLint over this? Locally it doesn't seem to infer combo loops on the FPGA anymore, but I would like to be 100% sure before merging.
sorry a bit belated. I pulled to head of tree and did a run, did not see the issue anymore.
sorry a bit belated. I pulled to head of tree and did a run, did not see the issue anymore.
Thanks @tjaychen for taking a look and the feedback!
|
2025-04-01T06:39:27.458452
| 2022-06-17T19:27:10
|
1275386435
|
{
"authors": [
"a-will",
"tjaychen"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7961",
"repo": "lowRISC/opentitan",
"url": "https://github.com/lowRISC/opentitan/pull/13287"
}
|
gharchive/pull-request
|
[usbdev/dif] Move DIF to S2
[RTL] Change pkt_sent interrupt to a ganged-status type instead of a pulsed-event type.
Adjust the TX status function to break up checking for sent packets and clearing status, since the function that checks the status dispatches handling to other, endpoint-specific functions.
Rebase the USB stack on top of the DIFs.
Remove the control endpoint's expression of support for remote wake (not actually supported by the IP).
Move remaining tests over to the DIFs.
should we merge this one first? or do you prefer to do it as part of #13371?
should we merge this one first? or do you prefer to do it as part of #13371?
For me, the ordering doesn't matter. I pulled the RTL change into #13371 in case the software review has a substantially longer delay than the hardware review.
|
2025-04-01T06:39:27.459980
| 2022-11-03T00:11:56
|
1433939077
|
{
"authors": [
"sriyerg",
"timothytrippel"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7962",
"repo": "lowRISC/opentitan",
"url": "https://github.com/lowRISC/opentitan/pull/15959"
}
|
gharchive/pull-request
|
[chip testplan] Fix chip level testplan
Fix the mis-mapped chip_sw_spi_device_tpm test.
Expand on example tests.
Signed-off-by: Srikrishna Iyer<EMAIL_ADDRESS>
Thanks for cleaning this up @sriyerg !
|
2025-04-01T06:39:27.466780
| 2020-12-03T01:36:32
|
755740246
|
{
"authors": [
"cindychip",
"msfschaffner"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7963",
"repo": "lowRISC/opentitan",
"url": "https://github.com/lowRISC/opentitan/pull/4386"
}
|
gharchive/pull-request
|
[lc_ctrl/otp_ctrl/doc] Documentation updates
Updates include:
LC:
[x] Corrections to the programmers guide due to the multibit mutex
[x] Update block diagram
[x] Document signals and interfaces
[x] Update documentation of revised life cycle access control signals (#4504)
[x] Add main FSM diagram
- [ ] Add system integration diagram (will add that in a subsequent PR)
OTP:
[x] Explicit documentation of access granularity of all OTP items
[x] DAI FSM diagram update
[x] Blockdiagram update
Hey Michael, one small thing could you also update the LC spec here: https://docs.opentitan.org/hw/ip/lc_ctrl/doc/index.html#programmers-guide
Point 3 says: "Claim exclusive access to the transition interface by writing 1 to the CLAIM_TRANSITION_IF register,"
It should be writing 'hA5 right? Do you mind updating that also?
Hey Michael, one small thing could you also update the LC spec here: https://docs.opentitan.org/hw/ip/lc_ctrl/doc/index.html#programmers-guide
Point 3 says: "Claim exclusive access to the transition interface by writing 1 to the CLAIM_TRANSITION_IF register,"
It should be writing 'hA5 right? Do you mind updating that also?
Yeah that's right, thanks for catching that. I'll amend this part.
Ok this documentation update is mostly final now.
There is another system integration diagram for life cycle which is not quite finished yet.
I will add that in a subsequent PR.
Thanks a lot for the detailed review, @tjaychen.
Amended and rebased.
|
2025-04-01T06:39:27.468458
| 2021-10-08T22:04:10
|
1021500903
|
{
"authors": [
"msfschaffner",
"tjaychen"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7964",
"repo": "lowRISC/opentitan",
"url": "https://github.com/lowRISC/opentitan/pull/8589"
}
|
gharchive/pull-request
|
[reggen] Fixes and mubi introduction
This primarily ensures reset values are consistent
Fixes #8521
Fixes #7566
Tracking issue for mubi conversions: https://github.com/lowRISC/opentitan/issues/8347
|
2025-04-01T06:39:27.480275
| 2024-06-21T01:43:42
|
2365536296
|
{
"authors": [
"inhogog2",
"vincentmli"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7965",
"repo": "loxilb-io/loxilb",
"url": "https://github.com/loxilb-io/loxilb/issues/706"
}
|
gharchive/issue
|
Add path flag to loxicmd save to non default /etc/loxilb directory
Is your feature request related to a problem? Please describe.
The BPFire UI is able to configure loxilb lb, fw, and ip, but unable to save the configuration, and if loxilb restarts, the configuration made from the UI is lost.
When the BPFire web UI invokes loxicmd save -a, it fails to save to /etc/loxilb because the UI does not have permission to write to the /etc/loxilb directory.
Describe the solution you'd like
Add a path flag to loxicmd save so it can save to a non-default /etc/loxilb directory, letting non-root users such as the UI user execute loxicmd save -a -p /var/ipfire/loxilb to save the config to the /var/ipfire/loxilb directory. It would also be nice if a loxilb restart could restore the config from a non-default directory such as /var/ipfire/loxilb.
Describe alternatives you've considered
Additional context
Hi @vincentmli,
This issue has been updated.
You can specify the saving path using the -c option in the loxicmd, and to load it from that path, you will use the --config-path option when running loxilb. Please check it out.
loxicmd save -a -c /root/
IP Configuration saved in ipconfig_2024-06-28_06:43:08.txt
/usr/bin/bash -c cp -R lbconfig_2024-06-28_06:43:08.txt /root/lbconfig.txt
.....
./loxilb --config-path /root/
@inhogog2 thanks, I will test in BPFire and let you know the result
@inhogog2 I tested the feature and it works perfectly from loxicmd command line, but I still run into issue when calling loxicmd from WebUI Perl CGI program with user nobody https://github.com/vincentmli/BPFire/issues/30, this is not related to loxicmd though, something in the OS user permission level
|
2025-04-01T06:39:27.552761
| 2018-02-12T01:26:41
|
296246467
|
{
"authors": [
"Silentbob101",
"dessmith",
"lprhodes"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7966",
"repo": "lprhodes/homebridge-broadlink-rm",
"url": "https://github.com/lprhodes/homebridge-broadlink-rm/issues/246"
}
|
gharchive/issue
|
Unable to install due to dependency<EMAIL_ADDRESS>
Hi,
If I run npm install -g homebridge-broadlink-rm / with or without sudo I get
npm ERR! code ETARGET
npm ERR! notarget No matching version found for<EMAIL_ADDRESS>npm ERR! notarget In most cases you or one of your dependencies are requesting
npm ERR! notarget a package version that doesn't exist.
npm ERR! notarget
npm ERR! notarget It was specified as a dependency of 'homebridge-broadlink-rm'
If I check https://www.npmjs.com/package/broadlinkjs-rm it only shows version 0.2.2 is there some delay, or some command I can run to update the repo so that it can download 0.2.4. Even if I install broadlinkjs-rm directly with npm it pulls 0.2.2
Thanks
Having same issue here
Sorry, published it.
|
2025-04-01T06:39:27.558648
| 2024-03-09T01:19:55
|
2176988706
|
{
"authors": [
"david-dick",
"lraj22"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7967",
"repo": "lraj22/browserfeatcl",
"url": "https://github.com/lraj22/browserfeatcl/pull/1"
}
|
gharchive/pull-request
|
Removing alerts
It would be nice to be able to display errors when popups are disabled. What tool are you using to minify your javascript? It would be good to see some documentation for that.
Your pull request seems good. I am busy currently and still need to do a bit of review, but this will hopefully be merged soon. Thank you for noticing this!
@david-dick I've made one change, which is to use var instead of let. The intent is to work on as many browser versions as possible. Regarding the minifier tool, I used https://www.toptal.com/developers/javascript-minifier . Nothing special about that specific tool, it's just the first one that showed up.
Let me know if this is good to commit. Thanks!
works for me!
Hey @david-dick, I appreciate the PR a lot, you're the first person to ever write any PR or issue on any of my repos... thank you!! As for the PR itself, it's been merged. 👍
|
2025-04-01T06:39:27.562769
| 2018-10-10T08:11:18
|
368542552
|
{
"authors": [
"jjmartres",
"lrills",
"walkafwalka"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7968",
"repo": "lrills/helm-unittest",
"url": "https://github.com/lrills/helm-unittest/issues/60"
}
|
gharchive/issue
|
Add sha256sum for release file
Please provide SHA256 checksums for your releases
Hey @jjmartres, I am trying to revive this project since the maintainer seems to be gone. I also tried contacting him separately, but no response. If he comes back, I plan to submit any changes I make back to this one.
For your particular request, I added checksums with this MR here and release of 0.2.0.
Close due to archiving repository.
Sorry for being absent so long. I've been working on another project and don't have time for helm-unittest.
Please consider other working forks like quintush/helm-unittest.
|
2025-04-01T06:39:27.580186
| 2021-07-05T08:56:47
|
936858008
|
{
"authors": [
"N0W0RK",
"ge65cer"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7969",
"repo": "ls1intum/Artemis",
"url": "https://github.com/ls1intum/Artemis/issues/3680"
}
|
gharchive/issue
|
Remove page scroll on save
Is your feature request related to a problem?
When working on modelling exercises, my progress gets periodically saved. The saving is indicated by a green bar on top of the modelling window. When the bar is shown, the page moves down by the height of the bar. This moves the modelling canvas and leads to frustration when connecting elements.
Describe the solution you'd like
The saving could either be indicated by an element that does not increase page space above the modelling window or the space for the save bar is already allocated and the bar is just made visible.
Describe alternatives you've considered
No response
Additional context
This issue has been addressed.
|
2025-04-01T06:39:27.586570
| 2020-07-17T20:53:46
|
659631926
|
{
"authors": [
"TobiasPr",
"alexmardale",
"krusche"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7970",
"repo": "ls1intum/Artemis",
"url": "https://github.com/ls1intum/Artemis/pull/1918"
}
|
gharchive/pull-request
|
Fix Assessment of modeling exercises without any submission
Checklist
[x] I tested all changes and all related features with different users (student, tutor, instructor, admin) locally
[x] Client: I added multiple screenshots/screencasts of my UI changes
Motivation and Context
Assessment of empty UML-Models should work
Description
Checked the places in the modeling-assessment component where it is assumed that there must be a model from apollon. In those places I added checks to prevent errors
Steps for Testing
Create an exam with at least 2 modeling exercises
Generate Student Exams and participate
Exercise 1: do not submit any solution for the exercise
Exercise 2: submit a model, then delete every element and submit again (empty submission should be counted)
After the exam has ended, switch to tutor account and go to the assessment of the modeling exercise
do an assessment with the browser console open -> no error should be displayed (the error you can see in the screenshot happens because there isn't any other submission for this exercise which can be assessed)
Screenshots
I was also able to assess without any errors.
When assessing the exercise for which I did not submit, I am actually informed that no model was found:
(but again, assessing this worked without any issues)
The assessment works without any errors. Each non-submitted exercise turns into a submission, though. Is this intended? (e.g. I did not submit anything for the first exercise, but was still able to assess it as if it were an empty submission)
Yes this is intended at the moment, because when Artemis prepares the exercises it automatically creates an empty submission
Please approve if everything else is working correctly
|
2025-04-01T06:39:27.590976
| 2019-07-07T12:18:57
|
464946768
|
{
"authors": [
"krusche",
"sleiss"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7971",
"repo": "ls1intum/Artemis",
"url": "https://github.com/ls1intum/Artemis/pull/625"
}
|
gharchive/pull-request
|
Feature/hibernate query cache
Checklist
[ ] I tested the changes and all related features on the test server https://artemistest.ase.in.tum.de.
[ ] I documented my source code using the JavaDoc / JSDoc style.
[ ] I added integration test cases for the server (Spring) related to the features
[ ] I added integration test cases for the client (Jest) related to the features
[ ] I added screenshots/screencast of my UI changes
[ ] I translated all the newly inserted strings
Motivation and Context
Description
Steps for Testing
Log in to ArTEMiS
Navigate to Course Administration
...
Screenshots
There is not much going here, so I'll close this to keep the open pull requests small.
Feel free to reopen it, when there is additional progress
|
2025-04-01T06:39:27.615123
| 2023-09-27T19:42:14
|
1916204675
|
{
"authors": [
"jonathansick"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7972",
"repo": "lsst-sqre/technote",
"url": "https://github.com/lsst-sqre/technote/pull/18"
}
|
gharchive/pull-request
|
DM-40926: Make the left and right sidebars sticky
Make the content of both sidebar sticky when visible on page and also allow them to independently scroll if longer than the viewport. This is useful for accessing the document outline, for example.
https://github.com/lsst-sqre/technote/assets/349384/51fcf423-c0fd-4a65-bdac-e5d5f868c01d
|
2025-04-01T06:39:27.682825
| 2020-04-10T01:51:05
|
597648413
|
{
"authors": [
"erickzanardo",
"jamie1192",
"luanpotter",
"quangquy87",
"searchy2"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7981",
"repo": "luanpotter/audioplayers",
"url": "https://github.com/luanpotter/audioplayers/issues/492"
}
|
gharchive/issue
|
Hide Notification on ios
How can I hide the notification on iOS when calling from Flutter?
What do you mean by hide notification? No notifications will be shown unless you ask for one.
Closing due to inactivity. If there is more info that you can add to the issue, comment here and we can reopen it.
@erickzanardo not OP but I was subbed to this issue as I was looking for a solution to this myself.
I think OP was referring to clearing the media player notification in the notification center programmatically, as it is retained there long after it is actually used within your app (e.g. if you only use the audio player on a specific page).
@luanpotter Can this issue be reopened? Currently, none of audioplayers' notificationService methods dismiss the iOS headless service and setNotification notification.
The only way to dismiss the notification is to write native code.
There should be a way to dismiss the notification from the player itself.
@searchy2 this issue doesn't have much info on it. I think it would be better if a new issue were opened; would you mind opening a new one so we can track this more easily?
Sure, I'll open a new issue.
Created a new issue: https://github.com/luanpotter/audioplayers/issues/897
|
2025-04-01T06:39:27.726334
| 2024-06-21T17:57:37
|
2367005227
|
{
"authors": [
"AtobaAzul",
"luccaPossamai"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7984",
"repo": "luccaPossamai/optical",
"url": "https://github.com/luccaPossamai/optical/issues/7"
}
|
gharchive/issue
|
Beaming the bottom of a splitter crash
If you shine a laser under (and probably above) a Polarizing Beam Splitter, the game crashes.
latest.log
crash-2024-06-21_14.55.04-server.txt
Steps to reproduce:
https://github.com/luccaPossamai/optical/assets/94794129/92eb4e46-3cae-4aeb-a49a-49d3b941dcc2
Yup, I uploaded to curseforge the version before testing. In newer versions this bug would not exist.
|
2025-04-01T06:39:27.742505
| 2023-12-18T16:38:19
|
2047040646
|
{
"authors": [
"alexpirine",
"pilcrowOnPaper"
],
"license": "0BSD",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7985",
"repo": "lucia-auth/lucia",
"url": "https://github.com/lucia-auth/lucia/pull/1307"
}
|
gharchive/pull-request
|
Add prisma adapter installation instructions
In the Getting started in Next.js App Router documentation page, it might be useful to show how to install the Prisma adapter before using it in the code, so that the onboarded user doesn't wonder if they missed a step.
Might be better if we just mention in the paragraph that the adapters are provided as a separate package?
Might be better if we just mention in the paragraph that the adapters are provided as a separate package?
I think what's weird on this page is that there is a code snippet that is provided, and it doesn't work because of the dependency issue. But if you need to read the full adapter documentation to set it up, it's not clear why the code is provided in the first place.
If it's just for illustrative purposes, maybe we should say that the code should "look like this", that the exact code depends on the adapter used, and then provide the list of adapters to choose from.
I am not very good at writing, so I'm not even sure how to properly fix it. But I'm quite sure that it's a bit misleading to have a piece of code that doesn't work, and if it's not supposed to work, it's not clear why it's presented in the first place.
Hey, I appreciate the PR but we're merging v3 in a few hours so I'll be closing this for now to clean up the repo. You can create a new PR against the v2 branch if you'd like since we're planning to support it for a few more months.
|
2025-04-01T06:39:27.756702
| 2024-05-07T15:39:47
|
2283679742
|
{
"authors": [
"CayllahuaPedro",
"karsa-mistmere"
],
"license": "ISC",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7986",
"repo": "lucide-icons/lucide",
"url": "https://github.com/lucide-icons/lucide/issues/2138"
}
|
gharchive/issue
|
Module '"lucide-react"' has no exported member 'Notebook'.ts(2305)
Package
[ ] lucide
[ ] lucide-angular
[ ] lucide-flutter
[ ] lucide-preact
[X] lucide-react
[ ] lucide-react-native
[ ] lucide-solid
[ ] lucide-svelte
[ ] lucide-vue
[ ] lucide-vue-next
[ ] Figma plugin
[ ] source/main
[ ] other/not relevant
Version
0.378.0
Can you reproduce this in the latest version?
[X] Yes
[ ] No
Browser
[X] Chrome/Chromium
[ ] Firefox
[ ] Safari
[ ] Edge
[ ] iOS Safari
[ ] Opera
[ ] Other/not relevant
Operating system
[X] Windows
[ ] Linux
[ ] macOS
[ ] ChromeOS
[ ] iOS
[ ] Android
[ ] Other/not relevant
Description
import {
ActivitySquareIcon,
FileUpIcon,
LayoutDashboardIcon,
BadgeDollarSign,
Contact,
Users,
LayoutPanelLeft,
Notebook,
} from "lucide-react";
when I tried to import some icons from the lucide-react package
I got the following error: Module '"lucide-react"' has no exported member 'Notebook'.ts(2305). Needless to say, I looked in forums for similar problems; specifically, I tried downgrading to version 0.263.0, but that didn't work.
Steps to reproduce
1. Install the latest version
2. Import the 'Notebook' icon
3. Get the error
Checklist
[X] I have searched if someone has submitted a similar issue before and there weren't any. (Please make sure to also search closed issues, as this issue might already have been resolved.)
I cannot reproduce this issue, can you make sure an earlier version isn't stuck in some kind of cache and you're actually using v0.378.0, which definitely has this export?
(.next cache, node_modules etc)
I have version 0.378 in both package.json and pnpm-lock.yaml. I tried looking into the cache but VS Code won't read it.
Can you check if node_modules/lucide-react/dist/esm/notebook.js exists?
As for any cache, you shouldn't be looking into it, but clearing it (since you haven't provided any extra information about your frameworks, I cannot help you with how that should be done, but if it's next.js for example, you may try deleting the entire .next folder, same goes for node_modules, especially if the export above isn't there).
|
2025-04-01T06:39:27.765363
| 2024-11-01T01:36:37
|
2628126118
|
{
"authors": [
"gronxb"
],
"license": "ISC",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7987",
"repo": "lucide-icons/lucide",
"url": "https://github.com/lucide-icons/lucide/issues/2574"
}
|
gharchive/issue
|
SolidStart: createSignal and onMount, etc not working
Package
[ ] lucide
[ ] lucide-angular
[ ] lucide-flutter
[ ] lucide-preact
[ ] lucide-react
[ ] lucide-react-native
[X] lucide-solid
[ ] lucide-svelte
[ ] lucide-vue
[ ] lucide-vue-next
[ ] Figma plugin
[ ] source/main
[ ] other/not relevant
Version
0.454.0
Can you reproduce this in the latest version?
[X] Yes
[ ] No
Browser
[ ] Chrome/Chromium
[ ] Firefox
[ ] Safari
[ ] Edge
[ ] iOS Safari
[ ] Opera
[X] Arc
[ ] Other/not relevant
Operating system
[ ] Windows
[ ] Linux
[X] macOS
[ ] ChromeOS
[ ] iOS
[ ] Android
[ ] Other/not relevant
Description
In SolidStart, importing and using an icon (e.g., AArrowDown from lucide-solid) causes onMount, createSignal, etc not to work.
Steps to reproduce
import { createSignal, onMount } from "solid-js"; // this import was missing
import { AArrowDown } from "lucide-solid";

const App = () => {
  const [count, setCount] = createSignal(0); // not working
  onMount(() => {
    console.log("Hi"); // not working
  });
  return (
    <div>
      <AArrowDown />
      <button onClick={() => {
        setCount(count() + 1); // not working
      }}>increase</button>
    </div>
  );
};
Checklist
[X] I have searched if someone has submitted a similar issue before and there weren't any. (Please make sure to also search closed issues, as this issue might already have been resolved.)
What the… this issue happens only with Arc Browser. Arc Browser has an ad blocker.
|
2025-04-01T06:39:27.771728
| 2021-03-13T14:58:56
|
830916805
|
{
"authors": [
"JiriSuster",
"PrivateServersGANG"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7988",
"repo": "lucidrains/deep-daze",
"url": "https://github.com/lucidrains/deep-daze/issues/75"
}
|
gharchive/issue
|
Why
Traceback (most recent call last):
  File "c:\users\marshall\appdata\local\programs\python\python38\lib\runpy.py", line 194, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "c:\users\marshall\appdata\local\programs\python\python38\lib\runpy.py", line 87, in _run_code
    exec(code, run_globals)
  File "C:\Users\Marshall\AppData\Local\Programs\Python\Python38\Scripts\imagine.exe\__main__.py", line 4, in <module>
  File "c:\users\marshall\appdata\local\programs\python\python38\lib\site-packages\deep_daze\__init__.py", line 1, in <module>
    from deep_daze.deep_daze import DeepDaze, Imagine
  File "c:\users\marshall\appdata\local\programs\python\python38\lib\site-packages\deep_daze\deep_daze.py", line 25, in <module>
    assert torch.cuda.is_available(), 'CUDA must be available in order to use Deep Daze'
AssertionError: CUDA must be available in order to use Deep Daze
Hi, try this:
pip uninstall torch
pip install torch==1.8.0+cu111 torchvision==0.9.0+cu111 torchaudio===0.8.0 -f https://download.pytorch.org/whl/torch_stable.html
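As a quick sanity check after reinstalling, a sketch like the following reproduces the failing condition without importing deep_daze (it assumes nothing beyond the standard library; torch is probed but optional):

```python
# Probe for CUDA support the same way deep_daze does at import time,
# but without crashing if torch is missing or is a CPU-only build.
try:
    import torch
    has_cuda = torch.cuda.is_available()  # False on a CPU-only build
except ImportError:
    has_cuda = None  # torch is not installed at all

print("CUDA available:", has_cuda)
```

If this prints False after installing the `+cu111` wheel, the CPU-only torch is likely still shadowing it somewhere on the path.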
Whenever I try to open the torch-1.8.0+cu111-cp38-cp38-win_amd64.whl file it instantly closes and I can't see what it says
|
2025-04-01T06:39:27.795434
| 2017-01-31T21:01:25
|
204422110
|
{
"authors": [
"jimmy57000",
"lucko"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7990",
"repo": "lucko/LuckPerms",
"url": "https://github.com/lucko/LuckPerms/issues/158"
}
|
gharchive/issue
|
I have a problem with activate Redis
Good evening everyone!
I currently have a problem: when I activate Redis on LuckPerms, my API, which also uses Redis, gives me this error (Error: http://pastebin.com/nn3r6BTj).
I've been searching for 3 days and I still can't find the cause.
Thank you!
Make sure you're packaging the newest Jedis version in your plugin. Seems like there's an issue there?
Could you give me the Redis version you used for LuckPerms? :)
Thanks!
Ah yeah. I just realised, this is a LuckPerms issue.
I shade this version.
https://github.com/lucko/jedis/releases/tag/jedis-2.9.0-shaded
I'll fix that when a get a chance. (probably tomorrow)
I tried to update it in my API.
I think the connection system needs to change...
I get an error in my code! (http://prnt.sc/e2vk2z)
I will probably ask my developer to look at my API :)
Thanks :)
Should be fixed in this build.
https://ci.lucko.me/job/LuckPerms/41/
Let me know if you have any further issues.
|
2025-04-01T06:39:27.799193
| 2022-11-17T21:59:33
|
1454055829
|
{
"authors": [
"lucoiso"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7991",
"repo": "lucoiso/UEProject_Elementus",
"url": "https://github.com/lucoiso/UEProject_Elementus/issues/41"
}
|
gharchive/issue
|
Check the viability and implement Iris Replication
https://docs.unrealengine.com/5.1/en-US/unreal-engine-5.1-release-notes/#networkingandmultiplayer
I haven't found a way to make it work in the released UE5.1 (binary via Epic Launcher), only when compiling the engine from source.
Already tried to force the usage by overriding the target editor options (bOverrideBuildEnvironment) and adding bUseIris = true, in addition to adding SetupIrisSupport(Target) in the modules, but it didn't work; it gives linking errors. And there is no IrisCore in the Intermediate folder of the Engine. 🥲
|
2025-04-01T06:39:27.801290
| 2017-02-25T21:05:10
|
210261934
|
{
"authors": [
"lucymonie",
"skibinska"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7992",
"repo": "lucymonie/api-workshop",
"url": "https://github.com/lucymonie/api-workshop/issues/5"
}
|
gharchive/issue
|
New readme for http
@skibinska I think we need a separate readme for http so we can discuss http methods and status codes properly. Are you okay with that?
yeah, I actually mentioned it in my workshop, but it makes more sense to introduce it here.
Should we rethink the way we will be presenting this workshop, or not ask the questions about this topic?
I just added some questions! What about JSON? Should we add something about that too?
I would just add one sentence about JSON to the first readme: it is a way to store information in an organized, easy-to-access manner, and maybe mention that it looks like a JS object but its keys and values are in quotation marks.
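A minimal illustration of that sentence about JSON (the key names here are made up):

```python
import json

# JSON looks like a JS object literal, but its keys (and string
# values) must be wrapped in double quotation marks.
text = '{"name": "Ada", "age": 36}'
data = json.loads(text)                        # parse JSON text into a dict
serialized = json.dumps(data, sort_keys=True)  # and back to JSON text
```

Note that `json.loads` would reject single-quoted keys, which is the most common trip-up when coming from JS object literals.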
|
2025-04-01T06:39:27.809133
| 2019-02-19T11:17:02
|
411872466
|
{
"authors": [
"RichardHWD",
"lufficc"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7993",
"repo": "lufficc/SSD",
"url": "https://github.com/lufficc/SSD/issues/40"
}
|
gharchive/issue
|
What does CENTER(SIZE)_VARIANCE mean in defaults.py?
I don't know what these settings in defaults.py mean:
# Hard negative mining
_C.MODEL.CENTER_VARIANCE = 0.1
_C.MODEL.SIZE_VARIANCE = 0.2
↑
They're used in /ssd/util/box_utils.py when boxes are converted into locations or locations back into boxes, but I don't know why.
# change MAX_PER_CLASS to 400 as official caffe code will slightly increase mAP (0.8025=>0.8063, 0.7783=>0.7798)
_C.TEST.MAX_PER_CLASS = 200
_C.TEST.MAX_PER_IMAGE = -1
↑
I don't know these either, and I can't find where they're used in the project.
Can anyone help? Thanks, as always.
Variance is used to encode/decode prior bboxes .
https://github.com/weiliu89/caffe/blob/4817bf8b4200b35ada8ed0dc378dceaf38c539e4/examples/ssd/ssd_pascal.py#L322
MAX_PER_CLASS and MAX_PER_IMAGE are used in post_processor.py
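The role of the variances can be sketched in plain Python (a simplified stand-in for the encode/decode in box_utils.py; boxes are in center form (cx, cy, w, h) and the function names are illustrative, not the repo's actual API):

```python
import math

CENTER_VARIANCE = 0.1
SIZE_VARIANCE = 0.2

def encode(box, prior):
    """Encode a ground-truth box (cx, cy, w, h) relative to a prior box.

    Dividing by the variances rescales the regression targets so the
    network learns values with a more uniform spread.
    """
    cx, cy, w, h = box
    pcx, pcy, pw, ph = prior
    return (
        (cx - pcx) / (pw * CENTER_VARIANCE),
        (cy - pcy) / (ph * CENTER_VARIANCE),
        math.log(w / pw) / SIZE_VARIANCE,
        math.log(h / ph) / SIZE_VARIANCE,
    )

def decode(loc, prior):
    """Invert encode(): turn predicted offsets back into a box."""
    lx, ly, lw, lh = loc
    pcx, pcy, pw, ph = prior
    return (
        lx * CENTER_VARIANCE * pw + pcx,
        ly * CENTER_VARIANCE * ph + pcy,
        math.exp(lw * SIZE_VARIANCE) * pw,
        math.exp(lh * SIZE_VARIANCE) * ph,
    )
```

Since decode(encode(box, prior), prior) returns the original box, the variances cancel out end to end; they only change the scale of what the network predicts, which is why the same values must be used at training and inference time.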
|
2025-04-01T06:39:27.813764
| 2016-01-21T13:42:00
|
127926786
|
{
"authors": [
"luin",
"shaharmor"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7994",
"repo": "luin/ioredis",
"url": "https://github.com/luin/ioredis/issues/233"
}
|
gharchive/issue
|
Cluster isn't ready and enableOfflineQueue options is false, with defaults
Hey,
We have been using ioredis for quite some time now and it's great.
We noticed that sometimes (we can't really reproduce it) we see Cluster isn't ready and enableOfflineQueue options is false errors, even though we are using the default setting of enableOfflineQueue, which is true.
Any ideas?
I think it happens when the library hasn't finished loading and we are already sending commands to it
Sorry for my late response. It's really strange since this error only emits when the enableOfflineQueue is false: https://github.com/luin/ioredis/blob/master/lib/cluster.js#L479-L486.
What do you mean by the library didn't finish loading? Are you sending commands before setting cluster.options.enableOfflineQueue = true?
I'm not setting cluster.options.enableOfflineQueue = true at all, as it is the default, but maybe it's possible that the library hasn't yet set this property itself, and then when it gets to the line you marked it sees the value as undefined or something...
I don't think it's possible since the options object is defined in the constructor of Cluster (https://github.com/luin/ioredis/blob/master/lib/cluster.js#L64).
I actually managed to replicate it. Trying to create a simple script now
@luin found it. PR ready
|
2025-04-01T06:39:27.869897
| 2021-10-30T07:03:35
|
1040072620
|
{
"authors": [
"grantholle",
"janat08",
"lukeed"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7995",
"repo": "lukeed/worktop",
"url": "https://github.com/lukeed/worktop/issues/103"
}
|
gharchive/issue
|
Documentation plans
Hi Luke, nice to meet you. I'm new to Workers (and still exploring) and found your library from watching an episode of Fauna Live where the guest was Obinna Ekwuno. They used this library to build their application.
In that video, they mentioned setting the type in the wrangler.toml file to webpack, but I cannot find that reference anywhere here so I'm a little confused. I know this library is semi-"just starting out" but was curious if you had plans to do some documentation for other dumb-dumbs like myself who are a little slow and need hand holding.
You have great examples, it's just a little hard for me to visualize going from wrangler init to one of those examples. Would also love to contribute if you're up for it.
Thanks!
People use miniflare with esbuild. I know there's a decent starter out there for that, if not an official one already. They should have that straight up in getting started.
Hey, so worktop has no relation or dependence on wrangler actually. You can build a worktop project any number of ways, including the worktop.build package (#94). Wrangler includes a webpack integration which only kicks in if you have type = "webpack" at the top, but my strong personal recommendation is to avoid this as it means you really have no idea what's going into your final worker.
Separately, Wrangler will deploy file(s) to your CF account as the Workers deploy step. There are other tools out there that does this too, including cfw which is what the /examples use. You can read more about Wrangler configuration here though.
An example wrangler.toml file may look like this:
name = "example-worker"
type = "javascript"
account_id = "..."
zone_id = "..."
[build]
command = "npm run build"
[build.upload]
format = "service-worker"
# or, format = "modules"
# but "modules" requires addl config
Closing in favor of #61 as there will be a worktop-specific project scaffolder in near future.
|
2025-04-01T06:39:27.879416
| 2020-03-16T07:55:34
|
582078589
|
{
"authors": [
"Ash258",
"hago"
],
"license": "Unlicense",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7996",
"repo": "lukesampson/scoop-extras",
"url": "https://github.com/lukesampson/scoop-extras/issues/3707"
}
|
gharchive/issue
|
vscodium@1.43.0: hash check failed
Updating 'vscodium' (1.42.1 -> 1.43.0)
Downloading new version
VSCodium-win32-x64-1.43.0.zip (82.9 MB) [=========================================================================================] 100%
Checking hash of VSCodium-win32-x64-1.43.0.zip ... ERROR Hash check failed!
App: extras/vscodium
URL: https://github.com/VSCodium/vscodium/releases/download/1.43.0/VSCodium-win32-x64-1.43.0.zip
First bytes: 50 4B 03 04 14 00 00 00
Expected: f81b637f0a680ce2aa2616564b34ff0654609e7a28c1830147d11c685271319c
Actual: e6d0880abafa2ab94589e61ad880d88d6f6cec34e6e202c66e15f532c4f75b53
Still failed hash checking, local cache has been cleaned.
Updating 'vscodium' (1.42.1 -> 1.43.0)
Downloading new version
VSCodium-win32-x64-1.43.0.zip (82.9 MB) [=========================================================================================] 100%
Checking hash of VSCodium-win32-x64-1.43.0.zip ... ERROR Hash check failed!
App: extras/vscodium
URL: https://github.com/VSCodium/vscodium/releases/download/1.43.0/VSCodium-win32-x64-1.43.0.zip
First bytes: 50 4B 03 04 14 00 00 00
Expected: f81b637f0a680ce2aa2616564b34ff0654609e7a28c1830147d11c685271319c
Actual: e6d0880abafa2ab94589e61ad880d88d6f6cec34e6e202c66e15f532c4f75b53
Please try again or create a new issue by using the following link and paste your console output:
https://github.com/lukesampson/scoop-extras/issues/new?title=vscodium%401.43.0%3A+hash+check+failed
...
scoop update; scoop update vscodium -f
thanks, update successfully!
|
2025-04-01T06:39:27.883747
| 2018-12-10T19:21:49
|
389448867
|
{
"authors": [
"r0mflip",
"rasa"
],
"license": "unlicense",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7997",
"repo": "lukesampson/scoop",
"url": "https://github.com/lukesampson/scoop/issues/2877"
}
|
gharchive/issue
|
Use Windows binary of openssl over mingw binaries
Currently a mingw version of openssl is packaged, and by default it checks for configuration files in /usr/share/. When openssl is run programmatically (like in https://www.npmjs.com/package/pem) we might run into "path format" issues.
Instead, use a Windows binary (https://slproweb.com/products/Win32OpenSSL.html)
https://github.com/lukesampson/scoop/blob/master/bucket/openssl-slp.json
Sorry, thanks.
If openssl has issues, perhaps we should rename openssl-slp.json to openssl.json, and rename openssl.json to openssl-mingw.json?
The mingw version of openssl uses UNIX pathnames /usr/bin/... and fails immediately (which is a bummer for starters and packages), but the slp version uses the env variable. I don't know whether the mingw version has any such env features; it's not even much use in WSL either.
Renaming might break stuff for old users, but the slp version is the one everyone should be using.
If no one disagrees, I think we should make the switch. I only use the SLP version.
Can I make a PR then?
Sure, thanks!
|