| id (string, 4 to 10 chars) | text (string, 4 to 2.14M chars) | source (string, 2 classes) | created (timestamp[s], 2001-05-16 21:05:09 to 2025-01-01 03:38:30) | added (string date, 2025-04-01 04:05:38 to 2025-04-01 07:14:06) | metadata (dict) |
|---|---|---|---|---|---|
480318133 | Glance is not replacing the Smeevil's Penance ward
Build: Custom
Cosmetic visible:
Smeevil's Penance
There are a couple of wards in this list that are shown at a different size. Those are:
Hellgazer
Black Pool Watcher
Staff of Faith
Spring Ward
as seen right next to Smeevil's Penance.
There is another issue regarding Smeevil: unfortunately, Valve uses a different scale, so if the ward is replaced with the default one it looks way oversized.
As a compromise, all alternative colorful styles have been disabled.
Those should have a close enough scale, unlike the Smeevil, which was 1.6x.
| gharchive/issue | 2019-08-13T19:05:42 | 2025-04-01T04:32:53.254409 | {
"authors": [
"AveYo",
"Managor"
],
"repo": "No-Bling/DOTA",
"url": "https://github.com/No-Bling/DOTA/issues/26",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
669330215 | No_Manz_Sky's File
Add an image to the side deck.
I merged in a quotation! Looks good.
| gharchive/pull-request | 2020-07-31T01:47:54 | 2025-04-01T04:32:53.255239 | {
"authors": [
"No-Manz-Sky"
],
"repo": "No-Manz-Sky/github-slideshow",
"url": "https://github.com/No-Manz-Sky/github-slideshow/pull/3",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1692967541 | Print forms
Informed consent, release of liability, training forms, assessment forms, etc
1
| gharchive/issue | 2023-05-02T19:17:29 | 2025-04-01T04:32:53.256798 | {
"authors": [
"NoJuanNobody"
],
"repo": "NoJuanNobody/BalanceAndDevelopment",
"url": "https://github.com/NoJuanNobody/BalanceAndDevelopment/issues/10",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1846192496 | 🛑 rimgo ri.zzls.xyz is down
In 830a7b9, rimgo ri.zzls.xyz (https://ri.zzls.xyz) was down:
HTTP code: 0
Response time: 0 ms
Resolved: rimgo ri.zzls.xyz is back up in dab095f.
| gharchive/issue | 2023-08-11T04:50:58 | 2025-04-01T04:32:53.259176 | {
"authors": [
"NoPlagiarism"
],
"repo": "NoPlagiarism/services-personal-upptime",
"url": "https://github.com/NoPlagiarism/services-personal-upptime/issues/2591",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1731688574 | 🛑 rimgo rimgo.in.projectsegfau.lt is down
In 215a954, rimgo rimgo.in.projectsegfau.lt (https://rimgo.in.projectsegfau.lt) was down:
HTTP code: 0
Response time: 0 ms
Resolved: rimgo rimgo.in.projectsegfau.lt is back up in 6258ead.
| gharchive/issue | 2023-05-30T07:33:42 | 2025-04-01T04:32:53.262529 | {
"authors": [
"NoPlagiarism"
],
"repo": "NoPlagiarism/services-personal-upptime",
"url": "https://github.com/NoPlagiarism/services-personal-upptime/issues/321",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2083991459 | 🛑 AnonymousOverflow overflow.lunar.icu is down
In 562ab35, AnonymousOverflow overflow.lunar.icu (https://overflow.lunar.icu) was down:
HTTP code: 0
Response time: 0 ms
Resolved: AnonymousOverflow overflow.lunar.icu is back up in 646ea0a after 8 minutes.
| gharchive/issue | 2024-01-16T13:41:29 | 2025-04-01T04:32:53.264968 | {
"authors": [
"Mine1984Craft"
],
"repo": "NoPlagiarism/services-personal-upptime",
"url": "https://github.com/NoPlagiarism/services-personal-upptime/issues/5765",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2120097358 | 🛑 BreezeWiki breezewiki.pussthecat.org is down
In a2acfa2, BreezeWiki breezewiki.pussthecat.org (https://breezewiki.pussthecat.org) was down:
HTTP code: 502
Response time: 805 ms
Resolved: BreezeWiki breezewiki.pussthecat.org is back up in 23f0180 after 14 minutes.
| gharchive/issue | 2024-02-06T07:00:18 | 2025-04-01T04:32:53.268117 | {
"authors": [
"Mine1984Craft"
],
"repo": "NoPlagiarism/services-personal-upptime",
"url": "https://github.com/NoPlagiarism/services-personal-upptime/issues/6144",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2204480137 | 🛑 libreddit l.opnxng.com is down
In 3c31adc, libreddit l.opnxng.com (https://l.opnxng.com) was down:
HTTP code: 0
Response time: 0 ms
Resolved: libreddit l.opnxng.com is back up in 18571b6 after 23 minutes.
| gharchive/issue | 2024-03-24T18:02:17 | 2025-04-01T04:32:53.271114 | {
"authors": [
"Mine1984Craft"
],
"repo": "NoPlagiarism/services-personal-upptime",
"url": "https://github.com/NoPlagiarism/services-personal-upptime/issues/7066",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
61668094 | Eager Loading
Would it be acceptable to add options for doing eager loading when logging in a user?
E.g.
def find_by_credentials(*credentials)
  # ...
  arel = @klass.where(relation)
  # Eagerly load the configured associations, if any were set
  arel = arel.includes(@eager_loading_options) if @eager_loading_options.present?
  arel.first
end

config.user_config do |user|
  user.eager_loading = { things: { other_things: [:then, :that, :stuff] } }
end
Hi @jacaetevha,
Thanks for the suggestion. I don't think we should add this, though. The main reason is that it goes beyond the scope of Sorcery - it has nothing to do with authentication. Another reason is that there are a few places where the user model is loaded - not only find_by_credentials but also load_from_provider, find_by_token, etc. - so adding this would add a lot of complexity.
I'd also like to be able to use includes to do eager loading on my users, to load (for instance) associated permissions used for authorization.
The pull request for adding a scope_for_authentication seems like it'd support this use case: https://github.com/NoamB/sorcery/pull/727
| gharchive/issue | 2015-03-14T18:02:12 | 2025-04-01T04:32:53.274182 | {
"authors": [
"arnvald",
"ivanreese",
"jacaetevha"
],
"repo": "NoamB/sorcery",
"url": "https://github.com/NoamB/sorcery/issues/683",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
2297804498 | eth_sign arguments are reversed
Version of Hardhat
2.22.4
What happened?
Calling eth_sign via RPC against a node run with npx hardhat node does not work and returns an error.
Minimal reproduction steps
npx hardhat node
curl -H 'Content-Type: application/json' http://127.0.0.1:8545 -X POST --data-raw '{"method":"eth_sign","params":["0x8626f6940E2eb28930eFb4CeF49B2d1F2C9C1199","0x06fdde03"],"id":574,"jsonrpc":"2.0"}'
returns
{"jsonrpc":"2.0","id":574,"error":{"code":-32602,"message":"invalid value \"0x06fdde03\" supplied to : ADDRESS at line 1 column 89","data":{"message":"invalid value \"0x06fdde03\" supplied to : ADDRESS at line 1 column 89","data":{"method":"eth_sign","params":["0x8626f6940E2eb28930eFb4CeF49B2d1F2C9C1199","0x06fdde03"]}}}}
Reversing the arguments:
curl -H 'Content-Type: application/json' http://127.0.0.1:8545 -X POST --data-raw '{"method":"eth_sign","params":["0x06fdde03", "0x8626f6940E2eb28930eFb4CeF49B2d1F2C9C1199"],"id":574,"jsonrpc":"2.0"}'
and a signature is returned as expected
{"jsonrpc":"2.0","id":574,"result":"0x59002ebdb95ef3b258613f6bba6e91fd392bc49d86ad8a1e1afd32b929ee572968529c9189dde90d4a0d315132fcce215b28bfa479fe44cb05021a51e37ab6b81c"}
However, the specification says that the address doing the signing should be the first argument, with the message to sign being second.
Like https://github.com/NomicFoundation/edr/issues/399, this was run across while upgrading from a pre-EDR Hardhat to a post-EDR Hardhat, so it's possible the issue is there, but I haven't (successfully) investigated. The text of the error being returned is certainly from EDR, but where in the stack the mistake lies is unclear.
Search terms
eth_sign, RPC, EDR
Ugh, thanks a lot for reporting this @area. I didn't confirm this manually, but this code is clearly wrong.
For reference, here are the previous implementations of eth_sign and personal_sign: they do exactly the same thing but the params have a different order.
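Per the Ethereum JSON-RPC specification, eth_sign takes the signing address first and the message second. A minimal Python sketch of building a correctly ordered request payload (the helper name make_eth_sign_request is hypothetical; the address and data values echo the reproduction above):

```python
import json


def make_eth_sign_request(address: str, data: str, request_id: int = 1) -> str:
    # Per the spec, eth_sign params are [address, message] - address first.
    payload = {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "eth_sign",
        "params": [address, data],
    }
    return json.dumps(payload)


req = make_eth_sign_request(
    "0x8626f6940E2eb28930eFb4CeF49B2d1F2C9C1199", "0x06fdde03", 574
)
print(req)
```

The bug reported here is exactly this ordering flipped on the server side, so a client sending the spec-compliant order gets the ADDRESS validation error shown above.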
| gharchive/issue | 2024-05-15T11:48:06 | 2025-04-01T04:32:53.302571 | {
"authors": [
"area",
"fvictorio"
],
"repo": "NomicFoundation/edr",
"url": "https://github.com/NomicFoundation/edr/issues/455",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
479089985 | Filter in mobile (Android - Chrome)
Hi!
I have a web application using NonFactors.MVC6 (version 4.1.1). When the application runs on Android with Chrome, the column filter dialog box closes as soon as you try to type something in it.
Do you know what I'm doing wrong?
Thanks!
Yes, it was fixed in later versions.
Ok, then I need to update to 5.0.0. Is this version the last one?
Thanks!
Yes, 5th is the latest one.
| gharchive/issue | 2019-08-09T17:24:09 | 2025-04-01T04:32:53.321082 | {
"authors": [
"Muchiachio",
"estela72"
],
"repo": "NonFactors/MVC6.Grid",
"url": "https://github.com/NonFactors/MVC6.Grid/issues/215",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
435770514 | Non required validation in Resources/View template
Hello,
What is the purpose of the validation in the Resources/view template, if (i + 1 < properties.Length)? It seems that in either case it executes the same instruction: @:"@property.Name": "@property.Name.Humanize()"
PS: Excellent project, congratulations. It has a very good structure and the templates are useful for scaffolding.
The else branch doesn't add a comma after the last item; some JSON parsers won't accept a trailing one.
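The strictness mentioned above is easy to demonstrate; Python's json module is used here only as one representative strict parser:

```python
import json

valid = '{"name": "Name", "title": "Title"}'
invalid = '{"name": "Name", "title": "Title",}'  # trailing comma after last item

# A strict parser accepts the first form...
json.loads(valid)

# ...but rejects the trailing comma, which is invalid JSON per the spec.
try:
    json.loads(invalid)
except json.JSONDecodeError as e:
    print(f"rejected trailing comma: {e}")
```

This is why the template only emits the comma when another property follows.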
| gharchive/issue | 2019-04-22T15:32:43 | 2025-04-01T04:32:53.323188 | {
"authors": [
"Muchiachio",
"forero08"
],
"repo": "NonFactors/MVC6.Template",
"url": "https://github.com/NonFactors/MVC6.Template/issues/26",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
590238733 | Fix pointer use
The previous use resulted in the warning "Initialization of 'UnsafeBufferPointer' results in a dangling buffer pointer", because the reference to value is only valid for the duration of the call to UnsafeBufferPointer. withUnsafePointer must be used to create a pointer which is also valid for the Data initialization.
Thank you for your submission! We really appreciate it. Like many open source projects, we ask that you sign our Contributor License Agreement before we can accept your contribution.You have signed the CLA already but the status is still pending? Let us recheck it.
Thank you! I believe this is related to the new Xcode and Swift 5.2. I also got the same warnings.
| gharchive/pull-request | 2020-03-30T12:07:59 | 2025-04-01T04:32:53.342111 | {
"authors": [
"CLAassistant",
"nrbrook",
"philips77"
],
"repo": "NordicSemiconductor/IOS-Pods-DFU-Library",
"url": "https://github.com/NordicSemiconductor/IOS-Pods-DFU-Library/pull/363",
"license": "bsd-3-clause",
"license_type": "permissive",
"license_source": "bigquery"
} |
538011085 | handleActionButtonTapped some of them didn't work for BGMViewController
I don't know why, but when I tap "Last" or "First" in handleActionButtonTapped it does not show the actual last or first reading. Also, when I tap "Delete" it only removes what is in the table view; the readings are still stored on the device. Did I miss some code? Also, the glucose device has only 3 readings, but for some reason when I load the data it shows 6 - the additional 3 had the same values with -10000, and I don't know why, so I manually check the value and don't store it if it is negative. Thanks!!
import UIKit
import CoreBluetooth
import SWXMLHash
import Alamofire
fileprivate func < <T : Comparable>(lhs: T?, rhs: T?) -> Bool {
switch (lhs, rhs) {
case let (l?, r?):
return l < r
case (nil, _?):
return true
default:
return false
}
}
fileprivate func > <T : Comparable>(lhs: T?, rhs: T?) -> Bool {
switch (lhs, rhs) {
case let (l?, r?):
return l > r
default:
return rhs < lhs
}
}
class GlucoseMonitorVC: BaseViewController ,CBCentralManagerDelegate, CBPeripheralDelegate, ScannerDelegate, UITableViewDataSource, UITableViewDelegate {
var bluetoothManager : CBCentralManager?
//MARK: - Class properties
var connectedPeripheral : CBPeripheral?
var bgmRecordAccessControlPointCharacteristic : CBCharacteristic?
var readings : [GlucoseReading]
var bgmServiceUUID : CBUUID
var bgmGlucoseMeasurementCharacteristicUUID : CBUUID
var bgmGlucoseMeasurementContextCharacteristicUUID : CBUUID
var bgmRecordAccessControlPointCharacteristicUUID : CBUUID
var batteryServiceUUID : CBUUID
var batteryLevelCharacteristicUUID : CBUUID
var hasEHR = true
var measurmentTime = ""
var glucoseContext: [[GlucoseContext]] = []
var xml: XMLIndexer?
struct GlucoseContext {
var value: String
var date: String
var measurmentTime: String
var tag: Int
}
//MARK: - ViewController outlets
@IBOutlet weak var battery: UIButton!
@IBOutlet weak var bgmTableView: UITableView!
@IBOutlet weak var connectButton: UIButton!
@IBOutlet weak var deviceName: UILabel!
@IBOutlet weak var recordsButton: UIButton!
@IBAction func actionButtonTapped(_ sender: UIButton) {
handleActionButtonTapped(from: sender)
}
@IBAction func connectionButtonTapped(_ sender: AnyObject) {
handleConnectionButtonTapped()
}
//MARK: - UIViewController Methods
required init(coder aDecoder: NSCoder) {
readings = []
bgmServiceUUID = CBUUID(string: ServiceIdentifiers.bgmServiceUUIDString)
bgmGlucoseMeasurementCharacteristicUUID = CBUUID(string: ServiceIdentifiers.bgmGlucoseMeasurementCharacteristicUUIDString)
bgmGlucoseMeasurementContextCharacteristicUUID = CBUUID(string: ServiceIdentifiers.bgmGlucoseMeasurementContextCharacteristicUUIDString)
bgmRecordAccessControlPointCharacteristicUUID = CBUUID(string: ServiceIdentifiers.bgmRecordAccessControlPointCharacteristicUUIDString)
batteryServiceUUID = CBUUID(string: ServiceIdentifiers.batteryServiceUUIDString)
batteryLevelCharacteristicUUID = CBUUID(string: ServiceIdentifiers.batteryLevelCharacteristicUUIDString)
super.init(coder: aDecoder)!
}
override func viewDidLoad() {
super.viewDidLoad()
bgmTableView.dataSource = self
bgmTableView.delegate = self
if hasEHR == true {
connectButton.isEnabled = true
} else {
connectButton.isEnabled = false
connectButton.backgroundColor = .lightGray
}
navigationItem.rightBarButtonItem = UIBarButtonItem(title: "Save", style: .done, target: self, action: #selector(saveItemTapped))
navigationItem.rightBarButtonItem?.isEnabled = false
}
override func viewDidAppear(_ animated: Bool) {
guard getXMLStringFromLibraryIfValid(fileName: ccdFileName) != nil else {
connectButton.isEnabled = false
connectButton.backgroundColor = .lightGray
return
}
connectButton.isEnabled = true
connectButton.backgroundColor = UIColor(red: 0.0, green: 0.718, blue: 0.843, alpha: 1)
}
func handleActionButtonTapped(from view: UIView) {
let alert = UIAlertController(title: nil, message: nil, preferredStyle: .actionSheet)
let data = Data([BGMOpCode.reportStoredRecords.rawValue, BGMOperator.allRecords.rawValue])
alert.addAction(UIAlertAction(title: "Refresh", style: .default) { _ in
if let reading = self.readings.last {
let nextSequence = reading.sequenceNumber + 1
let data = Data([
BGMOpCode.reportStoredRecords.rawValue,
BGMOperator.greaterThanOrEqual.rawValue,
BGMFilterType.sequenceNumber.rawValue,
// Convert endianness (little-endian: low byte first)
UInt8(nextSequence & 0xFF),
UInt8(nextSequence >> 8)
])
self.connectedPeripheral?.writeValue(data, for: self.bgmRecordAccessControlPointCharacteristic!, type: .withResponse)
} else {
self.connectedPeripheral?.writeValue(data, for: self.bgmRecordAccessControlPointCharacteristic!, type: .withResponse)
}
self.glucoseContext.removeAll()
})
alert.addAction(UIAlertAction(title: "First", style: .default) { _ in
self.readings.removeAll()
self.bgmTableView.reloadData()
let data = Data([BGMOpCode.reportStoredRecords.rawValue, BGMOperator.first.rawValue])
self.connectedPeripheral?.writeValue(data, for: self.bgmRecordAccessControlPointCharacteristic!, type: .withResponse)
})
alert.addAction(UIAlertAction(title: "Last Reading", style: .default) { _ in
self.readings.removeAll()
self.bgmTableView.reloadData()
let data = Data([BGMOpCode.reportStoredRecords.rawValue, BGMOperator.last.rawValue])
self.connectedPeripheral?.writeValue(data, for: self.bgmRecordAccessControlPointCharacteristic!, type: .withResponse)
})
alert.addAction(UIAlertAction(title: "All", style: .default) { _ in
self.readings.removeAll()
self.bgmTableView.reloadData()
let data = Data([BGMOpCode.reportStoredRecords.rawValue, BGMOperator.allRecords.rawValue])
self.connectedPeripheral?.writeValue(data, for: self.bgmRecordAccessControlPointCharacteristic!, type: .withResponse)
})
alert.addAction(UIAlertAction(title: "Delete All Readings", style: .destructive) { _ in
self.readings.removeAll()
self.bgmTableView.reloadData()
let data = Data([BGMOpCode.deleteStoredRecords.rawValue, BGMOperator.allRecords.rawValue])
self.connectedPeripheral?.writeValue(data, for: self.bgmRecordAccessControlPointCharacteristic!, type: .withResponse)
})
alert.addAction(UIAlertAction(title: "Cancel", style: .cancel))
alert.popoverPresentationController?.sourceView = view
present(alert, animated: true)
}
func handleAboutButtonTapped() {
showAbout(message: AppUtilities.getHelpTextForService(service: .bgm))
}
func handleConnectionButtonTapped() {
guard let manager = bluetoothManager, let peripheral = connectedPeripheral else {
return
}
manager.cancelPeripheralConnection(peripheral)
}
func clearUI() {
readings.removeAll()
DispatchQueue.main.async {
self.bgmTableView.reloadData()
self.deviceName.text = "DEFAULT_BGM"
self.battery.tag = 0
self.battery.setTitle("n/a", for: .disabled)
}
}
func enableActionButton() {
recordsButton.isEnabled = true
recordsButton.backgroundColor = UIColor(red: 0.012, green: 0.718, blue: 0.843, alpha: 1)
recordsButton.setTitleColor(UIColor.white, for: .normal)
}
func disableActionButton() {
recordsButton.isEnabled = false
recordsButton.backgroundColor = UIColor.lightGray
recordsButton.setTitleColor(UIColor.lightText, for: .normal)
}
func setupNotifications() {
if UIApplication.instancesRespond(to: #selector(UIApplication.registerUserNotificationSettings(_:))) {
UIApplication.shared.registerUserNotificationSettings(UIUserNotificationSettings(types: [.alert, .sound], categories: nil))
}
}
func addNotificationObservers() {
NotificationCenter.default.addObserver(self, selector: #selector(self.applicationDidEnterBackgroundHandler),
name: UIApplication.didEnterBackgroundNotification,
object: nil)
NotificationCenter.default.addObserver(self, selector: #selector(self.applicationDidBecomeActiveHandler),
name: UIApplication.didBecomeActiveNotification,
object: nil)
}
func removeNotificationObservers() {
NotificationCenter.default.removeObserver(self, name: UIApplication.didBecomeActiveNotification,
object: nil)
NotificationCenter.default.removeObserver(self, name: UIApplication.didEnterBackgroundNotification,
object: nil)
}
@objc func applicationDidEnterBackgroundHandler() {
let name = connectedPeripheral?.name ?? "peripheral"
AppUtilities.showBackgroundNotification(message: "You are still connected to \(name). It will collect data also in background.")
}
@objc func applicationDidBecomeActiveHandler(){
UIApplication.shared.cancelAllLocalNotifications()
}
//MARK: - CBPeripheralDelegate Methods
func peripheral(_ peripheral: CBPeripheral, didDiscoverServices error: Error?) {
guard error == nil else {
print("An error occurred while discovering services: \(error!.localizedDescription)")
bluetoothManager!.cancelPeripheralConnection(peripheral)
return
}
for aService: CBService in peripheral.services! {
if aService.uuid.isEqual(bgmServiceUUID) {
peripheral.discoverCharacteristics(
[bgmGlucoseMeasurementCharacteristicUUID, bgmGlucoseMeasurementContextCharacteristicUUID, bgmRecordAccessControlPointCharacteristicUUID],
for: aService)
} else if aService.uuid.isEqual(batteryServiceUUID){
peripheral.discoverCharacteristics([batteryLevelCharacteristicUUID], for: aService)
}
}
}
func peripheral(_ peripheral: CBPeripheral, didDiscoverCharacteristicsFor service: CBService, error: Error?) {
guard error == nil else {
print("Error occurred while discovering characteristic: \(error!.localizedDescription)")
bluetoothManager!.cancelPeripheralConnection(peripheral)
return
}
if service.uuid.isEqual(bgmServiceUUID) {
for aCharacteristic : CBCharacteristic in service.characteristics! {
if aCharacteristic.uuid.isEqual(bgmGlucoseMeasurementCharacteristicUUID){
peripheral.setNotifyValue(true, for: aCharacteristic)
} else if aCharacteristic.uuid.isEqual(bgmGlucoseMeasurementContextCharacteristicUUID) {
peripheral.setNotifyValue(true, for: aCharacteristic)
} else if aCharacteristic.uuid.isEqual(bgmRecordAccessControlPointCharacteristicUUID) {
bgmRecordAccessControlPointCharacteristic = aCharacteristic
peripheral.setNotifyValue(true, for: aCharacteristic)
}
}
} else if service.uuid.isEqual(batteryServiceUUID) {
for aCharacteristic : CBCharacteristic in service.characteristics! {
if aCharacteristic.uuid.isEqual(batteryLevelCharacteristicUUID){
peripheral.readValue(for: aCharacteristic)
break
}
}
}
}
func peripheral(_ peripheral: CBPeripheral, didUpdateValueFor characteristic: CBCharacteristic, error: Error?) {
guard error == nil else {
print("Error occurred while updating characteristic value: \(error!.localizedDescription)")
return
}
var array = UnsafeMutablePointer<UInt8>(OpaquePointer(((characteristic.value as NSData?)?.bytes)!))
if characteristic.uuid.isEqual(batteryLevelCharacteristicUUID) {
let batteryLevel = CharacteristicReader.readUInt8Value(ptr: &array)
let text = "\(batteryLevel)%"
DispatchQueue.main.async {
self.battery.setTitle(text, for: .disabled)
if self.battery.tag == 0 {
// If battery level notifications are available, enable them
if characteristic.properties.contains(.notify)
{
self.battery.tag = 1; // mark that we have enabled notifications
peripheral.setNotifyValue(true, for: characteristic)
}
}
}
} else if characteristic.uuid.isEqual(bgmGlucoseMeasurementCharacteristicUUID) {
print("New glucose reading")
let reading = GlucoseReading(array)
if let index = readings.firstIndex(of: reading) {
readings[index] = reading
} else {
if reading.glucoseConcentration > 0 {
readings.append(reading)
}
}
} else if characteristic.uuid.isEqual(bgmGlucoseMeasurementContextCharacteristicUUID) {
let context = GlucoseReadingContext(array)
if let index = readings.firstIndex(where: { $0.sequenceNumber == context.sequenceNumber }) {
readings[index].context = context
} else {
print("Glucose measurement with sequence number: \(context.sequenceNumber) not found")
}
} else if characteristic.uuid.isEqual(bgmRecordAccessControlPointCharacteristicUUID) {
print("OpCode: \(array[0]), Operator: \(array[2])")
DispatchQueue.main.async {
switch BGMResponseCode(rawValue:array[2])! {
case .success:
self.bgmTableView.reloadData()
case .opCodeNotSupported:
AppUtilities.showAlert(title: "Error", andMessage: "Operation not supported", from: self)
case .noRecordsFound:
AppUtilities.showAlert(title: "Error", andMessage: "No records found", from: self)
case .operatorNotSupported:
AppUtilities.showAlert(title: "Error", andMessage: "Operator not supported", from: self)
case .invalidOperator:
AppUtilities.showAlert(title: "Error", andMessage: "Invalid operator", from: self)
case .operandNotSupported:
AppUtilities.showAlert(title: "Error", andMessage: "Operand not supported", from: self)
case .invalidOperand:
AppUtilities.showAlert(title: "Error", andMessage: "Invalid operand", from: self)
case .abortUnsuccessful:
AppUtilities.showAlert(title: "Error", andMessage: "Abort unsuccessful", from: self)
case .procedureNotCompleted:
AppUtilities.showAlert(title: "Error", andMessage: "Procedure not completed", from: self)
case .reserved:
break
}
}
}
}
//MARK: - CBCentralManagerDelegate Methods
func centralManagerDidUpdateState(_ central: CBCentralManager) {
if central.state == .poweredOff {
print("Bluetooth powered off")
} else {
print("Bluetooth powered on")
}
}
func centralManagerDidSelectPeripheral(withManager aManager: CBCentralManager, andPeripheral aPeripheral: CBPeripheral) {
connectedPeripheral = aPeripheral
connectedPeripheral?.delegate = self
bluetoothManager = aManager
bluetoothManager?.delegate = self
let options = NSDictionary(object: NSNumber(value: true as Bool), forKey: CBConnectPeripheralOptionNotifyOnNotificationKey as NSCopying)
bluetoothManager?.connect(aPeripheral, options: options as? [String : AnyObject])
}
func centralManager(_ central: CBCentralManager, didConnect peripheral: CBPeripheral) {
connectedPeripheral = peripheral
peripheral.discoverServices([bgmServiceUUID, batteryServiceUUID])
DispatchQueue.main.async {
self.deviceName.text = peripheral.name
self.connectButton.setTitle("DISCONNECT", for: .normal)
self.enableActionButton()
self.setupNotifications()
}
}
func centralManager(_ central: CBCentralManager, didFailToConnect peripheral: CBPeripheral, error: Error?) {
DispatchQueue.main.async {
AppUtilities.showAlert(title: "Error", andMessage: "Connecting to peripheral failed. Please Try again", from: self)
self.connectButton.setTitle("CONNECT", for: .normal)
self.connectedPeripheral = nil
self.disableActionButton()
self.clearUI()
}
}
func centralManager(_ central: CBCentralManager, didDisconnectPeripheral peripheral: CBPeripheral, error: Error?) {
DispatchQueue.main.async {
self.connectButton.setTitle("CONNECT", for: .normal)
self.connectedPeripheral = nil
if AppUtilities.isApplicationInactive() == true {
let name = peripheral.name ?? "Peripheral"
AppUtilities.showBackgroundNotification(message: "\(name) is disconnected.")
}
self.disableActionButton()
self.clearUI()
self.removeNotificationObservers()
}
}
//MARK: - UITableViewDataSource methods
func tableView(_ tableView: UITableView, numberOfRowsInSection section: Int) -> Int {
return readings.count
}
//MARK: - UITableViewDelegate methods
func tableView(_ tableView: UITableView, cellForRowAt indexPath: IndexPath) -> UITableViewCell {
let cell = tableView.dequeueReusableCell(withIdentifier: "BGMCell") as! BGMItemCell
let reading = readings[indexPath.row]
print(reading)
cell.tag = Int(reading.sequenceNumber)
cell.timestamp.text = getFormattedDate(date: reading.timestamp, dateFormat: DateFormatType.monthDayYearTwelveHourTimezNoDash.rawValue)
if glucoseContext.count == 0 {
cell.checkImageView.image = UIImage(named: "uncheck")
cell.measurementTimeLabel.text = "Select measurement time"
navigationItem.rightBarButtonItem?.isEnabled = false
} else {
navigationItem.rightBarButtonItem?.isEnabled = true
}
if reading.glucoseConcentrationTypeAndLocationPresent {
switch reading.unit! {
case .mol_L:
cell.value.text = String(format: "%.1f", reading.glucoseConcentration! * 1000) // mol/l -> mmol/l conversion
cell.unit.text = "mmol/l"
break
case .kg_L:
cell.value.text = String(format: "%.0f", reading.glucoseConcentration! * 100000) // kg/l -> mg/dL conversion
cell.unit.text = "mg/dL"
break
}
} else {
cell.value.text = "-"
cell.unit.text = ""
}
return cell
}
func tableView(_ tableView: UITableView, didSelectRowAt indexPath: IndexPath) {
let titles = ["Before Breakfast", "After Breakfast", "Before Lunch", "After Lunch", "Before Dinner", "After Dinner", "None"]
let cell = tableView.cellForRow(at: indexPath) as? BGMItemCell
var isDuplicate = false
let alert = UIAlertController(title: "Select Measurement Period", message: nil, preferredStyle: .actionSheet)
for alertTitle in titles {
let action = UIAlertAction(title: alertTitle, style: .default, handler:
{ UIAction in self.measurmentTime = alertTitle;
cell?.measurementTimeLabel.text = self.measurmentTime;
cell?.checkImageView.image = UIImage(named: "check");
if self.glucoseContext.count == 0 {
self.glucoseContext.append([GlucoseContext(value: cell?.value.text ?? "",date:(cell?.timestamp.text)?.convertStringtoNewDateString(currentFormat: .monthDayYearTwelveHourTimezNoDash, newFormat: .yearMonthDayTwentyFourHourSeconds) ?? "",measurmentTime: alertTitle, tag: cell?.tag ?? 0)])
self.navigationItem.rightBarButtonItem?.isEnabled = true
} else {
for (glucoseContextIndex, contextSets) in self.glucoseContext.enumerated() {
for (setIndex, set) in contextSets.enumerated() {
if set.tag == cell?.tag {
self.glucoseContext[glucoseContextIndex][setIndex].measurmentTime = self.measurmentTime
isDuplicate = true
return
} else {
isDuplicate = false
}
}
}
if isDuplicate == false {
self.glucoseContext.append([GlucoseContext(value: cell?.value.text ?? "",date:(cell?.timestamp.text)?.convertStringtoNewDateString(currentFormat: .monthDayYearTwelveHourTimezNoDash, newFormat: .yearMonthDayTwentyFourHourSeconds) ?? "",measurmentTime: alertTitle, tag: cell?.tag ?? 0)])
return
}
}
self.bgmTableView.reloadData()
})
alert.addAction(action)
}
let cancelAction = UIAlertAction(title: "Cancel", style: .cancel, handler:
{ action in cell?.checkImageView.image = UIImage(named: "uncheck");
cell?.measurementTimeLabel.text = "Select Measurement Time"
for (index, contextSets) in self.glucoseContext.enumerated() {
for set in contextSets {
if cell?.tag == set.tag {
self.glucoseContext.remove(at: index)
}
}
}
self.bgmTableView.reloadData()
})
alert.addAction(cancelAction)
present(alert, animated: true)
}
@objc func saveItemTapped(sender: UIBarButtonItem) {
makeCCDCall()
navigationItem.rightBarButtonItem?.isEnabled = true
}
//MARK: - Segue methods
override func shouldPerformSegue(withIdentifier identifier: String, sender: Any?) -> Bool {
return identifier != "scan" || connectedPeripheral == nil
}
}
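As a sanity check on the byte order used in the Refresh handler above (UInt8(nextSequence & 0xFF), UInt8(nextSequence >> 8)), here is a small Python sketch - the helper name sequence_number_bytes is mine - reproducing the same little-endian split and comparing it against struct.pack:

```python
import struct


def sequence_number_bytes(seq: int) -> bytes:
    # Same operation as the Swift code: low byte first, then high byte
    # (little-endian), as GATT multi-byte fields expect.
    return bytes([seq & 0xFF, (seq >> 8) & 0xFF])


# The manual split agrees with struct's little-endian unsigned 16-bit packing.
assert sequence_number_bytes(0x1234) == struct.pack("<H", 0x1234)
print(sequence_number_bytes(0x1234).hex())  # "3412"
```

If the bytes were written high byte first instead, the meter would interpret the sequence-number operand incorrectly and the Refresh filter would miss records.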
I believe this issue was related to a previous version of the app. Please always state which version you are referring to, to make finding an issue easier. I'm closing the issue now.
Also, sorry for coming so late to you.
| gharchive/issue | 2019-12-15T06:38:49 | 2025-04-01T04:32:53.358980 | {
"authors": [
"pandapancake",
"philips77"
],
"repo": "NordicSemiconductor/IOS-nRF-Toolbox",
"url": "https://github.com/NordicSemiconductor/IOS-nRF-Toolbox/issues/80",
"license": "bsd-3-clause",
"license_type": "permissive",
"license_source": "bigquery"
} |
124260901 | FontAwesome Glyphs Show Up As "Question Mark box".
I built from source.
1,000 words...
Hey @mralexgray
Sorry about that. I'm planning on releasing a new version that won't use Font Awesome glyphs soon.
I would suggest using the official release until then.
| gharchive/issue | 2015-12-29T20:00:47 | 2025-04-01T04:32:53.402944 | {
"authors": [
"Nosrac",
"mralexgray"
],
"repo": "Nosrac/Dictater",
"url": "https://github.com/Nosrac/Dictater/issues/6",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
1084390761 | Docker build does not work
Tried the docker build according to the instructions, and it did not work for me (Windows 10, WSL2):
PS D:\projects\games\NotRanged.github.io> docker build -f Dockerfile.dev -t ffxiv-craft-opt-web-dev .
[+] Building 41.7s (6/6) FINISHED
=> [internal] load build definition from Dockerfile.dev 0.2s
=> => transferring dockerfile: 134B 0.0s
=> [internal] load .dockerignore 0.1s
=> => transferring context: 47B 0.0s
=> [internal] load metadata for docker.io/library/node:4 5.7s
=> [1/2] FROM docker.io/library/node:4@sha256:fab73fccce5abc3fade13a99179884a306aa6c5292a2fc11833ee25ca15c1f85 33.5s
=> => resolve docker.io/library/node:4@sha256:fab73fccce5abc3fade13a99179884a306aa6c5292a2fc11833ee25ca15c1f85 0.0s
=> => sha256:3d77ce4481b119f00e53bee9b4a443469c42c224db954ddaa2e6b74cd73cd5d0 54.26MB / 54.26MB 14.1s
=> => sha256:d562b1c3ac3f8e29c94c8c31142f96c548bada88cc683404805f5d81c3991f34 43.25MB / 43.25MB 13.9s
=> => sha256:41d0ad2557ea2a9e57e1a458c1d659e92f601586e07dcffef74c9cef542f6f6e 2.01kB / 2.01kB 0.0s
=> => sha256:ef4b194d8fcf4fedc96adf4d99f136f5a31ee2cd38561f7a2e4af1b036b4bf69 7.17kB / 7.17kB 0.0s
=> => sha256:fab73fccce5abc3fade13a99179884a306aa6c5292a2fc11833ee25ca15c1f85 1.73kB / 1.73kB 0.0s
=> => sha256:534514c83d698ad8a2ef994eeedaed92738e401d735e453d47e635cca02901b6 17.58MB / 17.58MB 5.5s
=> => sha256:4b85e68dc01d5ba298262148a77051ac4a8c8a1e138c678682a5dae241ae4db9 131.08MB / 131.08MB 25.9s
=> => sha256:f6a66c5de9dbb091030da992eb589991771342d2c533146e682425e3986e2b20 4.42kB / 4.42kB 15.9s
=> => sha256:7a4e7d9a081d8b9504e94524205815677508c75739a9ad82ef94591b3a767335 117.62kB / 117.62kB 16.6s
=> => extracting sha256:3d77ce4481b119f00e53bee9b4a443469c42c224db954ddaa2e6b74cd73cd5d0 4.3s
=> => sha256:876b13112871a940a632a8e7178ffd1021950f5031a3e2d720d819182c7afd10 12.35MB / 12.35MB 20.6s
=> => sha256:95d109ce6b5dd7bdbfe861e463c749aae30fb268f52fab41fe3c706263410458 1.06MB / 1.06MB 17.7s
=> => extracting sha256:534514c83d698ad8a2ef994eeedaed92738e401d735e453d47e635cca02901b6 1.2s
=> => extracting sha256:d562b1c3ac3f8e29c94c8c31142f96c548bada88cc683404805f5d81c3991f34 3.8s
=> => extracting sha256:4b85e68dc01d5ba298262148a77051ac4a8c8a1e138c678682a5dae241ae4db9 5.7s
=> => extracting sha256:f6a66c5de9dbb091030da992eb589991771342d2c533146e682425e3986e2b20 0.0s
=> => extracting sha256:7a4e7d9a081d8b9504e94524205815677508c75739a9ad82ef94591b3a767335 0.0s
=> => extracting sha256:876b13112871a940a632a8e7178ffd1021950f5031a3e2d720d819182c7afd10 1.0s
=> => extracting sha256:95d109ce6b5dd7bdbfe861e463c749aae30fb268f52fab41fe3c706263410458 0.1s
=> [2/2] WORKDIR /usr/src/app 2.2s
=> exporting to image 0.1s
=> => exporting layers 0.1s
=> => writing image sha256:49adbf6df8c011d6257a9b5721064a206eb495ed04b30169eac8c836762c651b 0.0s
=> => naming to docker.io/library/ffxiv-craft-opt-web-dev 0.0s
Use 'docker scan' to run Snyk tests against images to find vulnerabilities and learn how to fix them
PS D:\projects\games\NotRanged.github.io> docker run --rm -it -p 8001:8001 ffxiv-craft-opt-web-dev
npm ERR! Linux 5.10.60.1-microsoft-standard-WSL2
npm ERR! argv "/usr/local/bin/node" "/usr/local/bin/npm" "start"
npm ERR! node v4.9.1
npm ERR! npm v2.15.11
npm ERR! path /usr/src/app/package.json
npm ERR! code ENOENT
npm ERR! errno -2
npm ERR! syscall open
npm ERR! enoent ENOENT: no such file or directory, open '/usr/src/app/package.json'
npm ERR! enoent This is most likely not a problem with npm itself
npm ERR! enoent and is related to npm not being able to find a file.
npm ERR! enoent
npm ERR! Please include the following file with any support request:
npm ERR! /usr/src/app/npm-debug.log
PS D:\projects\games\NotRanged.github.io>
The instructions possibly should be clarified. Also, I tried the non-dev Docker build on the same environment, and it failed:
=> ERROR [3/1] RUN npm install && npm cache clean --force 28.9s
------
> [3/1] RUN npm install && npm cache clean --force:
#7 1.022 npm WARN package.json ffxiv-craft-opt-web@0.0.1 No README data
#7 1.028 npm WARN package.json ffxiv-craft-opt-web@0.0.1 No license field.
#7 3.475 npm WARN engine browser-sync@2.27.7: wanted: {"node":">= 8.0.0"} (current: {"node":"4.9.1","npm":"2.15.11"})
#7 8.906 npm WARN engine chokidar@3.5.2: wanted: {"node":">= 8.10.0"} (current: {"node":"4.9.1","npm":"2.15.11"})
#7 8.907 npm WARN engine http-proxy@1.18.1: wanted: {"node":">=8.0.0"} (current: {"node":"4.9.1","npm":"2.15.11"})
#7 8.908 npm WARN engine micromatch@4.0.4: wanted: {"node":">=8.6"} (current: {"node":"4.9.1","npm":"2.15.11"})
#7 8.908 npm WARN engine yargs@15.4.1: wanted: {"node":">=8"} (current: {"node":"4.9.1","npm":"2.15.11"})
#7 8.915 npm WARN engine localtunnel@2.0.2: wanted: {"node":">=8.3.0"} (current: {"node":"4.9.1","npm":"2.15.11"})
#7 8.915 npm WARN engine browser-sync-client@2.27.7: wanted: {"node":">=8.0.0"} (current: {"node":"4.9.1","npm":"2.15.11"})
#7 9.705 npm WARN deprecated debug@4.1.1: Debug versions >=3.2.0 <3.2.7 || >=4 <4.3.1 have a low-severity ReDos regression when used in a Node.js environment. It is recommended you upgrade to 3.2.7 or 4.3.1. (https://github.com/visionmedia/debug/issues/797)
#7 10.21 npm WARN engine picomatch@2.3.0: wanted: {"node":">=8.6"} (current: {"node":"4.9.1","npm":"2.15.11"})
#7 10.21 npm WARN engine braces@3.0.2: wanted: {"node":">=8"} (current: {"node":"4.9.1","npm":"2.15.11"})
#7 10.64 npm WARN engine is-binary-path@2.1.0: wanted: {"node":">=8"} (current: {"node":"4.9.1","npm":"2.15.11"})
#7 10.64 npm WARN engine braces@3.0.2: wanted: {"node":">=8"} (current: {"node":"4.9.1","npm":"2.15.11"})
#7 10.64 npm WARN engine anymatch@3.1.2: wanted: {"node":">= 8"} (current: {"node":"4.9.1","npm":"2.15.11"})
#7 10.64 npm WARN engine glob-parent@5.1.2: wanted: {"node":">= 6"} (current: {"node":"4.9.1","npm":"2.15.11"})
#7 10.64 npm WARN engine readdirp@3.6.0: wanted: {"node":">=8.10.0"} (current: {"node":"4.9.1","npm":"2.15.11"})
#7 10.64 npm WARN engine fsevents@2.3.2: wanted: {"node":"^8.16.0 || ^10.6.0 || >=11.0.0"} (current: {"node":"4.9.1","npm":"2.15.11"})
#7 10.66 npm WARN optional dep failed, continuing fsevents@2.3.2
#7 11.39 npm WARN engine engine.io@3.5.0: wanted: {"node":">=8.0.0"} (current: {"node":"4.9.1","npm":"2.15.11"})
#7 11.44 npm WARN engine debug@4.3.2: wanted: {"node":">=6.0"} (current: {"node":"4.9.1","npm":"2.15.11"})
#7 11.44 npm WARN engine yargs@17.1.1: wanted: {"node":">=12"} (current: {"node":"4.9.1","npm":"2.15.11"})
#7 11.67 npm WARN engine picomatch@2.3.0: wanted: {"node":">=8.6"} (current: {"node":"4.9.1","npm":"2.15.11"})
#7 11.67 npm WARN engine picomatch@2.3.0: wanted: {"node":">=8.6"} (current: {"node":"4.9.1","npm":"2.15.11"})
#7 12.08 npm WARN engine binary-extensions@2.2.0: wanted: {"node":">=8"} (current: {"node":"4.9.1","npm":"2.15.11"})
#7 12.34 npm WARN engine fill-range@7.0.1: wanted: {"node":">=8"} (current: {"node":"4.9.1","npm":"2.15.11"})
#7 12.34 npm WARN engine fill-range@7.0.1: wanted: {"node":">=8"} (current: {"node":"4.9.1","npm":"2.15.11"})
#7 14.37 npm WARN engine ws@7.4.6: wanted: {"node":">=8.3.0"} (current: {"node":"4.9.1","npm":"2.15.11"})
#7 14.38 npm WARN engine to-regex-range@5.0.1: wanted: {"node":">=8.0"} (current: {"node":"4.9.1","npm":"2.15.11"})
#7 14.38 npm WARN engine to-regex-range@5.0.1: wanted: {"node":">=8.0"} (current: {"node":"4.9.1","npm":"2.15.11"})
#7 14.57 npm WARN engine get-caller-file@2.0.5: wanted: {"node":"6.* || 8.* || >= 10.*"} (current: {"node":"4.9.1","npm":"2.15.11"})
#7 14.57 npm WARN engine string-width@4.2.3: wanted: {"node":">=8"} (current: {"node":"4.9.1","npm":"2.15.11"})
#7 14.57 npm WARN engine find-up@4.1.0: wanted: {"node":">=8"} (current: {"node":"4.9.1","npm":"2.15.11"})
#7 14.57 npm WARN engine yargs-parser@18.1.3: wanted: {"node":">=6"} (current: {"node":"4.9.1","npm":"2.15.11"})
#7 15.56 npm WARN engine strip-ansi@6.0.1: wanted: {"node":">=8"} (current: {"node":"4.9.1","npm":"2.15.11"})
#7 15.57 npm WARN engine is-fullwidth-code-point@3.0.0: wanted: {"node":">=8"} (current: {"node":"4.9.1","npm":"2.15.11"})
#7 15.64 npm WARN engine path-exists@4.0.0: wanted: {"node":">=8"} (current: {"node":"4.9.1","npm":"2.15.11"})
#7 15.64 npm WARN engine locate-path@5.0.0: wanted: {"node":">=8"} (current: {"node":"4.9.1","npm":"2.15.11"})
#7 15.67 npm WARN peerDependencies The peer dependency bufferutil@^4.0.1 included from ws will no
#7 15.67 npm WARN peerDependencies longer be automatically installed to fulfill the peerDependency
#7 15.67 npm WARN peerDependencies in npm 3+. Your application will need to depend on it explicitly.
#7 15.67 npm WARN peerDependencies The peer dependency utf-8-validate@^5.0.2 included from ws will no
#7 15.67 npm WARN peerDependencies longer be automatically installed to fulfill the peerDependency
#7 15.67 npm WARN peerDependencies in npm 3+. Your application will need to depend on it explicitly.
#7 15.69 npm WARN engine strip-ansi@6.0.1: wanted: {"node":">=8"} (current: {"node":"4.9.1","npm":"2.15.11"})
#7 15.70 npm WARN engine wrap-ansi@6.2.0: wanted: {"node":">=8"} (current: {"node":"4.9.1","npm":"2.15.11"})
#7 15.71 npm WARN engine camelcase@5.3.1: wanted: {"node":">=6"} (current: {"node":"4.9.1","npm":"2.15.11"})
#7 16.54 npm WARN engine ansi-regex@5.0.1: wanted: {"node":">=8"} (current: {"node":"4.9.1","npm":"2.15.11"})
#7 16.54 npm WARN engine ansi-regex@5.0.1: wanted: {"node":">=8"} (current: {"node":"4.9.1","npm":"2.15.11"})
#7 16.73 npm WARN engine ansi-styles@4.3.0: wanted: {"node":">=8"} (current: {"node":"4.9.1","npm":"2.15.11"})
#7 16.75 npm WARN engine utf-8-validate@5.0.7: wanted: {"node":">=6.14.2"} (current: {"node":"4.9.1","npm":"2.15.11"})
#7 16.75 npm WARN engine bufferutil@4.0.5: wanted: {"node":">=6.14.2"} (current: {"node":"4.9.1","npm":"2.15.11"})
#7 16.81 npm WARN engine string-width@4.2.3: wanted: {"node":">=8"} (current: {"node":"4.9.1","npm":"2.15.11"})
#7 16.82 npm WARN engine get-caller-file@2.0.5: wanted: {"node":"6.* || 8.* || >= 10.*"} (current: {"node":"4.9.1","npm":"2.15.11"})
#7 16.82 npm WARN engine y18n@5.0.8: wanted: {"node":">=10"} (current: {"node":"4.9.1","npm":"2.15.11"})
#7 16.82 npm WARN engine yargs-parser@20.2.9: wanted: {"node":">=10"} (current: {"node":"4.9.1","npm":"2.15.11"})
#7 16.82 npm WARN engine escalade@3.1.1: wanted: {"node":">=6"} (current: {"node":"4.9.1","npm":"2.15.11"})
#7 16.84 npm WARN engine p-locate@4.1.0: wanted: {"node":">=8"} (current: {"node":"4.9.1","npm":"2.15.11"})
#7 17.31 npm WARN engine is-fullwidth-code-point@3.0.0: wanted: {"node":">=8"} (current: {"node":"4.9.1","npm":"2.15.11"})
#7 17.31 npm WARN engine strip-ansi@6.0.1: wanted: {"node":">=8"} (current: {"node":"4.9.1","npm":"2.15.11"})
#7 17.50 npm WARN engine color-convert@2.0.1: wanted: {"node":">=7.0.0"} (current: {"node":"4.9.1","npm":"2.15.11"})
#7 17.56 npm WARN engine strip-ansi@6.0.1: wanted: {"node":">=8"} (current: {"node":"4.9.1","npm":"2.15.11"})
#7 17.56 npm WARN engine wrap-ansi@7.0.0: wanted: {"node":">=10"} (current: {"node":"4.9.1","npm":"2.15.11"})
#7 17.72 npm WARN engine ansi-regex@5.0.1: wanted: {"node":">=8"} (current: {"node":"4.9.1","npm":"2.15.11"})
#7 17.83 npm WARN engine ansi-regex@5.0.1: wanted: {"node":">=8"} (current: {"node":"4.9.1","npm":"2.15.11"})
#7 17.84 npm WARN engine ansi-styles@4.3.0: wanted: {"node":">=8"} (current: {"node":"4.9.1","npm":"2.15.11"})
#7 17.99 npm WARN engine color-convert@2.0.1: wanted: {"node":">=7.0.0"} (current: {"node":"4.9.1","npm":"2.15.11"})
#7 18.08
#7 18.08 > bufferutil@4.0.5 install /usr/src/app/node_modules/browser-sync/node_modules/socket.io/node_modules/engine.io/node_modules/bufferutil
#7 18.08 > node-gyp-build
#7 18.08
#7 20.22 make: Entering directory '/usr/src/app/node_modules/browser-sync/node_modules/socket.io/node_modules/engine.io/node_modules/bufferutil/build'
#7 20.22 CC(target) Release/obj.target/bufferutil/src/bufferutil.o
#7 20.24 ../src/bufferutil.c:3:22: fatal error: node_api.h: No such file or directory
#7 20.24 #include <node_api.h>
#7 20.24 ^
#7 20.24 compilation terminated.
#7 20.26 make: *** [Release/obj.target/bufferutil/src/bufferutil.o] Error 1
#7 20.26 bufferutil.target.mk:96: recipe for target 'Release/obj.target/bufferutil/src/bufferutil.o' failed
#7 20.26 make: Leaving directory '/usr/src/app/node_modules/browser-sync/node_modules/socket.io/node_modules/engine.io/node_modules/bufferutil/build'
#7 20.26 gyp ERR! build error
#7 20.27 gyp ERR! stack Error: `make` failed with exit code: 2
#7 20.27 gyp ERR! stack at ChildProcess.onExit (/usr/local/lib/node_modules/npm/node_modules/node-gyp/lib/build.js:276:23)
#7 20.27 gyp ERR! stack at emitTwo (events.js:87:13)
#7 20.27 gyp ERR! stack at ChildProcess.emit (events.js:172:7)
#7 20.27 gyp ERR! stack at Process.ChildProcess._handle.onexit (internal/child_process.js:211:12)
#7 20.27 gyp ERR! System Linux 5.10.60.1-microsoft-standard-WSL2
#7 20.27 gyp ERR! command "/usr/local/bin/node" "/usr/local/lib/node_modules/npm/node_modules/node-gyp/bin/node-gyp.js" "rebuild"
#7 20.27 gyp ERR! cwd /usr/src/app/node_modules/browser-sync/node_modules/socket.io/node_modules/engine.io/node_modules/bufferutil
#7 20.27 gyp ERR! node -v v4.9.1
#7 20.27 gyp ERR! node-gyp -v v3.4.0
#7 20.28 gyp ERR! not ok
#7 20.28 npm WARN engine p-limit@2.3.0: wanted: {"node":">=6"} (current: {"node":"4.9.1","npm":"2.15.11"})
#7 20.28 npm WARN engine p-try@2.2.0: wanted: {"node":">=6"} (current: {"node":"4.9.1","npm":"2.15.11"})
#7 20.29
#7 20.29 > utf-8-validate@5.0.7 install /usr/src/app/node_modules/browser-sync/node_modules/socket.io/node_modules/engine.io/node_modules/utf-8-validate
#7 20.29 > node-gyp-build
#7 20.29
#7 21.23 make: Entering directory '/usr/src/app/node_modules/browser-sync/node_modules/socket.io/node_modules/engine.io/node_modules/utf-8-validate/build'
#7 21.24 CC(target) Release/obj.target/validation/src/validation.o
#7 21.26 ../src/validation.c:4:22: fatal error: node_api.h: No such file or directory
#7 21.26 #include <node_api.h>
#7 21.26 ^
#7 21.26 compilation terminated.
#7 21.30 validation.target.mk:96: recipe for target 'Release/obj.target/validation/src/validation.o' failed
#7 21.30 make: Leaving directory '/usr/src/app/node_modules/browser-sync/node_modules/socket.io/node_modules/engine.io/node_modules/utf-8-validate/build'
#7 21.30 make: *** [Release/obj.target/validation/src/validation.o] Error 1
#7 21.30 gyp ERR! build error
#7 21.31 gyp ERR! stack Error: `make` failed with exit code: 2
#7 21.31 gyp ERR! stack at ChildProcess.onExit (/usr/local/lib/node_modules/npm/node_modules/node-gyp/lib/build.js:276:23)
#7 21.31 gyp ERR! stack at emitTwo (events.js:87:13)
#7 21.31 gyp ERR! stack at ChildProcess.emit (events.js:172:7)
#7 21.31 gyp ERR! stack at Process.ChildProcess._handle.onexit (internal/child_process.js:211:12)
#7 21.31 gyp ERR! System Linux 5.10.60.1-microsoft-standard-WSL2
#7 21.31 gyp ERR! command "/usr/local/bin/node" "/usr/local/lib/node_modules/npm/node_modules/node-gyp/bin/node-gyp.js" "rebuild"
#7 21.31 gyp ERR! cwd /usr/src/app/node_modules/browser-sync/node_modules/socket.io/node_modules/engine.io/node_modules/utf-8-validate
#7 21.31 gyp ERR! node -v v4.9.1
#7 21.31 gyp ERR! node-gyp -v v3.4.0
#7 21.31 gyp ERR! not ok
#7 22.47 npm WARN engine ws@7.4.6: wanted: {"node":">=8.3.0"} (current: {"node":"4.9.1","npm":"2.15.11"})
#7 22.47 npm WARN engine ws@7.4.6: wanted: {"node":">=8.3.0"} (current: {"node":"4.9.1","npm":"2.15.11"})
#7 22.92 npm WARN engine utf-8-validate@5.0.7: wanted: {"node":">=6.14.2"} (current: {"node":"4.9.1","npm":"2.15.11"})
#7 22.93 npm WARN engine bufferutil@4.0.5: wanted: {"node":">=6.14.2"} (current: {"node":"4.9.1","npm":"2.15.11"})
#7 22.93 npm WARN engine utf-8-validate@5.0.7: wanted: {"node":">=6.14.2"} (current: {"node":"4.9.1","npm":"2.15.11"})
#7 22.93 npm WARN engine bufferutil@4.0.5: wanted: {"node":">=6.14.2"} (current: {"node":"4.9.1","npm":"2.15.11"})
#7 23.22
#7 23.22 > utf-8-validate@5.0.7 install /usr/src/app/node_modules/browser-sync/node_modules/socket.io/node_modules/socket.io-client/node_modules/engine.io-client/node_modules/utf-8-validate
#7 23.22 > node-gyp-build
#7 23.22
#7 24.18 make: Entering directory '/usr/src/app/node_modules/browser-sync/node_modules/socket.io/node_modules/socket.io-client/node_modules/engine.io-client/node_modules/utf-8-validate/build'
#7 24.18 CC(target) Release/obj.target/validation/src/validation.o
#7 24.21 ../src/validation.c:4:22: fatal error: node_api.h: No such file or directory
#7 24.21 #include <node_api.h>
#7 24.21 ^
#7 24.21 compilation terminated.
#7 24.26 validation.target.mk:96: recipe for target 'Release/obj.target/validation/src/validation.o' failed
#7 24.27 make: *** [Release/obj.target/validation/src/validation.o] Error 1
#7 24.27 make: Leaving directory '/usr/src/app/node_modules/browser-sync/node_modules/socket.io/node_modules/socket.io-client/node_modules/engine.io-client/node_modules/utf-8-validate/build'
#7 24.27 gyp ERR! build error
#7 24.28 gyp ERR! stack Error: `make` failed with exit code: 2
#7 24.28 gyp ERR! stack at ChildProcess.onExit (/usr/local/lib/node_modules/npm/node_modules/node-gyp/lib/build.js:276:23)
#7 24.28 gyp ERR! stack at emitTwo (events.js:87:13)
#7 24.28 gyp ERR! stack at ChildProcess.emit (events.js:172:7)
#7 24.28 gyp ERR! stack at Process.ChildProcess._handle.onexit (internal/child_process.js:211:12)
#7 24.28 gyp ERR! System Linux 5.10.60.1-microsoft-standard-WSL2
#7 24.28 gyp ERR! command "/usr/local/bin/node" "/usr/local/lib/node_modules/npm/node_modules/node-gyp/bin/node-gyp.js" "rebuild"
#7 24.28 gyp ERR! cwd /usr/src/app/node_modules/browser-sync/node_modules/socket.io/node_modules/socket.io-client/node_modules/engine.io-client/node_modules/utf-8-validate
#7 24.28 gyp ERR! node -v v4.9.1
#7 24.28 gyp ERR! node-gyp -v v3.4.0
#7 24.28 gyp ERR! not ok
#7 24.28
#7 24.28 > utf-8-validate@5.0.7 install /usr/src/app/node_modules/browser-sync/node_modules/browser-sync-ui/node_modules/socket.io-client/node_modules/engine.io-client/node_modules/utf-8-validate
#7 24.28 > node-gyp-build
#7 24.28
#7 25.19 make: Entering directory '/usr/src/app/node_modules/browser-sync/node_modules/browser-sync-ui/node_modules/socket.io-client/node_modules/engine.io-client/node_modules/utf-8-validate/build'
#7 25.20 CC(target) Release/obj.target/validation/src/validation.o
#7 25.23 ../src/validation.c:4:22: fatal error: node_api.h: No such file or directory
#7 25.23 #include <node_api.h>
#7 25.23 ^
#7 25.23 compilation terminated.
#7 25.30 validation.target.mk:96: recipe for target 'Release/obj.target/validation/src/validation.o' failed
#7 25.30 make: Leaving directory '/usr/src/app/node_modules/browser-sync/node_modules/browser-sync-ui/node_modules/socket.io-client/node_modules/engine.io-client/node_modules/utf-8-validate/build'
#7 25.30 make: *** [Release/obj.target/validation/src/validation.o] Error 1
#7 25.30 gyp ERR! build error
#7 25.30 gyp ERR! stack Error: `make` failed with exit code: 2
#7 25.30 gyp ERR! stack at ChildProcess.onExit (/usr/local/lib/node_modules/npm/node_modules/node-gyp/lib/build.js:276:23)
#7 25.30 gyp ERR! stack at emitTwo (events.js:87:13)
#7 25.30 gyp ERR! stack at ChildProcess.emit (events.js:172:7)
#7 25.30 gyp ERR! stack at Process.ChildProcess._handle.onexit (internal/child_process.js:211:12)
#7 25.30 gyp ERR! System Linux 5.10.60.1-microsoft-standard-WSL2
#7 25.30 gyp ERR! command "/usr/local/bin/node" "/usr/local/lib/node_modules/npm/node_modules/node-gyp/bin/node-gyp.js" "rebuild"
#7 25.30 gyp ERR! cwd /usr/src/app/node_modules/browser-sync/node_modules/browser-sync-ui/node_modules/socket.io-client/node_modules/engine.io-client/node_modules/utf-8-validate
#7 25.31 gyp ERR! node -v v4.9.1
#7 25.31 gyp ERR! node-gyp -v v3.4.0
#7 25.31 gyp ERR! not ok
#7 25.32
#7 25.32 > bufferutil@4.0.5 install /usr/src/app/node_modules/browser-sync/node_modules/socket.io/node_modules/socket.io-client/node_modules/engine.io-client/node_modules/bufferutil
#7 25.32 > node-gyp-build
#7 25.32
#7 26.20 make: Entering directory '/usr/src/app/node_modules/browser-sync/node_modules/socket.io/node_modules/socket.io-client/node_modules/engine.io-client/node_modules/bufferutil/build'
#7 26.21 CC(target) Release/obj.target/bufferutil/src/bufferutil.o
#7 26.23 ../src/bufferutil.c:3:22: fatal error: node_api.h: No such file or directory
#7 26.23 #include <node_api.h>
#7 26.23 ^
#7 26.23 compilation terminated.
#7 26.26 bufferutil.target.mk:96: recipe for target 'Release/obj.target/bufferutil/src/bufferutil.o' failed
#7 26.26 make: Leaving directory '/usr/src/app/node_modules/browser-sync/node_modules/socket.io/node_modules/socket.io-client/node_modules/engine.io-client/node_modules/bufferutil/build'
#7 26.26 make: *** [Release/obj.target/bufferutil/src/bufferutil.o] Error 1
#7 26.26 gyp ERR! build error
#7 26.27 gyp ERR! stack Error: `make` failed with exit code: 2
#7 26.27 gyp ERR! stack at ChildProcess.onExit (/usr/local/lib/node_modules/npm/node_modules/node-gyp/lib/build.js:276:23)
#7 26.27 gyp ERR! stack at emitTwo (events.js:87:13)
#7 26.27 gyp ERR! stack at ChildProcess.emit (events.js:172:7)
#7 26.27 gyp ERR! stack at Process.ChildProcess._handle.onexit (internal/child_process.js:211:12)
#7 26.27 gyp ERR! System Linux 5.10.60.1-microsoft-standard-WSL2
#7 26.27 gyp ERR! command "/usr/local/bin/node" "/usr/local/lib/node_modules/npm/node_modules/node-gyp/bin/node-gyp.js" "rebuild"
#7 26.27 gyp ERR! cwd /usr/src/app/node_modules/browser-sync/node_modules/socket.io/node_modules/socket.io-client/node_modules/engine.io-client/node_modules/bufferutil
#7 26.27 gyp ERR! node -v v4.9.1
#7 26.27 gyp ERR! node-gyp -v v3.4.0
#7 26.27 gyp ERR! not ok
#7 26.28
#7 26.28 > bufferutil@4.0.5 install /usr/src/app/node_modules/browser-sync/node_modules/browser-sync-ui/node_modules/socket.io-client/node_modules/engine.io-client/node_modules/bufferutil
#7 26.28 > node-gyp-build
#7 26.28
#7 27.07 make: Entering directory '/usr/src/app/node_modules/browser-sync/node_modules/browser-sync-ui/node_modules/socket.io-client/node_modules/engine.io-client/node_modules/bufferutil/build'
#7 27.08 CC(target) Release/obj.target/bufferutil/src/bufferutil.o
#7 27.09 ../src/bufferutil.c:3:22: fatal error: node_api.h: No such file or directory
#7 27.09 #include <node_api.h>
#7 27.09 ^
#7 27.09 compilation terminated.
#7 27.12 make: *** [Release/obj.target/bufferutil/src/bufferutil.o] Error 1
#7 27.12 bufferutil.target.mk:96: recipe for target 'Release/obj.target/bufferutil/src/bufferutil.o' failed
#7 27.12 make: Leaving directory '/usr/src/app/node_modules/browser-sync/node_modules/browser-sync-ui/node_modules/socket.io-client/node_modules/engine.io-client/node_modules/bufferutil/build'
#7 27.12 gyp ERR! build error
#7 27.12 gyp ERR! stack Error: `make` failed with exit code: 2
#7 27.12 gyp ERR! stack at ChildProcess.onExit (/usr/local/lib/node_modules/npm/node_modules/node-gyp/lib/build.js:276:23)
#7 27.12 gyp ERR! stack at emitTwo (events.js:87:13)
#7 27.12 gyp ERR! stack at ChildProcess.emit (events.js:172:7)
#7 27.13 gyp ERR! stack at Process.ChildProcess._handle.onexit (internal/child_process.js:211:12)
#7 27.13 gyp ERR! System Linux 5.10.60.1-microsoft-standard-WSL2
#7 27.13 gyp ERR! command "/usr/local/bin/node" "/usr/local/lib/node_modules/npm/node_modules/node-gyp/bin/node-gyp.js" "rebuild"
#7 27.13 gyp ERR! cwd /usr/src/app/node_modules/browser-sync/node_modules/browser-sync-ui/node_modules/socket.io-client/node_modules/engine.io-client/node_modules/bufferutil
#7 27.13 gyp ERR! node -v v4.9.1
#7 27.13 gyp ERR! node-gyp -v v3.4.0
#7 27.13 gyp ERR! not ok
#7 27.13 npm ERR! Linux 5.10.60.1-microsoft-standard-WSL2
#7 27.13 npm ERR! argv "/usr/local/bin/node" "/usr/local/bin/npm" "install"
#7 27.13 npm ERR! node v4.9.1
#7 27.13 npm ERR! npm v2.15.11
#7 27.13 npm ERR! code ELIFECYCLE
#7 27.13
#7 27.13 npm ERR! bufferutil@4.0.5 install: `node-gyp-build`
#7 27.14 npm ERR! Exit status 1
#7 27.14 npm ERR!
#7 27.14 npm ERR! Failed at the bufferutil@4.0.5 install script 'node-gyp-build'.
#7 27.14 npm ERR! This is most likely a problem with the bufferutil package,
#7 27.14 npm ERR! not with npm itself.
#7 27.14 npm ERR! Tell the author that this fails on your system:
#7 27.14 npm ERR! node-gyp-build
#7 27.14 npm ERR! You can get information on how to open an issue for this project with:
#7 27.14 npm ERR! npm bugs bufferutil
#7 27.14 npm ERR! Or if that isn't available, you can get their info via:
#7 27.14 npm ERR!
#7 27.14 npm ERR! npm owner ls bufferutil
#7 27.14 npm ERR! There is likely additional logging output above.
#7 28.76
#7 28.76 npm ERR! Please include the following file with any support request:
#7 28.76 npm ERR! /usr/src/app/npm-debug.log
------
executor failed running [/bin/sh -c npm install && npm cache clean --force]: exit code: 1
PS D:\projects\games\NotRanged.github.io>
Uhhhhhh
I don't use Docker; the readme is old, from the original version of the tool. I just use the browser-sync option and it works just fine.
Take a look at this @NotRanged: #14
It's been merged, let me know if this fixed it for you @qwhisper
It's been merged, let me know if this fixed it for you @qwhisper
Yes, it works. Docker is a bit better option for me, as I haven't done active JS development in the last two years and my Node.js install was dated. If Docker is installed, it is the easiest way to run the app, since there is no need to check and update Node.
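For reference, both failure logs above trace back to the ancient `node:4` base image: the dependencies (browser-sync 2.27 and friends) require Node >= 8, and node-gyp on Node 4 cannot find `node_api.h`. The dev image only sets `WORKDIR`, so it apparently expects the source to be bind-mounted, e.g. `docker run -v "$PWD":/usr/src/app -p 8001:8001 ...` (an assumption based on the ENOENT for package.json). A minimal sketch of an updated Dockerfile, assuming the app's npm scripts are otherwise unchanged (the exact image tag is an assumption):

```dockerfile
# Assumption: any maintained Node LTS image (>= 8) works; 16 is just an example.
FROM node:16

WORKDIR /usr/src/app

# Install dependencies first so this layer is cached between builds.
COPY package.json ./
RUN npm install && npm cache clean --force

COPY . .

EXPOSE 8001
CMD ["npm", "start"]
```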
| gharchive/issue | 2021-12-20T05:58:48 | 2025-04-01T04:32:53.419707 | {
"authors": [
"FaiThiX",
"NotRanged",
"qwhisper"
],
"repo": "NotRanged/NotRanged.github.io",
"url": "https://github.com/NotRanged/NotRanged.github.io/issues/10",
"license": "Zlib",
"license_type": "permissive",
"license_source": "github-api"
} |
139097076 | Update PushCommand.cs
`timeout.Seconds` is a bug here: the timeout never works, because it takes only the seconds component of the TimeSpan. That means if you have 00:10:00.000, this value will be 0. Instead you need to use `timeout.TotalSeconds`, which represents the whole TimeSpan in seconds, so for a 00:10:00.000 timespan the result will be 600, as intended.
I have a 40 MB file that I had to push to NuGet, and I got a timeout after 5 minutes no matter what timeout I put in the command. I made the fix with Reflexil and it works as I need, so please make this change so that people who use NuGet 2 can use the timeout option correctly.
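To illustrate the component-vs-total distinction, here is a small sketch using Python's `timedelta` as a stand-in for .NET's `TimeSpan` (Python has no exact equivalent of `TimeSpan.Seconds`, so the component is computed explicitly):

```python
from datetime import timedelta

timeout = timedelta(minutes=10)

# .NET's TimeSpan.Seconds returns only the seconds *component* of the
# hours:minutes:seconds breakdown, so 00:10:00 yields 0. Python's
# timedelta.seconds is the total seconds within a day (600 here), so we
# take % 60 to mimic the component.
seconds_part = timeout.seconds % 60

# TimeSpan.TotalSeconds (like total_seconds() here) is the whole span.
total_seconds = timeout.total_seconds()

print(seconds_part)   # 0
print(total_seconds)  # 600.0
```

Passing the component (0) as the request timeout is what made every push fail after the default 5 minutes.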
Hi @anderhil, I'm your friendly neighborhood .NET Foundation Pull Request Bot (You can call me DNFBOT). Thanks for your contribution!
This seems like a small (but important) contribution, so no Contribution License Agreement is required at this point. Real humans will now evaluate your PR.
TTYL, DNFBOT;
Thanks for a cool fix; however, the push command is no longer built from this repository, and I believe the issue is fixed there (though I might be completely wrong :) )
The new code is here https://github.com/NuGet/NuGet.Client/blob/dev/src/NuGet.Core/NuGet.Protocol.Core.v3/Resources/PackageUpdateResource.cs#L47
Oh and of course you can pick up a build from here https://myget.org/gallery/nugetbuild
Thanks for the answer. In our project we use NuGet 2 and VS 2013, so I cannot use the 3rd version of NuGet. That's why I asked for the update; I know it's fixed in the 3rd version.
Push is only in nuget.exe. We don't plan to ever ship another version of the v2 nuget.exe, and nuget.exe 3.4 is compatible, so you can totally use it with VS 2013.
| gharchive/pull-request | 2016-03-07T21:17:48 | 2025-04-01T04:32:53.539094 | {
"authors": [
"anderhil",
"dnfclas",
"yishaigalatzer"
],
"repo": "NuGet/NuGet2",
"url": "https://github.com/NuGet/NuGet2/pull/40",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
91893806 | update nuget gallery to use the latest nuspec validation helpers
We need to update the schema validation in the Gallery when uploading a package, per the latest changes in the 3.0 `nuget pack` command.
Obsolete
| gharchive/issue | 2015-06-29T19:56:44 | 2025-04-01T04:32:53.540354 | {
"authors": [
"bhuvak",
"maartenba"
],
"repo": "NuGet/NuGetGallery",
"url": "https://github.com/NuGet/NuGetGallery/issues/2570",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
155779993 | NuGet v2 feed returning incorrect license acceptance information
The NuGet v2 feed - https://www.nuget.org/api/v2/ seems to be intermittently returning incorrect information for the license acceptance flag on a NuGet package.
The latest Microsoft.Bcl.Build NuGet package is one example where it seems to be returning false when it should be true. The side effect of this is that Visual Studio 2013 and Xamarin Studio will not prompt the user to accept the license agreement. Note that sometimes you do see a prompt.
The NuGet v3 feed seems to be OK.
I used Fiddler to run a query directly against NuGet.org so I could see the raw data:
https://www.nuget.org/api/v2/Search()?$filter=IsLatestVersion&$skip=0&$top=30&searchTerm='bcl.build'&targetFramework='net45'&includePrerelease=false
The data returned has `<d:RequireLicenseAcceptance m:type="Edm.Boolean">false</d:RequireLicenseAcceptance>`, which seems to be incorrect for the Bcl.Build NuGet package.
<m:properties>
<d:Id>Microsoft.Bcl.Build</d:Id>
<d:Version>1.0.21</d:Version>
<d:NormalizedVersion>1.0.21</d:NormalizedVersion>
<d:Authors>Microsoft</d:Authors>
<d:Copyright>Copyright © Microsoft Corporation</d:Copyright>
<d:Created m:type="Edm.DateTime">2014-09-09T19:18:48.487Z</d:Created>
<d:Dependencies></d:Dependencies>
<d:Description>
This package provides build infrastructure components so that projects referencing specific Microsoft packages can successfully build.
Do not directly reference this packages unless you receive a build warning that instructs you to add a reference.
</d:Description>
<d:DownloadCount m:type="Edm.Int32">5812609</d:DownloadCount>
<d:GalleryDetailsUrl>https://www.nuget.org/packages/Microsoft.Bcl.Build/1.0.21</d:GalleryDetailsUrl>
<d:IconUrl>http://go.microsoft.com/fwlink/?LinkID=288859</d:IconUrl>
<d:IsLatestVersion m:type="Edm.Boolean">true</d:IsLatestVersion>
<d:IsAbsoluteLatestVersion m:type="Edm.Boolean">true</d:IsAbsoluteLatestVersion>
<d:IsPrerelease m:type="Edm.Boolean">false</d:IsPrerelease>
<d:Language m:null="true" />
<d:LastUpdated m:type="Edm.DateTime">2014-09-09T19:18:48.487Z</d:LastUpdated>
<d:Published m:type="Edm.DateTime">2014-09-09T19:18:48.487Z</d:Published>
<d:PackageHash>sgHu4mIt0+NVGyI12Bj4hLPypNK55UOH+ologj2LqDCjxq3EbIxe/uAtHjY+fEwbE1dtsAHG8SXHf+V/EYbKTg==</d:PackageHash>
<d:PackageHashAlgorithm>SHA512</d:PackageHashAlgorithm>
<d:PackageSize m:type="Edm.Int64">31401</d:PackageSize>
<d:ProjectUrl>http://go.microsoft.com/fwlink/?LinkID=296436</d:ProjectUrl>
<d:ReportAbuseUrl>https://www.nuget.org/packages/Microsoft.Bcl.Build/1.0.21/ReportAbuse</d:ReportAbuseUrl>
<d:ReleaseNotes m:null="true" />
<d:RequireLicenseAcceptance m:type="Edm.Boolean">false</d:RequireLicenseAcceptance>
<d:Summary>Provides build infrastructure components for Microsoft packages.</d:Summary>
<d:Tags>BCL Microsoft System</d:Tags>
<d:Title>Microsoft BCL Build Components</d:Title>
<d:VersionDownloadCount m:type="Edm.Int32">1745347</d:VersionDownloadCount>
<d:MinClientVersion>2.8.1</d:MinClientVersion>
<d:LastEdited m:null="true" />
<d:LicenseUrl>http://go.microsoft.com/fwlink/?LinkId=329770</d:LicenseUrl>
<d:LicenseNames m:null="true" />
<d:LicenseReportUrl m:null="true" />
</m:properties>
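As a sketch, the flag can be checked programmatically from a returned entry like the one above (namespaces are declared inline here for illustration; in the real feed they come from the surrounding Atom envelope):

```python
import xml.etree.ElementTree as ET

# Trimmed properties element from the v2 feed response, with the OData
# namespaces declared inline so the snippet parses standalone.
snippet = """<m:properties
    xmlns:m="http://schemas.microsoft.com/ado/2007/08/dataservices/metadata"
    xmlns:d="http://schemas.microsoft.com/ado/2007/08/dataservices">
  <d:Id>Microsoft.Bcl.Build</d:Id>
  <d:RequireLicenseAcceptance m:type="Edm.Boolean">false</d:RequireLicenseAcceptance>
</m:properties>"""

ns = {"d": "http://schemas.microsoft.com/ado/2007/08/dataservices"}
props = ET.fromstring(snippet)
require = props.find("d:RequireLicenseAcceptance", ns).text == "true"
print(require)  # False -- so no license prompt is shown
```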
Thanks, looking into it...
We will do a reindex to reflect the correct values. Good catch!
The issue has been resolved.
| gharchive/issue | 2016-05-19T16:28:49 | 2025-04-01T04:32:53.545399 | {
"authors": [
"maartenba",
"mrward"
],
"repo": "NuGet/NuGetGallery",
"url": "https://github.com/NuGet/NuGetGallery/issues/3039",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
257861695 | Invalid version range in v3 registration blob causes client to ignore all versions
https://api.nuget.org/v3/registration3-gz-semver2/microsoft.visualstudio.services.gallery.webapi/index.json contains an invalid version range of [15.106.0.preview] which causes the client to fail when finding the package for packages.config installs.
The result is the error:
Unable to resolve dependency 'Microsoft.VisualStudio.Services.Gallery.WebApi'.
The client could handle this better by only skipping the invalid package version, however I think this should also be handled better on the server side. Invalid packages that cannot be used should not appear in the feed, and invalid data should not appear in the feed even if it is in the nuspec file.
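One possible client-side mitigation is to validate each version entry and skip the ones that fail to parse, rather than aborting the whole resolution. A rough Python sketch (the regex is a simplified stand-in for real NuGet version-range parsing): `15.106.0.preview` fails because a prerelease tag must be attached with a hyphen, e.g. `15.106.0-preview`.

```python
import re

# Simplified SemVer-style check; real NuGet range parsing is more involved.
VERSION = re.compile(r"^\d+(\.\d+){1,3}(-[0-9A-Za-z.-]+)?$")

def usable_versions(entries):
    """Keep parseable versions, logging and skipping invalid ranges."""
    usable = []
    for raw in entries:
        version = raw.strip("[]()")
        if VERSION.match(version):
            usable.append(version)
        else:
            print(f"skipping invalid version range {raw!r}")
    return usable

print(usable_versions(["[15.106.0.preview]", "[15.112.0-preview]"]))
# skipping invalid version range '[15.106.0.preview]'
# ['15.112.0-preview']
```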
Closing this as dupe https://github.com/NuGet/NuGetGallery/issues/3482 since it has more information.
| gharchive/issue | 2017-09-14T21:26:39 | 2025-04-01T04:32:53.548636 | {
"authors": [
"emgarten",
"shishirx34"
],
"repo": "NuGet/NuGetGallery",
"url": "https://github.com/NuGet/NuGetGallery/issues/4684",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
476420893 | fix "プッシュ"→"push"
https://docs.microsoft.com/ja-jp/nuget/tools/cli-ref-push
The PR is not being built because the target branch live is filtered in .openpublishing.publish.config.json on live branch.
Dear @hyoshioka0128,
Thank you for your contribution. We are processing your suggestion; please wait a moment.
Best regards,
The Microsoft Docs Global Experience Team
Dear @hyoshioka0128,
Thank you for your suggestion. We agree with it, so we will merge this PR. Your suggestion will also be applied in the next update of the article.
Best regards,
The Microsoft Docs Global Experience Team
| gharchive/pull-request | 2019-08-03T06:17:18 | 2025-04-01T04:32:53.551686 | {
"authors": [
"hyoshioka0128",
"olprod",
"srvbpigh"
],
"repo": "NuGet/docs.microsoft.com-nuget.ja-jp",
"url": "https://github.com/NuGet/docs.microsoft.com-nuget.ja-jp/pull/117",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
339149864 | Project file reader should check for no-version
Project file reader should check for a package imported with no-version
This is a thing that happens.
e.g. <PackageReference Include="Microsoft.AspNetCore.App" />
This is a very basic fix for #311
Good point. I have specifically put in a test with a version range, and done the TryParse.
I still want different wording in the log message for nothing vs. something that could not be parsed.
| gharchive/pull-request | 2018-07-07T14:28:00 | 2025-04-01T04:32:53.553307 | {
"authors": [
"AnthonySteele"
],
"repo": "NuKeeperDotNet/NuKeeper",
"url": "https://github.com/NuKeeperDotNet/NuKeeper/pull/315",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
212276944 | API for plugins to add extra systems
As well as our modules, it would be nice to add an API that plugins can use to add their own listeners/commands to the bridge.
What would we want to expose?
Access to the bot and the ability to create and register new modules would be what I think an outside developer would need.
| gharchive/issue | 2017-03-06T22:58:14 | 2025-04-01T04:32:53.577852 | {
"authors": [
"Mohron",
"dualspiral"
],
"repo": "NucleusPowered/Phonon",
"url": "https://github.com/NucleusPowered/Phonon/issues/8",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
254539372 | Search icon position improvement, whole tag text clickeable instead of just the number, title when hovering comment count to indicate that it is a comment count, slight css improvements
Should turn this
Into this
And other small improvements
Coverage remained the same at 39.704% when pulling 3ccfef7eb03af91875451c9b4af2def7911b5df0 on Kiloutre:patch-77 into 2ade6fac8d4547f40b42234138a00ab01e0fc32c on NyaaPantsu:dev.
Coverage remained the same at 39.704% when pulling 170cb33d0b0f42622bc24c628a67d1a0a65daa09 on Kiloutre:patch-77 into 2ade6fac8d4547f40b42234138a00ab01e0fc32c on NyaaPantsu:dev.
Coverage remained the same at 39.704% when pulling bce084bf2fc4390ec5614e7988d24bb83a66e8c0 on Kiloutre:patch-77 into ca7081799de67b39687b36dbc90ee5c7d3eb50cd on NyaaPantsu:dev.
| gharchive/pull-request | 2017-09-01T05:25:07 | 2025-04-01T04:32:53.608828 | {
"authors": [
"Kiloutre",
"coveralls"
],
"repo": "NyaaPantsu/nyaa",
"url": "https://github.com/NyaaPantsu/nyaa/pull/1458",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
367957204 | OpenVPN DNS / Internet issue / NAT issue??
I have the OpenVPN server hosted on an AWS instance. When I connect to it, I can ping other machines in the subnet which is all good. But I cannot reach the internet. I can ping 8.8.8.8 but not google.com.
Routing table (top route is before running OpenVPN and the second one is whilst running OpenVPN)
The private IP is correct and so is the public IP. I initially set it up to use Google's DNS but since changed to Cloudflare, then tried removing it all, but to no avail.
Question is: Am I doing something wrong? Can I make this easier and just let the user use their own DNS. The machines they reach using the OpenVPN are just private machines (only accessible on with a internal IP address). Accessing anything could be routed via them.
@Nyr I think, as you have said, it's an Amazon NAT thing. I have attempted to set this up but to no avail. If you can help do this, I can pay you; for a 5-second job, it might be worth it.
I set up the following NAT on the instance running the OpenVPN:
After creating the NAT gateway, any ideas on what to do next? The program recognises I am behind a NAT and asks for the public IP address. I just give it the elastic IP address that I generated on Amazon's AWS. Should I give it the NAT gateway or something? I am very confused.
Server:
proto udp
dev tun
sndbuf 0
rcvbuf 0
ca ca.crt
cert server.crt
key server.key
dh dh.pem
auth SHA512
tls-auth ta.key 0
topology subnet
server 10.8.0.0 255.255.255.0
ifconfig-pool-persist ipp.txt
push "redirect-gateway def1 bypass-dhcp"
push "dhcp-option DNS 1.1.1.1"
keepalive 10 120
cipher AES-256-CBC
comp-lzo
user nobody
group nogroup
persist-key
persist-tun
status openvpn-status.log
log-append openvpn.log
verb 5
crl-verify crl.pem
script-security 2
up /etc/openvpn/update-resolv-conf
down /etc/openvpn/update-resolv-conf
Client:
client
dev tun
proto udp
sndbuf 0
rcvbuf 0
remote 18.202.129.195 1194
resolv-retry infinite
nobind
persist-key
persist-tun
remote-cert-tls server
auth SHA512
cipher AES-256-CBC
comp-lzo
setenv opt block-outside-dns
key-direction 1
verb 6
I want this removed: /sbin/ip route add 0.0.0.0/1 via 10.8.0.1...
Just want to leave this line: /sbin/ip route add 10.0.0.20/24 via 10.8.0.1
TL;DR: I just want to direct 10.0.0.* traffic through the VPN and nothing else. Any ideas?? This will stop the NAT problem too.
https://forums.openvpn.net/viewtopic.php?t=17942
For anyone wanting to do the same ^^
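One common approach to split tunneling (a sketch based on standard OpenVPN client directives, not taken from the linked forum thread; the 10.0.0.0/24 subnet is taken from the poster's example) is to make the client ignore the server-pushed default route and pin only the private subnet:

```conf
# Client-side sketch: ignore routes pushed by the server
# (including the pushed redirect-gateway default route)
route-nopull
# Route only the private subnet through the tunnel
route 10.0.0.0 255.255.255.0
```

Alternatively, removing the push "redirect-gateway def1 bypass-dhcp" line from the server config shown above achieves the same effect for all clients.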
| gharchive/issue | 2018-10-08T21:20:38 | 2025-04-01T04:32:53.619299 | {
"authors": [
"benspring"
],
"repo": "Nyr/openvpn-install",
"url": "https://github.com/Nyr/openvpn-install/issues/528",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
444364924 | TLS certificate failed handcheck
When I'm trying to connect to the VPN, the output is:
Wed May 15 12:38:42 2019 MANAGEMENT: >STATE:1557916722,WAIT,,,,,,
Wed May 15 12:39:42 2019 TLS Error: TLS key negotiation failed to occur within 60 seconds (check your network connectivity)
Wed May 15 12:39:42 2019 TLS Error: TLS handshake failed
Wed May 15 12:39:42 2019 SIGUSR1[soft,tls-error] received, process restarting
Wed May 15 12:39:42 2019 MANAGEMENT: >STATE:1557916782,RECONNECTING,tls-error,,,,,
Wed May 15 12:39:42 2019 Restart pause, 5 second(s)
Deactivate the firewall and set up port forwarding ;)
Indeed, this is a connectivity issue.
Maybe a firewall, maybe you are trying to connect from a restricted network.
If you are using UDP, try the TCP protocol on port 443 instead, or the UDP protocol on port 53 if you were using TCP.
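As an illustration (hypothetical server address; the server must also be reconfigured to listen on the matching protocol and port), switching a client from UDP/1194 to TCP/443 would mean changing the client config lines to:

```conf
# Replace the original proto/remote lines in the client .ovpn
proto tcp
remote your.server.example 443
```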
| gharchive/issue | 2019-05-15T10:39:59 | 2025-04-01T04:32:53.621374 | {
"authors": [
"MagnusDot",
"NextGenerationcloud",
"Nyr"
],
"repo": "Nyr/openvpn-install",
"url": "https://github.com/Nyr/openvpn-install/issues/613",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
722692808 | The Home page should be removed
It is based on the README file which was only a placeholder. This page should be removed and the new landing page should be the Getting Started one.
The big logo and the note regarding the site's theme are nice, though, so maybe they can be merged into Getting Started.
+1, Getting Started is the best place to start.
| gharchive/issue | 2020-10-15T21:36:54 | 2025-04-01T04:32:53.625440 | {
"authors": [
"earth2marsh",
"segfaultxavi"
],
"repo": "OAI/Documentation",
"url": "https://github.com/OAI/Documentation/issues/14",
"license": "CC-BY-4.0",
"license_type": "permissive",
"license_source": "github-api"
} |
114941299 | Are regex wildcards supported in path?
Is /resources/static/{subpath:*} a valid value for the path?
If yes, then what should be the name of the path param subpath or subpath:*
Here is an example spec:
{
"swaggerVersion": "1.2",
"apiVersion": "",
"basePath": "http://localhost",
"resourcePath": "/resources",
"apis": [
{
"path": "/resources/static/{subpath:*}",
"description": "",
"operations": [
{
"type": "void",
"method": "GET",
"nickname": "staticFromPathParam",
"parameters": [
{
"type": "string",
"paramType": "path",
"name": "subpath",
"description": "subpath of the resource",
"required": true,
"allowMultiple": false
}
]
}
]
}
],
"models": {}
}
which fails swagger-tools validate with:
API Declaration (/resources) Errors:
#/apis/0/operations/0/parameters/0/name: API path parameter could not be resolved: subpath
#/apis/0/path: API requires path parameter but it is not defined: subpath:*
2 errors and 0 warnings
cc @webron @wing328
That's not supported by the spec. You can regex the value of it, but not the name itself.
I want to regex the value.
I want to express that all paths of the form /resources/static/* are correct i.e
/resources/static/a is a valid path, with subpath=a
/resources/static/a/b is also a valid path, with subpath=a/b
How do I do that?
How does the spec differentiate this case from the one where only /resources/static/a is valid and /resources/static/a/b is not?
That wasn't what I was referring to by regexing the value. "/" is not valid in the value of a path parameter. We don't support that part of RFC 6570. See #291 (and others) for more information.
https://github.com/swagger-api/swagger-spec/issues/291 is exactly what I am looking for!
I guess I will just +1 that.
Closing this, thanks for the prompt reply.
PS: We have been using this for a long time by naming our path param subpath:*. swagger-tools validate passes, but of course the generated client code is messed up; it has code like public String staticFromPathParam(String subpath:*))
This seems related to https://github.com/OAI/OpenAPI-Specification/issues/892
| gharchive/issue | 2015-11-04T00:22:42 | 2025-04-01T04:32:53.631338 | {
"authors": [
"Nikoolayy1",
"nikhiljindal",
"webron"
],
"repo": "OAI/OpenAPI-Specification",
"url": "https://github.com/OAI/OpenAPI-Specification/issues/502",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
142290879 | version 3.0: additional formats
As there are several issues proposing new formats, here a list of the possible future full picture
| Common Name | Type | Format | Comments |
|---|---|---|---|
| octet/(unsigned) byte | integer | uint8 | new: unsigned 8 bits |
| signed byte | integer | int8 | new: signed 8 bits |
| short | integer | int16 | new: signed 16 bits |
| integer | integer | int32 | signed 32 bits |
| long | integer | int64 | signed 64 bits |
| big integer | integer | | |
| float/single | number | float | |
| double | number | double | |
| decimal | number | decimal | new: decimal floating-point number, recipient-side internal representation as a binary floating-point number may lead to rounding errors |
| big decimal | number | | |
| string | string | | |
| byte | string | byte | base64 encoded characters |
| url-safe binary | string | base64url | new: base64url encoded characters - #606 |
| binary | string | binary | any sequence of octets |
| boolean | boolean | | |
| date | string | date | As defined by full-date - RFC3339 |
| dateTime | string | date-time | As defined by date-time - RFC3339 |
| time (of day) | string | time | new: As defined by partial-time - RFC3339 - #358 |
| duration | string | duration | new: As defined by xs:dayTimeDuration - XML Schema 1.1 - #359 |
| uuid | string | uuid | new: Universally Unique Identifier (UUID) RFC4122 |
| password | string | password | Used to hint UIs the input needs to be obscured. |
:+1:
The format modifier is optional. Please add integer with no format -- not confined to 32/64 bit (a.k.a. BigInteger) and also number without format which is also not constrained to floating point 32/64 bit (a.k.a. BigDecimal).
Rather than tightly couple to uuid format, I suggest just a generic id format that means an opaque identifier string. Whether an ID is a UUID, a hash, a databse primary key, or something else seems more like an implementation detail that should be hidden from the API specification. Many API's make extensive use of id parameters/members but do not overly-specify them as UUID strings (often, because they are not UUIDs - look at bit.ly hashes for example.)
Slightly related, I would prefer not overloading format with what is really role or attribute. While Swagger 2.0 has password, it is not a format but a role that is orthogonal to format. Ditto for other PII like social security number, government id number, etc. (A more generic role for these might be masked which is a UI hint.)
Thus, while uuid is a format, id (if it were to replace uuid) a role, not a format.
@DavidBiesack I actually intended uuid as a format, i.e. a string that has the pattern
uuid = 8HEXDIG "-" 4HEXDIG "-" 4HEXDIG "-" 4HEXDIG "-" 12HEXDIG
No objections to adding the concept of a role and an id role. Still would require a format uuid.
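The ABNF above maps to a simple regular expression. As a quick illustrative check (not part of any spec text; ABNF HEXDIG is case-insensitive, so both cases are accepted here):

```python
import re

# Regex equivalent of: uuid = 8HEXDIG "-" 4HEXDIG "-" 4HEXDIG "-" 4HEXDIG "-" 12HEXDIG
UUID_RE = re.compile(
    r"[0-9A-Fa-f]{8}-[0-9A-Fa-f]{4}-[0-9A-Fa-f]{4}-[0-9A-Fa-f]{4}-[0-9A-Fa-f]{12}"
)

def is_uuid(value: str) -> bool:
    """Return True if value matches the proposed uuid string format."""
    return UUID_RE.fullmatch(value) is not None

print(is_uuid("123e4567-e89b-12d3-a456-426614174000"))  # True
print(is_uuid("not-a-uuid"))  # False
```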
@ralfhandl Such things should be added into JSON Schema standard or extracted into a separate spec.
I can imagine the situation when JSON Schema Draft 5 add format with the same name but different validation rules.
Another problem,for example, to validate your DB or user input on client-side. For that purpose would use pure JSON Schema and it would be strange to use OpenAPI-specific types over there.
And last one, tooling support for such formats will be limited to only OpenAPI tools.
I'm not against extending formats I just say that spec should reference some external doc and not to define them internally.
IMHO, OpenAPI should describe API-specific stuff and reuse existing data validation specs.
@IvanGoncharov Looking at the list of formats supported by Swagger 2.0 we only find one format that is defined by JSON Schema: date-time. The proposed new formats are in line with the existing swagger-specific formats, so adding them would not enter new ground.
Formats are an explicit extension point of JSON Schema for semantic validation, and the OpenAPI Specification could be one of the "authoritative resources that accurately describes interoperable semantic validation".
I'm not aware of other external documents describing formats for semantic validation in JSON Schema.
I oppose requiring id strings to be UUID. (I'm not sure if that was what you meant by "Still would require a format uuid.")
As noted, I also think it is not a good idea to expose internal implementation details such as UUID format in an API definition. id strings (path parameters, query parameters, fields) should be no more than opaque string IDs. Over-specifying them as UUID is fragile and does not allow for non-breaking changes if the underlying implementation changes. Again, look at the bit.ly API which uses hash ID strings, not UUIDs.
I don't require id strings to be UUIDs, I only require uuid strings to be UUIDs. I see the string format uuid similar to the string format date-time - as a validation rule that restricts the allowed / possible values of a string parameter or property. It tells the client that some string values will be accepted, and others will be refused.
As you pointed out above the concept of an "id" is a role and not a format. So we should introduce this new concept in a new, specific way and not mix it up with format.
Could it be that your concept of an "id" is related to the concept of a "primary key"? See #587
What you expect to gain by formally supporting more formats? I realize the format could play a role in code generation, mock data generation, validation and potentially more so I figured I'd ask. I also ask because while writing Swagger tooling in the past, custom formats were easy to support without OpenAPI/Swagger being involved, especially since OpenAPI/Swagger does not dictate or limit which formats you can/cannot use.
Here are a few examples of Node.js code registering custom JSON Schema formats for various reasons:
For mock data generation: https://github.com/apigee-127/sway/blob/master/lib/validation/format-generators.js
For document, request and response validation: https://github.com/apigee-127/sway/blob/master/lib/validation/format-validators.js
Thanks, @ralfhandl for confirming -- makes sense for uuid to be an (optional) format, and id to be a role.
To answer your second question, an id may be a primary key, or there may be a mapping between the two. I want the resource and representation to remain decoupled from the implementation. I'll quote Mike Amundsen:
"Your storage model is not your object model is not your resource model is not your representation model."
"Your storage model is not your object model is not your resource model is not your representation model."
:+1:
@whitlockjc Code generation, mock data generation, validation, easier use of tools that know these formats out-of-the-box, better interoperability due to common agreement on what is e.g. a time or duration, ...
There seems to be demand for more pre-defined formats, see #358, #359, #606, and https://github.com/json-schema/json-schema/wiki/"format"-suggestions.
We are currently using type: number, format: decimal for money values (to make it explicit that these ought to not be mapped to some binary floating point number). Not sure if this needs standardizing.
@ePaul We came up with the same solution for numeric values with decimal mantissa when mapping primitive types to JSON Schema types and formats. If we can find a third person who did this, it's a pattern :-)
We also intended to add a precision extension keyword in our JSON Schema representation for conveying the length of the decimal mantissa, e.g. precision: 34 for a 128-bit decimal floating-point type.
Is that something that you'd also find useful?
Some of our customers needed decimal format for specifying monetary values. Hence we ended up supporting format decimal in our project AutoRest.
I think we're in agreement that there are many people using many formats outside of the documented ones. The question is whether this belongs in the OpenAPI specification as some sort of "formal support" or whether this is a tooling problem. It could very well be both.
I will tag this appropriately so we can discuss.
For code generation we need well defined formats. Since swagger spec defines the REST API, it becomes a contract that server and client need to abide by. It is always nice if your contract is explicit about everything.
Just making an analogy to make my point:
Imagine leasing a house where the contract has many loose ends left for the owner and tenant to interpret as per their choice. This wouldn't be a good scenario.
It seems that type must be a constrained type. format can be interpreted by the code generation. For example:
type: string
format: uuid
may fall back to String if UUID is not supported. But you cannot invent a type.
If that's not the mentality, then we must constrain all formats to a fixed set, which may be hard to support inside the OAI.
Code generation, like validation, is a tooling concern to me and does not necessarily need OpenAPI changes, for reasons I mentioned above. But one thing I just thought of that could make supporting this make sense would be where the OpenAPI wanted to dictate a minimum set of formats all tools must support. I could see that being useful.
@whitlockjc yes, and with a fallback to primitive types, if not supported. I don't think we should be inventing types.
In general, if we specify a format, we should dictate exactly what that is supposed to be. If a user expects a different behavior from a defined format, well, that violates the spec.
So... to make this concrete:
type: string
format: uuid
Should have a very specific format defined in the spec, specifically what @DavidBiesack mentioned:
uuid = 8HEXDIG "-" 4HEXDIG "-" 4HEXDIG "-" 4HEXDIG "-" 12HEXDIG
type: string
format: date-time
quite specifically says RFC3339 format (https://github.com/OAI/OpenAPI-Specification/blob/master/versions/2.0.md#data-types)
If I want to invent tonys-date-time then pretty much no tools will know what the heck to do with it, and would fall back to type: string.
@ralfhandl what about instances where one wants to specify not the precision, which is equivalent to the number of significant digits (correct?), but one wants to specify the scale, or the number of digits following the decimal point. I believe the scale is more appropriate for fixed-point arithmetic and the precision is more appropriate for arbitrary-precision arithmetic. Sanity check, does any of this make sense?
@mspiegel This absolutely makes sense to me, a complete description of a decimal data type needs two facets:
precision - the maximum number of significant decimal digits in the mantissa
scale - the maximum number of decimal digits to the right of the decimal point - may be specified as variable
This covers the SQL data type DECIMAL(p,s) - precision: p, scale: s - as well as decimal floating-point types such as DECFLOAT34 - precision: 34, scale: variable.
I'd love to have both precision and scale as new keywords for specifying numeric types in addition to the existing minimum, maximum, and multipleOf.
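For illustration, such a schema might look like this (a sketch only; the Price property name is invented, and precision and scale are the keywords proposed in this thread, not part of the released spec):

```yaml
Price:
  type: number
  format: decimal   # decimal mantissa, as discussed above
  precision: 10     # proposed: max significant decimal digits
  scale: 2          # proposed: max digits right of the decimal point
```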
Tackling PR: #741
I think null is missing from this list
Closing this in favor of #845.
Is "format": "byte" any different from "format": "base64"?
Link1 ==> where it tells to use byte
Link2 ==> has an example use of base64
Exactly which one should be used where?
The spec text is normative, except for the examples.
byte is correct for OAS 3.0.x - though as format is an open-ended field, base64 is also an allowable value.
| gharchive/issue | 2016-03-21T09:08:07 | 2025-04-01T04:32:53.673074 | {
"authors": [
"DavidBiesack",
"IvanGoncharov",
"MikeRalphson",
"amarzavery",
"ePaul",
"fehguy",
"mma5997",
"mspiegel",
"noirbizarre",
"ralfhandl",
"webron",
"whitlockjc"
],
"repo": "OAI/OpenAPI-Specification",
"url": "https://github.com/OAI/OpenAPI-Specification/issues/607",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
pre-commit: some hooks do not fail (pylint-odoo). Does ruff have something to do with it?
Describe the bug
I made an example module with some inconsistencies, but pre-commit with ruff does not fail.
To Reproduce
Affected versions:
new linter with ruff
Steps to reproduce the behavior:
Generate a new Repo using oca-addons-repo-template
Command executed
copier copy --UNSAFE https://github.com/OCA/oca-addons-repo-template.git repo_with_ruff
Answer questions and enable ruff
Add a module with some code and run pre-commit run -a
Expected behavior
pylint-odoo must fail because:
Manifest version is wrong (16.0 instead of 17.0)
Line too long
Untranslated text
Additional context
Please see repo where pre-commit not fail
https://github.com/celm1990/repo_with_ruff/actions/runs/6841841712/job/18602672369
@sbidoul can you give feedback on whether any additional config is needed? I only answered the questions but did not add additional_ruff_rules.
UPDATE:
I have updated the pre-commit configuration to utilize flake8 instead of ruff. Now, the check for maximum length is functioning as expected. You can view the results at the following link:
https://github.com/celm1990/repo_with_ruff/actions/runs/6842006335/job/18603029663
So, it seems that 'ruff' is missing some configuration, @sbidoul?
However, there is an issue with pylint-odoo; I observed a new release https://github.com/OCA/pylint-odoo/releases/tag/v9.0.1 but the repository is currently using https://github.com/OCA/pylint-odoo/releases/tag/v8.0.19
UPDATE 2:
pylint-odoo 7.0.2 (used for Odoo v16) is working, so I suspect some change from 7.0.2 to v8.0 is not working. @moylop260 can you review, please? I did not find a CHANGELOG for pylint-odoo.
I'm enabling E501 in #231
Regarding pylint-odoo, I don't see how its behaviour could be influenced by the ruff config. What makes you think it does?
As far as I can tell, the configuration is identical to that for 16.0 (no one proposed enhancements for 17), so if a problem is present, it was likely present in 16 too.
I'm enabling E501 in #231
Thanks so much
Regarding pylint-odoo, I don't see how it's behaviour could be influenced by the ruff config. What make you think it does?
I'm suspicious, but based on my latest comment, the change from v8.0.5 to v8.0.6 could alter behavior, possibly due to this commit. Sorry for mixing this up with ruff.
I'll close this issue
The isort and ruff checks are working now, thanks @sbidoul
I created a new issue for pylint-odoo: https://github.com/OCA/pylint-odoo/issues/476
| gharchive/issue | 2023-11-12T16:38:44 | 2025-04-01T04:32:53.844657 | {
"authors": [
"celm1990",
"sbidoul"
],
"repo": "OCA/oca-addons-repo-template",
"url": "https://github.com/OCA/oca-addons-repo-template/issues/230",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
285326472 | missing files
Hi
first of all thanks for this great offer.
I'm having some problems starting the bot from the source.
There are a few problems.
I have little idea about nodejs, so please forgive me if I ask stupid questions.
common/help.json is missing
coins_filtered.json is missing
keys.api is missing.
I created the coins_filtered.json and help.json myself.
Unfortunately, I don't know exactly what the keys.api has to look like.
Would it be possible to provide an example keys.api?
I know this file is individual, but an example would be helpful.
Regards,
DS
You can get coins.json and coins_filtered.json if you run the getCoins.js file. The keys file looks like
{
"dbots" : [""],
"infura": [""],
"polo": [
"",
""
],
"discord": "",
"etherscan": "",
"bittrex": [
"",
""
],
"coinbase": [
"",
""
],
"tsukibot": ""
}
Of course you need to fill out the fields with your own API keys. If two keys are required, the first field is the public key.
great now it works!
| gharchive/issue | 2018-01-01T20:14:00 | 2025-04-01T04:32:53.971032 | {
"authors": [
"OFRBG",
"december-soul"
],
"repo": "OFRBG/TsukiBot",
"url": "https://github.com/OFRBG/TsukiBot/issues/10",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
782672859 | microscopy plugin should handle non-pyramid instances
The WSI series can contain instances that are not part of the pyramid, such as a thumbnail or a picture of the slide label. Currently OHIF silently ignores these.
Example data is here:
https://idc-sandbox-000.firebaseapp.com/projects/idc-tcia/locations/us-central1/datasets/tcia-idc-datareviewcoordination/dicomStores/DICOM_WSI-20210108
Possibly we should show them in the series list as if they were different series, or perhaps they should be selectable using a panel on the right, like segments are.
Probably the best would be to show them as different series
Probably the best would be to show them as different series
We upgraded the DMIV to very recent version, I'm wondering if this issue is resolved actually
@pieper now with SLIM, do we want to close this issue?
@pieper do we want to migrate this issue to SLIM?
Hi @igoroctaviano I think this one is obsolete. I'm pretty sure Slim already handles this case by putting the label images to the side. I'll close it but if you guys think it should be handled differently in DMV we can reopen or make a new issue.
| gharchive/issue | 2021-01-09T19:43:27 | 2025-04-01T04:32:54.012987 | {
"authors": [
"Punzo",
"igoroctaviano",
"pieper",
"sedghi"
],
"repo": "OHIF/Viewers",
"url": "https://github.com/OHIF/Viewers/issues/2232",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2587330871 | Failure Mode and Effect Analysis Table Added
Note: references to security requirements will have to be updated
| gharchive/pull-request | 2024-10-15T00:17:27 | 2025-04-01T04:32:54.026064 | {
"authors": [
"matpetro"
],
"repo": "OKKM-insights/OKKM.insights",
"url": "https://github.com/OKKM-insights/OKKM.insights/pull/131",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
269312778 | Issues with 'bash opensource-install.sh -u true' script (trying to expose both containers on 80 port)
I need to move my Onlyoffice from one server to another
Steps to reproduce.
Copy /app folder from server A to /app folder on server B
On server B start bash opensource-install.sh -u true
Expected result.
Running 3 containers:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
6cbc2496795a mysql:5.5 "docker-entrypoint..." 31 seconds ago Up 28 seconds 3306/tcp onlyoffice-mysql-server
38fb09135080 onlyoffice/documentserver:4.4.3.7 "/bin/sh -c 'bash ..." About a minute ago Up About a minute 80->80/tcp, 0.0.0.0:443->443/tcp onlyoffice-document-server
3c32d2d0878a onlyoffice/communityserver:9.1.0.418 "/usr/bin/dumb-ini..." 52 seconds ago Up 50 seconds 0.0.0.0:80->80/tcp, 0.0.0.0:443->443/tcp, 3306/tcp, 5280/tcp, 9865-9866/tcp, 9871/tcp, 9882/tcp, 0.0.0.0:5222->5222/tcp, 9888/tcp onlyoffice-community-server
Actual result.
Running only 2 containers:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
6cbc2496795a mysql:5.5 "docker-entrypoint..." 31 seconds ago Up 28 seconds 3306/tcp onlyoffice-mysql-server
3c32d2d0878a onlyoffice/communityserver:9.1.0.418 "/usr/bin/dumb-ini..." 52 seconds ago Up 50 seconds 0.0.0.0:80->80/tcp, 0.0.0.0:443->443/tcp, 3306/tcp, 5280/tcp, 9865-9866/tcp, 9871/tcp, 9882/tcp, 0.0.0.0:5222->5222/tcp, 9888/tcp onlyoffice-community-server
Notes
After trying to install onlyoffice-document-server manually by
bash opensource-install.sh -ics false -ids true -ims false -es true
I've got an error The following ports must be open: 80
sudo netstat -lnp | grep ':80'
tcp6 0 0 :::80 :::* LISTEN 1317/docker-proxy
It seems to me that bash opensource-install.sh -u true installs and configures the communityserver and document-server containers on the same TCP port 80.
On my server A, the containers are running with the following exposed ports
0.0.0.0:80->80/tcp for communityserver
80->80/tcp for document server
On server B, Docker is trying to expose the ports
0.0.0.0:80->80/tcp for communityserver
0.0.0.0:80->80/tcp for document server
More information here
http://dev.onlyoffice.org/ru/viewtopic.php?p=22482&sid=a26f78db713b90fba3b757ca43ea939f#p22482
Hello kovalroma!
To move your Onlyoffice from one server to another you need
Copy the /app folder from server A to the /app folder on server B - only if both servers use the same MySQL version (server A uses MySQL 5.5, so server B should have the same MySQL 5.5)
or
You should make a dump file of MySQL and restore the dump in the MySQL container. Also, you will need to copy /var/www/onlyoffice/Data (this folder is in the Community Server's container) into /app/onlyoffice/CommunityServer/data (this folder appears after installing Onlyoffice Enterprise Edition on your server)
When you have copied the app/onlyoffice folder to server B, you shouldn't start bash opensource-install.sh -u true
You should install Onlyoffice Community Edition, not update it (the -u true parameter): just execute bash opensource-install.sh. Also, there is a mistake when you tried to install the Document Server manually with the -es true parameter. This parameter installs the Document Server as an External Server.
So you should
Copy /app/onlyoffice folder from server A to server B
And install Onlyoffice Community Edition without -u true parameter
bash opensource-install.sh
It is supposed that there is no Onlyoffice containers installed before on server B.
| gharchive/issue | 2017-10-28T12:13:13 | 2025-04-01T04:32:54.049155 | {
"authors": [
"Djmaximus",
"kovalroma"
],
"repo": "ONLYOFFICE/Docker-CommunityServer",
"url": "https://github.com/ONLYOFFICE/Docker-CommunityServer/issues/33",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
720066936 | I need the ability to define rel attributes for list links
I need the ability to define rel attributes for the list links.
For example, social links may want something along the lines of rel="noreferrer external".
Originally posted by @trevorsaint in https://github.com/ONSdigital/design-system/issues/233#issuecomment-707627348
Add ability to define rel attributes for list links
| gharchive/issue | 2020-10-13T09:49:40 | 2025-04-01T04:32:54.080671 | {
"authors": [
"rmccar",
"trevorsaint"
],
"repo": "ONSdigital/design-system",
"url": "https://github.com/ONSdigital/design-system/issues/1087",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
722429228 | Add new Footer warning
What is the context of this PR?
Adds the new Footer warning component
How to review
Check component looks and works how it should
Add warning panel option to pre-footer
| gharchive/pull-request | 2020-10-15T15:14:33 | 2025-04-01T04:32:54.082448 | {
"authors": [
"rmccar"
],
"repo": "ONSdigital/design-system",
"url": "https://github.com/ONSdigital/design-system/pull/1099",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
970549406 | Added 'writing page titles' content guide
What is the context of this PR?
Added new page to 'get started' section about How to write page titles
Grouped all the guides that aren't related to 'installing' into a new subgroup
Renamed the design assets and browser compatibility page titles
How to review
Check docs
Add documentation about descriptive page titles
| gharchive/pull-request | 2021-08-13T16:22:13 | 2025-04-01T04:32:54.084434 | {
"authors": [
"jrbarnes9"
],
"repo": "ONSdigital/design-system",
"url": "https://github.com/ONSdigital/design-system/pull/1640",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2258477692 | Better acceptance test metadata feedback
What
Added the diff function from the dictdiffer library and printed the results as a list. This way only the differences will be printed.
The results are recorded before the assertion; otherwise it did not print the list of differences.
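That recording-before-asserting pattern can be sketched as follows (a pure-Python stand-in for dictdiffer's diff; the metadata keys and values here are made up for illustration):

```python
def dict_diff(expected, actual):
    """Yield (key, expected_value, actual_value) for keys whose values differ."""
    for key in sorted(set(expected) | set(actual)):
        if expected.get(key) != actual.get(key):
            yield key, expected.get(key), actual.get(key)

expected = {"title": "Dataset", "edition": "2024", "unit": "GBP"}
actual = {"title": "Dataset", "edition": "2023", "frequency": "monthly"}

# Record the differences *before* asserting, so the feedback is printed
# even when the assertion is about to fail.
differences = list(dict_diff(expected, actual))
for key, want, got in differences:
    print(f"{key}: expected {want!r}, got {got!r}")
# In the behave step you would then do: assert not differences
```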
How to review
Just run the behave test (currently it is checking a file that will have multiple differences); this will remain only until the ticket is approved.
Who can review
anyone
closing, merged on a different branch for commit verification.
| gharchive/pull-request | 2024-04-23T10:09:17 | 2025-04-01T04:32:54.086278 | {
"authors": [
"mikeAdamss",
"nimshi89"
],
"repo": "ONSdigital/dp-data-pipelines",
"url": "https://github.com/ONSdigital/dp-data-pipelines/pull/119",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
507208962 | Improve logging, docstrings and tidyup
What? and Why?
Some of the logging around checking if the submission has finished anti virus scanning was lacking (especially around the continuous retries and what happens if it exceeds the max attempts). This PR adds more logging and metadata around this event to make diagnosing issues easier.
Also removed an unused exception class, tidied up various bits of logging in the application and added a few docstrings
Checklist
[x] CHANGELOG.md updated? (if required)
Just added the 'do not merge' label as I need to do a quick test in preprod to make sure nothing is broken
| gharchive/pull-request | 2019-10-15T12:22:54 | 2025-04-01T04:32:54.093996 | {
"authors": [
"insacuri"
],
"repo": "ONSdigital/sdx-seft-consumer-service",
"url": "https://github.com/ONSdigital/sdx-seft-consumer-service/pull/76",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
ASE: Moving some global variable declarations out of ase_common.h
ASE: Moving some global variable declarations out of ase_common.h, along with some other changes.
These changes have been validated on external GitHub with the help_fpga test.
Will check it on Jenkins soon.
Jenkins' ase-testing tests passed for this pull request.
It's okay to merge this one. In the future, can we also have a link to PSG regtests in the review comments?
Yes, for the next pull request with Windows support, I will run full regression tests on MCP and DCP
| gharchive/pull-request | 2017-12-13T00:52:51 | 2025-04-01T04:32:54.140543 | {
"authors": [
"deepakcu84",
"lzhan55"
],
"repo": "OPAE/opae-sdk",
"url": "https://github.com/OPAE/opae-sdk/pull/96",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
569293942 | DocsArchive-15.0.0.0Auto archive content
Please don't merge this PR until target archive repo PR https://github.com/OPS-E2E-PPE/docs-archive-test-target/pull/423 is merged into live branch.
Auto archive content to https://github.com/OPS-E2E-PPE/docs-archive-test-target.git
Docs Build status updates of commit 34f59dc:
:x: Validation status: errors
Please follow the instructions here, which may help to resolve the issue.
| File | Status | Preview URL | Details |
| --- | --- | --- | --- |
|  | :x:Error |  | [Error] Cannot sync git repo to specified commit because branch Release_Archive_master_2020-02-22-14-57-58 has been deleted or has been force pushed: fatal: couldn't find remote ref refs/heads/Release_Archive_master_2020-02-22-14-57-58 |
For more details, please refer to the build report.
Note: If you changed an existing file name or deleted a file, broken links in other files to the deleted or renamed file are listed only in the full build report.
| gharchive/pull-request | 2020-02-22T07:00:30 | 2025-04-01T04:32:54.223491 | {
"authors": [
"VSC-Service-Account",
"e2ebd3"
],
"repo": "OPS-E2E-PPE/docs-archive-test-source",
"url": "https://github.com/OPS-E2E-PPE/docs-archive-test-source/pull/595",
"license": "CC-BY-4.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1129777647 | DocsArchive-15.0.0.0Auto archive content
Please don't merge this PR until target archive repo PR https://github.com/OPS-E2E-PPE/docs-yml-archive-non-versioning-archive/pull/463 is merged into live branch.
Auto archive content to https://github.com/opstest2/docs-yml-archive-non-versioning-archive.git
Docs Build status updates of commit 46d6f09:
:white_check_mark: Validation status: passed
| File | Status | Preview URL | Details |
| --- | --- | --- | --- |
| .openpublishing.redirection.json | :white_check_mark:Succeeded |  |  |
| docs/TOC.yml | :white_check_mark:Succeeded | View |  |
| docs/index.yml | :white_check_mark:Succeeded | View |  |
| docs/to-be-archive/level1/article3-1-1.md | :white_check_mark:Succeeded | View |  |
| docs/to-be-archive/level1/article3-1-2.md | :white_check_mark:Succeeded | View |  |
| docs/to-be-archive/level1/article3-1-3.yml | :white_check_mark:Succeeded | View |  |
| docs/to-be-archive/level1/sql-database-engine.png | :white_check_mark:Succeeded |  |  |
| docs/to-be-archive/level2/article3-2-1.md | :white_check_mark:Succeeded | View |  |
| docs/to-be-archive/level2/article3-2-2.md | :white_check_mark:Succeeded | View |  |
| docs/to-be-archive/level2/article3-2-3.yml | :white_check_mark:Succeeded | View |  |
For more details, please refer to the build report.
Note: Broken links written as relative paths are included in the above build report. For broken links written as absolute paths or external URLs, see the broken link report.
For any questions, please:
Try searching the docs.microsoft.com contributor guides
Post your question in the Docs support channel
| gharchive/pull-request | 2022-02-10T10:18:12 | 2025-04-01T04:32:54.238864 | {
"authors": [
"huangmin-ms",
"opstest2"
],
"repo": "OPS-E2E-PPE/docs-yml-archive-non-versioning",
"url": "https://github.com/OPS-E2E-PPE/docs-yml-archive-non-versioning/pull/463",
"license": "CC-BY-4.0",
"license_type": "permissive",
"license_source": "github-api"
} |
464745780 | Point search clean up
Simplify code in PointSearch. Mainly move code around and return the output of functions instead of passing the output by reference. We might move the function sendDataAcrossNetwork to ArborX. There are not that many changes, but it looks like a lot because of the indentation changes. You should hide the whitespace changes when reviewing this PR.
ping
Note that jenkins is failing because it couldn't reach codecov.org
Codecov Report
Merging #560 into master will decrease coverage by <.01%.
The diff coverage is 100%.
@@ Coverage Diff @@
## master #560 +/- ##
==========================================
- Coverage 95.9% 95.89% -0.01%
==========================================
Files 86 86
Lines 6027 6017 -10
==========================================
- Hits 5780 5770 -10
Misses 247 247
| Impacted Files | Coverage Δ |
| --- | --- |
| ...ckages/Discretization/src/DTK_PointSearch_decl.hpp | 100% <ø> (ø) :arrow_up: |
| ...ackages/Discretization/src/DTK_PointSearch_def.hpp | 99.23% <100%> (-0.03%) :arrow_down: |
Continue to review full report at Codecov.
Legend - Click here to learn more
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update d0a9348...c1a8c6e. Read the comment docs.
| gharchive/pull-request | 2019-07-05T19:00:18 | 2025-04-01T04:32:54.258789 | {
"authors": [
"Rombur",
"codecov-io"
],
"repo": "ORNL-CEES/DataTransferKit",
"url": "https://github.com/ORNL-CEES/DataTransferKit/pull/560",
"license": "bsd-3-clause",
"license_type": "permissive",
"license_source": "bigquery"
} |
1775149573 | Add navigation to the navbar in Contact Us page
Current Behavior
Currently, in the Contact Us page, the navigation doesn't work. It shows an error or doesn't open at all.
Desired Behavior
I will add navigation to the Contact Us page navbar
Screenshots / Mockups
@Vaishnavi-Patil2211 Can you please assign this to me?
| gharchive/issue | 2023-06-26T16:23:13 | 2025-04-01T04:32:54.263766 | {
"authors": [
"PoulavBhowmick03"
],
"repo": "OSCode-Community/OSCodeCommunitySite",
"url": "https://github.com/OSCode-Community/OSCodeCommunitySite/issues/912",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
[English original] Inappropriate link in the Rasdaman quickstart
Description of the problem
The link target of the rasdaman quickstart requires authentication, so it is not appropriate for a tutorial.
Explore the rasdaman documentation <https://doc.rasdaman.org>_ to learn about rasdaman administration, its query language, and data ingestion.
https://doc.rasdaman.org is the inappropriate link
@miurahr
Regarding the above, from my environment (macOS Catalina, Chrome) I was able to log in without being asked for authentication.
Is the browser you are using Firefox on Linux?
Also, as a separate matter, it seems that when a translated string starts with a half-width space, it is not recognized during the Sphinx build and the English original text remains.
The last two sentences were being shown as the English original, so I am reporting this just in case.
Rasdaman Quickstart — OSGeoLive 14.0 Documentation
Oh, I can see it now. It seems it was a temporary issue.
Understood about the link above.
I made some small fixes in the commit below, so I am reporting it just in case.
https://github.com/OSGeo-jp/OSGeoLive-doc-omegat/commit/7db1620c245a71abca181ce3a33f19070ea8e5a9
| gharchive/issue | 2021-02-14T22:14:59 | 2025-04-01T04:32:54.270036 | {
"authors": [
"miurahr",
"sanak"
],
"repo": "OSGeo-jp/OSGeoLive-doc-omegat",
"url": "https://github.com/OSGeo-jp/OSGeoLive-doc-omegat/issues/51",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1190355200 | longitude latitude order
I haven't looked at the proj4 JNI since way back when it was still in the main repo. I notice that the order of longitude/latitude seems to have been flipped? Maybe I'm mistaken?
Here in the example for proj-jin you've got lat lon:
https://github.com/OSGeo/PROJ-JNI/blob/main/example/TransformPoints.java#L83
But in the old proj jni it was the opposite ((λ,φ) axis order):
https://github.com/OSGeo/PROJ/blame/5.2/jniwrap/README.md#L109
Did something change in proj? Or are the jni wrappers different? If I prefer x,y ordered data what files can I change to prevent myself from having to reorder my arrays?
ooof. nevermind. I found this:
With PROJ 6, the order of coordinates for EPSG geographic coordinate reference systems is latitude first, longitude second.
That's got to be the stupidest change I've witnessed in an API. Can't understand why they wanted that.
Hello David. Actually the problem was in PROJ 4 which was not compliant with EPSG definitions. For example EPSG:4326 has always been (latitude, longitude) in EPSG database, and PROJ 4 doing otherwise was causing interoperability problems. It has been a very long debate in the geospatial community (not only PROJ). The conclusion was (with my own words, I do not have the exact OGC wording in mind):
Comply with EPSG definition, or if not do not use the "EPSG" name; use another name at your choice.
Having PROJ 6 finally compliant with this rule improve situation a lot in the geospatial community at large. If nevertheless the (longitude, latitude) axis order is desired, the PROJ.normalizeForVisualization(…) method can be invoked.
| gharchive/issue | 2022-04-01T23:00:04 | 2025-04-01T04:32:54.274943 | {
"authors": [
"davidraleigh",
"desruisseaux"
],
"repo": "OSGeo/PROJ-JNI",
"url": "https://github.com/OSGeo/PROJ-JNI/issues/60",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2481342890 | Upgrade dependency
Upgrade jsonwebtoken@8.5.1 to version 9.0.2
Please ignore this PR and close it. Generated by TestIM
| gharchive/pull-request | 2024-08-22T17:19:45 | 2025-04-01T04:32:54.344583 | {
"authors": [
"PashaPal1974"
],
"repo": "OX-Security-Demo/Multi-currency-management",
"url": "https://github.com/OX-Security-Demo/Multi-currency-management/pull/1694",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
642198728 | [8.0.1] Landlord kick loop
[16:15:12] [Server thread/ERROR] [FML]: SimpleChannelHandlerWrapper exception io.netty.handler.codec.EncoderException: java.lang.RuntimeException: Undefined discriminator for message type com.pixelmonmod.pixelmon.comm.packetHandlers.customOverlays.CustomScoreboardDisplayPacket in channel pixelmon at io.netty.handler.codec.MessageToMessageEncoder.write(MessageToMessageEncoder.java:107) ~[MessageToMessageEncoder.class:?] at io.netty.handler.codec.MessageToMessageCodec.write(MessageToMessageCodec.java:116) ~[MessageToMessageCodec.class:?] at io.netty.channel.AbstractChannelHandlerContext.invokeWrite0(AbstractChannelHandlerContext.java:738) ~[AbstractChannelHandlerContext.class:?] at io.netty.channel.AbstractChannelHandlerContext.invokeWriteAndFlush(AbstractChannelHandlerContext.java:801) ~[AbstractChannelHandlerContext.class:?] at io.netty.channel.AbstractChannelHandlerContext.write(AbstractChannelHandlerContext.java:814) ~[AbstractChannelHandlerContext.class:?] at io.netty.channel.AbstractChannelHandlerContext.writeAndFlush(AbstractChannelHandlerContext.java:794) ~[AbstractChannelHandlerContext.class:?] at io.netty.channel.AbstractChannelHandlerContext.writeAndFlush(AbstractChannelHandlerContext.java:837) ~[AbstractChannelHandlerContext.class:?] at io.netty.channel.DefaultChannelPipeline.writeAndFlush(DefaultChannelPipeline.java:1036) ~[DefaultChannelPipeline.class:?] at io.netty.channel.AbstractChannel.writeAndFlush(AbstractChannel.java:304) ~[AbstractChannel.class:?] at net.minecraftforge.fml.common.network.simpleimpl.SimpleNetworkWrapper.sendTo(SimpleNetworkWrapper.java:250) ~[SimpleNetworkWrapper.class:?] at com.hiroku.common.landlord.command.MapCommand.lambda$executeSpongeCommand$1(MapCommand.java:150) ~[MapCommand.class:?] 
at org.spongepowered.common.scheduler.SchedulerBase.lambda$startTask$0(SchedulerBase.java:197) ~[SchedulerBase.class:1.12.2-2838-7.2.2] at org.spongepowered.common.scheduler.SyncScheduler.executeTaskRunnable(SyncScheduler.java:74) ~[SyncScheduler.class:1.12.2-2838-7.2.2] at org.spongepowered.common.scheduler.SchedulerBase.startTask(SchedulerBase.java:188) ~[SchedulerBase.class:1.12.2-2838-7.2.2] at org.spongepowered.common.scheduler.SchedulerBase.processTask(SchedulerBase.java:174) ~[SchedulerBase.class:1.12.2-2838-7.2.2] at java.util.concurrent.ConcurrentHashMap$ValuesView.forEach(ConcurrentHashMap.java:4707) [?:1.8.0_201] at org.spongepowered.common.scheduler.SchedulerBase.runTick(SchedulerBase.java:112) [SchedulerBase.class:1.12.2-2838-7.2.2] at org.spongepowered.common.scheduler.SyncScheduler.tick(SyncScheduler.java:47) [SyncScheduler.class:1.12.2-2838-7.2.2] at org.spongepowered.common.scheduler.SpongeScheduler.tickSyncScheduler(SpongeScheduler.java:189) [SpongeScheduler.class:1.12.2-2838-7.2.2] at org.spongepowered.mod.SpongeMod.onTick(SpongeMod.java:457) [SpongeMod.class:1.12.2-2838-7.2.2] at net.minecraftforge.fml.common.eventhandler.ASMEventHandler_26_SpongeMod_onTick_ServerTickEvent.invoke(.dynamic) [?:?] at net.minecraftforge.fml.common.eventhandler.ASMEventHandler.invoke(ASMEventHandler.java:90) [ASMEventHandler.class:?] at net.minecraftforge.fml.common.eventhandler.EventBus.forgeBridge$post(EventBus.java:753) [EventBus.class:?] at net.minecraftforge.fml.common.eventhandler.EventBus.post(EventBus.java:703) [EventBus.class:?] at net.minecraftforge.fml.common.FMLCommonHandler.onPreServerTick(FMLCommonHandler.java:279) [FMLCommonHandler.class:?] at net.minecraft.server.MinecraftServer.func_71217_p(MinecraftServer.java:657) [MinecraftServer.class:?] at net.minecraft.server.MinecraftServer.run(MinecraftServer.java:526) [MinecraftServer.class:?] 
at java.lang.Thread.run(Thread.java:748) [?:1.8.0_201] Caused by: java.lang.RuntimeException: Undefined discriminator for message type com.pixelmonmod.pixelmon.comm.packetHandlers.customOverlays.CustomScoreboardDisplayPacket in channel pixelmon at net.minecraftforge.fml.common.network.FMLIndexedMessageToMessageCodec.encode(FMLIndexedMessageToMessageCodec.java:76) ~[FMLIndexedMessageToMessageCodec.class:?] at io.netty.handler.codec.MessageToMessageCodec$1.encode(MessageToMessageCodec.java:67) ~[MessageToMessageCodec$1.class:?] at io.netty.handler.codec.MessageToMessageEncoder.write(MessageToMessageEncoder.java:89) ~[MessageToMessageEncoder.class:?]
Fixed
| gharchive/issue | 2020-06-19T20:17:48 | 2025-04-01T04:32:54.361653 | {
"authors": [
"Hiroku",
"Rasgnarok"
],
"repo": "ObliqueNET/Server",
"url": "https://github.com/ObliqueNET/Server/issues/548",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2274414590 | Problem with Shoulder Surfing and throwing weapons (1.19.2)
There seems to be a problem when you use a throwing weapon (tomahawks, throwing knives, etc.) while using Shoulder Surfing Reloaded.
Crash Report
Players with the Shoulder Surfing mod installed can't even log into the server if they have a throwing weapon equipped; I always have to edit it out of their inventories with NBTExplorer.
I've done some reading on this issue already, but was unable to make it work.
I tried not using the new crosshair, since the previous issue stated that the crosshair was causing the problem, but still with no success:
I got the same error. I tried absolutely everything I could, and by the looks of it, it's every weapon that has "ammo" that you can throw, like knives and javelins. The devs will need to update for them to work together!
Hey,
Yeah the issue is due to an update to Shoulder Surfing Reloaded that changed where certain parts of code are located.
In the meantime, you might need to roll back the version of Shoulder Surfing Reloaded to the version supported. I think I was using version 2.9.7 (the one before version 3.0.0).
I checked the changelog for version 3.0.0 and yeah... it does mention 3rd party plugins breaking.
Also, I figured out this also affects the 1.18.2 version as well.
This will be fixed in the next update.
Hey, Yeah the issue is due to an update to Shoulder Surfing Reloaded that changed where certain parts of code are located. In the meantime, you might need to roll back the version of Shoulder Surfing Reloaded to the version supported. I think I was using version 2.9.7 (the one before version 3.0.0). I checked the changelog for version 3.0.0 and yeah... it does mention 3rd party plugins breaking. Also, I figured out this also affects the 1.18.2 version as well. This will be fixed in the next update.
thanks for the info!
I've uploaded a fix for this issue that should go live soon. Hopefully this is the last time Shoulder Surfing Reloaded causes a crash with this one lol
Closing issue
Oh no... I opened my big mouth... I tested this fix with version 4.0+ of Shoulder Surfing Reloaded and guess what? It crashed... AGAIN... and I'm gonna have to fix it... yet again
Fixed it again in the new 1.19.2 and the 1.20.1 versions. Yeah that's out now.
Closing issue
| gharchive/issue | 2024-05-02T01:02:46 | 2025-04-01T04:32:54.369028 | {
"authors": [
"Gabrielzin0",
"ObliviousSpartan",
"YuriFernandes150"
],
"repo": "ObliviousSpartan/SpartanWeaponry",
"url": "https://github.com/ObliviousSpartan/SpartanWeaponry/issues/35",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
204130526 | Fix go package imports
While testing the Go implementation, I've noticed that some imports don't work. I thought that Go referenced repos like github.com/foo/bar only. I didn't know that you can also import packages from your project like github.com/foo/bar/baz. This should point to https://github.com/foo/bar/tree/master/baz or https://github.com/foo/bar/blob/master/baz/baz.go.
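The mapping in question can be sketched like this (branch name and error handling are illustrative; OctoLinker's actual resolver covers more cases):

```python
def github_url_for_go_import(import_path, branch="master"):
    """Map a Go import path on github.com to a browsable GitHub URL."""
    parts = import_path.split("/")
    if parts[0] != "github.com" or len(parts) < 3:
        raise ValueError(f"not a github.com import path: {import_path!r}")
    owner, repo, subpath = parts[1], parts[2], parts[3:]
    if not subpath:
        return f"https://github.com/{owner}/{repo}"
    # A sub-package maps to the package's directory tree on the given branch.
    return f"https://github.com/{owner}/{repo}/tree/{branch}/" + "/".join(subpath)

print(github_url_for_go_import("github.com/foo/bar"))
# https://github.com/foo/bar
print(github_url_for_go_import("github.com/foo/bar/baz"))
# https://github.com/foo/bar/tree/master/baz
```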
Other than the describe.only mentioned above, this looks great!
| gharchive/pull-request | 2017-01-30T21:10:55 | 2025-04-01T04:32:54.392042 | {
"authors": [
"josephfrazier",
"stefanbuck"
],
"repo": "OctoLinker/browser-extension",
"url": "https://github.com/OctoLinker/browser-extension/pull/266",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
1339921784 | Add padding under the link so that build problems show nicely
Background
Previously, when builds had problems, the link and the build problems were jammed together - it didn't look pretty.
Results
We now stick a bucket load (that's a technical term right there) of spacing underneath it:
How to review this PR
eyeballs will be fine
Pre-requisites
[x] I have considered informing or consulting the right people
[x] I have considered appropriate testing for my change.
[sc-8922]
@liam-mackie
Hopefully we can resolve the problems with memory utilisation to bring this feature back!
Did you see #43 😄?
| gharchive/pull-request | 2022-08-16T07:19:48 | 2025-04-01T04:32:54.435807 | {
"authors": [
"matt-richardson"
],
"repo": "OctopusDeploy/opentelemetry-teamcity-plugin",
"url": "https://github.com/OctopusDeploy/opentelemetry-teamcity-plugin/pull/44",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
68362926 | Added support for a pages element in the provisioning model
See ProvisioningTemplate-2015-04-Sample-02.xml for an example on how the use the new pages element.
Amazing, just what I was looking for; can't wait to see it in the main branch and the NuGet package.
I have a question though:
In the provided XML example you use rows and columns. I suppose that's for wiki pages only; what about web part pages, where you normally use zone names to know where the web part will be placed?
Thanks a lot for this, great effort from the team
| gharchive/pull-request | 2015-04-14T12:51:40 | 2025-04-01T04:32:54.446069 | {
"authors": [
"erwinvanhunen",
"levalencia"
],
"repo": "OfficeDev/PnP",
"url": "https://github.com/OfficeDev/PnP/pull/656",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
Why resource group Contributor permissions are not enough
Our developers do not have the 'Microsoft.Authorization/roleAssignments/write' role.
What is the reason why Contributor is not enough?
Access management is forbidden for developers.
Hi @funzel1 ,
Thanks for raising the query. Could you please elaborate more on the issue
It is related to article
Goto the subscription page in Azure portal. Then, goto Access Control(IAM) and click on View my access button.
Click on your role and in search permissions text box, search for Microsoft.Authorization/roleAssignments/Write.
If your current role does not have the permission, then you can grant yourself the built in role User Access Administrator or create a custom role.
and error message if missing
az : ERROR: {"error":{"code":"InvalidTemplateDeployment","message":"Deployment failed with multiple errors: 'Authorization failed for template resource '**************cdcada' of type 'Microsoft.Authorization/roleAssignments'.
@funzel1 Please refer to our Prerequisites. The RBAC role is required to create and assign roles while creating the resources during deployment.
Having only Contributor access will throw errors during deployment, and for now we don't have a workaround for it.
Understood, so I have to find a solution, like a pipeline with the required permissions or splitting the deployment,
because role and permission management is a SOX-controlled process in our company, which is the reason why most people have only Contributor.
You can close. Thanks a lot
| gharchive/issue | 2023-05-19T08:54:33 | 2025-04-01T04:32:54.457931 | {
"authors": [
"funzel1",
"gsv022",
"v-royavinash"
],
"repo": "OfficeDev/microsoft-teams-apps-company-communicator",
"url": "https://github.com/OfficeDev/microsoft-teams-apps-company-communicator/issues/1043",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2460862076 | Application Deployment Issue - Not creating tabs
Receiving this when trying to add the Authors app to Teams:
It does not allow me to save and add/create the application.
Hi @Electric-Velcro, Please try in an incognito/InPrivate window of a browser to check if it is a cache issue. Let us know if this works.
@Electric-Velcro ,
Could you please update us on the status of the issue?
hi @Electric-Velcro
Could you please update us on the status of the issue?
@Electric-Velcro
Could you please update us on the status of the issue? If the issue is resolved, please feel free to mark the ticket as complete.
@Electric-Velcro
Please let us know if you are still facing this issue and require assistance and we will be happy to discuss. If no further action is needed at the moment and no response is received, we will close out the issue in 2 business days.
Marking this ticket as closed
| gharchive/issue | 2024-08-12T12:27:51 | 2025-04-01T04:32:54.461236 | {
"authors": [
"Electric-Velcro",
"peddivyshnavi",
"tiwariShiv7",
"v-jaygupta"
],
"repo": "OfficeDev/microsoft-teams-apps-company-communicator",
"url": "https://github.com/OfficeDev/microsoft-teams-apps-company-communicator/issues/1524",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
368581159 | Validator responds with ".. not contain a valid XML document..." if contains an invalid value
If you take valid_outlook.xml from https://github.com/OfficeDev/office-addin-validator/blob/master/manifest-to-test/valid_outlook.xml and change <DefaultLocale> from en-US to en-EN you get the following.
$ validate-office-addin valid_outlook.xml
Calling validation service. This might take a moment...
-------------------------------------
Validation: Failed
Error Code: 400
Error(s):
Request body does not contain a valid XML document, and/or is too large (capped at 256kb).
-------------------------------------
Expected result:
An error message explaining that <DefaultLocale> does not contain a valid value.
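For context, the kind of check the reporter expects can be sketched as a local pre-flight (hypothetical; the locale list below is a small illustrative sample, not the validator's real rule set):

```python
import re

# "en-EN" is well-formed as an ll-CC tag but is not a real culture that
# Office supports, which is why it should fail with a specific message
# rather than a generic "not a valid XML document" error.
KNOWN_OFFICE_LOCALES = {"en-US", "en-GB", "fr-FR", "de-DE", "ja-JP", "zh-CN"}

def check_default_locale(value):
    """Return an error message for an invalid DefaultLocale, or None if valid."""
    if not re.fullmatch(r"[a-z]{2,3}-[A-Z]{2}", value):
        return f"'{value}' is not a well-formed ll-CC locale tag"
    if value not in KNOWN_OFFICE_LOCALES:
        return f"'{value}' is not a supported Office locale"
    return None

print(check_default_locale("en-US"))  # None
print(check_default_locale("en-EN"))  # 'en-EN' is not a supported Office locale
```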
Closing all issues since this repo is being archived and no longer maintained.
| gharchive/issue | 2018-10-10T09:45:33 | 2025-04-01T04:32:54.477405 | {
"authors": [
"codebear",
"lindalu-MSFT"
],
"repo": "OfficeDev/office-addin-validator",
"url": "https://github.com/OfficeDev/office-addin-validator/issues/25",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
603399925 | Mapping to graph url
How does one map the itemId to a microsoft graph URL?
Found this; not sure if there's a better way... https://blog.mastykarz.nl/office-365-unified-api-mail/
Document Details
⚠ Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.
ID: fbb2ef5c-e278-6aba-6fe8-7a4cc0e357d5
Version Independent ID: ebc95637-b567-86b8-4f6d-ee9c6afee19e
Content: Office.context.mailbox.item - requirement set 1.5 - Office Add-ins
Content Source: docs/reference/objectmodel/requirement-set-1.5/office.context.mailbox.item.md
Product: outlook
Technology: add-ins
GitHub Login: @o365devx
Microsoft Alias: o365devx
@exextoc Can you take a look?
Thanks.
@xstos Thanks for your interest in Office Add-ins. We use the issues in this repo to track problems with the documentation. Consider raising this technical question on Stack Overflow and be sure to tag it office-js. That way the whole community will benefit from the answers that you get.
Thanks.
| gharchive/issue | 2020-04-20T17:16:05 | 2025-04-01T04:32:54.482256 | {
"authors": [
"ElizabethSamuel-MSFT",
"xstos"
],
"repo": "OfficeDev/office-js-docs-pr",
"url": "https://github.com/OfficeDev/office-js-docs-pr/issues/1768",
"license": "CC-BY-4.0",
"license_type": "permissive",
"license_source": "github-api"
} |
536634124 | Is it Excel on the web or for the web
https://review.docs.microsoft.com/en-us/office/dev/scripts/overview/overview?branch=master
visit Office Scripts in Excel for the web.
Addressed by https://github.com/OfficeDev/office-scripts-docs/pull/46. Closing because this is for a different repo.
| gharchive/issue | 2019-12-11T21:51:55 | 2025-04-01T04:32:54.483326 | {
"authors": [
"AlexJerabek",
"sumurthy"
],
"repo": "OfficeDev/office-scripts-docs-reference",
"url": "https://github.com/OfficeDev/office-scripts-docs-reference/issues/11",
"license": "CC-BY-4.0",
"license_type": "permissive",
"license_source": "github-api"
} |
441483532 | Add maxPointSize to MSLabel to specify a largest DynamicType size
For some UI elements, it doesn't make sense to scale indefinitely with the DynamicType size. This change allows us to scale up to a maximum point size, then maintain it even if the user selects a larger DynamicType size. This could be used for non-core text in a view, for example, the author's name and title.
Thank you for your submission, we really appreciate it. Like many open source projects, we ask that you sign our Contributor License Agreement before we can accept your contribution.
:x: rimarsh sign now
You have signed the CLA already but the status is still pending? Let us recheck it.
@joelklabo, in case you're interested.
Is there any reason not to just use the built in system API that scales a font based on the current accessibility settings up to a maximum font size? https://developer.apple.com/documentation/uikit/uifontmetrics/2877383-scaledfont
Nope, hadn’t noticed we had a minimum target of iOS 11. I’ll give that a try instead.
@markavitale, updated to use the system scaledFont method and a max point size instead of a size category. This produces the desired behavior, however, this scale function doesn't behave quite as I would expect.
The doc seems to say you should pass it a default sized font and it will scale that to the currently set size category. However, doing that causes the size to be too small when the user size class is smaller than the default and too big otherwise.
Here's one example of where it didn't quite work the way I expected. I was expecting some way of getting the 13.0 size since that's the desired size for my xLarge size category and text style .caption2.
(lldb) po defaultFont.pointSize
11.0
(lldb) po style.font.pointSize
13.0
(lldb) po style.systemStyle.metrics.scaledFont(for: defaultFont, maximumPointSize: maxPointSize).pointSize
15.0
(lldb) po style.systemStyle.metrics.scaledFont(for: style.font, maximumPointSize: maxPointSize).pointSize
17.0
Further oddity, the scaleValue function also doesn't seem to give results as expected. For example, at Large, it gives a 1.0 scaleValue which makes sense as this is the default. But at xLarge, it gives 1.33 for .caption2 which would be too large for the 11->13 change in font size and 1.0 for .headline which doesn't scale at all despite the desired font size changing from 17->19.
Do you have any ideas on how this method should be used?
Using this as the reference for expected sizes: https://developer.apple.com/design/human-interface-guidelines/ios/visual-design/typography/
I did some debugging and I guess the issue lies with the fact that you can't use these "scale font" APIs reliably on system fonts. The ones being made in Fonts.swift are system fonts. I guess the original approach of just checking the provided point size and manually overriding it is the best.
The relevant documentation from Apple's Documentation:
Use a UIFontMetrics object to support scalable custom fonts in your app.
We aren't using custom fonts, we're using system fonts. In order to use these APIs we would probably end up hardcoding the default font size for the system text styles, which seems like a mistake.
Thanks for digging, that's the conclusion I'd come to as well. I also agree that hardcoding the default font size is not a great idea, so I'll update this to check the point size and override it manually.
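The manual override this lands on boils down to a cap applied after scaling; sketched here in Python for brevity (the real implementation is Swift in MSLabel, and these sizes and factors are illustrative):

```python
def capped_scaled_point_size(default_size, scale_factor, max_point_size=None):
    """Scale a font's default point size for the current content size
    category, but never beyond an optional designer-specified maximum."""
    scaled = default_size * scale_factor
    if max_point_size is not None:
        scaled = min(scaled, max_point_size)
    return scaled

# .caption2 defaults to 11pt. With no cap it keeps growing with the
# content size category; with a 13pt cap it stops there.
print(capped_scaled_point_size(11, 1.0))                     # 11.0
print(capped_scaled_point_size(11, 2.0))                     # 22.0
print(capped_scaled_point_size(11, 2.0, max_point_size=13))  # 13
```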
What is the technical problem with having a "large" font when the user selects a large content size category? People want large, so why stop them?
What particular text style are you worried about? Note that only .body supports XXXL size and "Larger Accessibility Sizes" feature in iOS, the rest don't scale to those sizes.
Did you talk to your designers about this "cap" for font sizes? Is this the guidance you get from them?
There's not really a technical problem with a large font; we'd have to solve for scaling if we use any Dynamic Type. Some of the really large sizes get hard to find a reasonable fallback for, but I still wouldn't call it a "technical" problem.
Our designers want various caps across all styles depending on the situation. I'm not sure I follow regarding only .body supporting the larger sizes. Following Apple's chart, it seems like all styles would continue to scale.
Correct, this instruction comes from our designers. They do not want all text fields to grow as much as they do by default. In some places, they will, but not all. I certainly agree we should be allowing a user to make their font larger when appropriate, but sometimes the result is less important fields make it impossible to read the core message.
You are right. When I checked this in Fall of 2016 only .body scaled for the extra accessibility categories, now it seems all fonts scale (just checked this in code with iOS 11).
Do you guys want to limit font size per MSLabel, or make it app-wide and limit at MSFonts level - for all text styles in Fabric?
I think per label because we do want to let some labels (primary content) grow to the largest setting. This change is to allow less important fields to not overtake the whole view.
@vladfilyakov, updated per your comments, please take another look when you have a chance.
@vladfilyakov, Fixed, looks like it dismissed your approval though.
Approved. Waiting for CI build. I will take a look at what's happening there.
/AzurePipelines run
| gharchive/pull-request | 2019-05-07T23:12:59 | 2025-04-01T04:32:54.503251 | {
"authors": [
"markavitale",
"msftclas",
"rimarsh",
"vladfilyakov"
],
"repo": "OfficeDev/ui-fabric-ios",
"url": "https://github.com/OfficeDev/ui-fabric-ios/pull/4",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1015310128 | Add codespaces config
The intent here is to add config for codespaces that does not break our existing docker-compose dev env setup.
Seems to work! :smile:
The entrypoint needs an update to the updated working directory.
https://github.com/OfficeMomsandDads/scheduler/blob/23388a0ac65f152949eab220bf3f307ae9cd6d0e/docker/rails-entrypoint#L4
Thanks!
Ready for re-review. I'm also building from scratch both locally & in codespaces, just to ensure everything is working in both contexts one last time.
| gharchive/pull-request | 2021-10-04T15:15:32 | 2025-04-01T04:32:54.505115 | {
"authors": [
"benjaminwood",
"nvick"
],
"repo": "OfficeMomsandDads/scheduler",
"url": "https://github.com/OfficeMomsandDads/scheduler/pull/697",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
170586107 | Third party library import resource failure
Using 'com.android.support:appcompat-v7:24.1.1', the BUCK file is generated normally and the apk also builds, but the app fails at runtime with the error: org.xmlpull.v1.XmlPullParserException: Binary XML file line #17 tag requires viewportWidth > 0
Use English please, there are several English speaker in maintainers. :)
Using 'com.android.support:appcompat-v7:24.1.1', the BUCK file is generated normally and the apk also builds, but the app fails at runtime with the error: org.xmlpull.v1.XmlPullParserException: Binary XML file line #17 tag requires viewportWidth > 0
Does your gradle build work well?
By the way, change this issue title please.
Yes, it runs normally using Android Studio :)
I use Android Studio create a new Empty Activity ,and modify gradle file,add signingConfigs,the app
gradle is this:
apply plugin: 'com.android.application'
android {
compileSdkVersion 24
buildToolsVersion "24.0.1"
defaultConfig {
applicationId "com.shine.bucktest"
minSdkVersion 15
targetSdkVersion 24
versionCode 1
versionName "1.0"
}
signingConfigs {
release {
storeFile file('../debug.keystore')
storePassword 'android'
keyAlias 'androiddebugkey'
keyPassword 'android'
}
}
buildTypes {
debug {
signingConfig signingConfigs.release
}
release {
minifyEnabled false
signingConfig signingConfigs.release
proguardFiles getDefaultProguardFile('proguard-android.txt'), 'proguard-rules.pro'
}
}
}
dependencies {
compile fileTree(dir: 'libs', include: ['*.jar'])
testCompile 'junit:junit:4.12'
compile 'com.android.support:appcompat-v7:24.1.1'
}
at the root gradle.build is
// Top-level build file where you can add configuration options common to all sub-projects/modules.
buildscript {
repositories {
jcenter()
}
dependencies {
classpath 'com.android.tools.build:gradle:2.1.2'
classpath 'com.github.piasy:okbuck-gradle-plugin:1.0.0-beta9'
// NOTE: Do not place your application dependencies here; they belong
// in the individual module build.gradle files
}
}
allprojects {
repositories {
jcenter()
}
}
task clean(type: Delete) {
delete rootProject.buildDir
}
apply plugin: 'com.github.piasy.okbuck-gradle-plugin'
okbuck {
overwrite true
resPackages = [
app: 'com.shine.bucktest',
]
}
gotcha! You use the wrong plugin, see README closely :)
I have used Java 8 lambdas in my project, so I use okbuck-gradle-plugin:1.0.0-beta9. Do you think I should use 0.4.0? Is it newer?
I have updated the root gradle classpath to 'com.github.okbuilds:okbuild-gradle-plugin:0.4.0' and, in the android module, applied plugin: 'com.github.okbuilds.okbuck-gradle-plugin'
okbuck {
overwrite true
resPackages = [
app: 'com.shine.bucktest',
]
}
but running ./gradlew okbuck -info produced an error; the error info is
* Where:
Build file '/home/jiangcy/AndroidStudioProjects/BuckTest/build.gradle' line: 28
* What went wrong:
A problem occurred evaluating root project 'BuckTest'.
> Could not find method overwrite() for arguments [true] on root project 'BuckTest'.
* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output.
BUILD FAILED
Follow the full guide in README, please.
OK, I will try tomorrow, thank you very much
Latest installation instructions in README
buildscript {
repositories {
jcenter()
}
dependencies {
classpath 'com.github.okbuilds:okbuild-gradle-plugin:0.5.3'
}
}
apply plugin: 'com.github.okbuilds.okbuck-gradle-plugin'
Please re-open if it is happening in latest version. Closing for now
| gharchive/issue | 2016-08-11T07:33:34 | 2025-04-01T04:32:54.521630 | {
"authors": [
"Piasy",
"jiangchunyu",
"kageiit"
],
"repo": "OkBuilds/OkBuck",
"url": "https://github.com/OkBuilds/OkBuck/issues/154",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
489922636 | custom object detection - possible bug on decode_netout function
Hello, I have been reading the code multiple times trying to make it faster. However, the first step is to understand it properly before proposing any change.
In particular, this line 1239 seems to contain a bug. I'll copy the current code below to explain why it is wrong, from my point of view:
def decode_netout(self, netout, anchors, obj_thresh, net_h, net_w):
grid_h, grid_w = netout.shape[:2]
nb_box = 3
# (1): From this line, netout is a numpy array of 4 dimentions
netout = netout.reshape((grid_h, grid_w, nb_box, -1))
nb_class = netout.shape[-1] - 5
boxes = []
netout[..., :2] = self._sigmoid(netout[..., :2])
netout[..., 4:] = self._sigmoid(netout[..., 4:])
netout[..., 5:] = netout[..., 4][..., np.newaxis] * netout[..., 5:]
netout[..., 5:] *= netout[..., 5:] > obj_thresh
for i in range(grid_h * grid_w):
row = i / grid_w # small comment: using '//' instead of '/' makes it unnecessary to cast the results to int every time later (just replace the float operation by its int version)
col = i % grid_w
for b in range(nb_box):
# 4th element is objectness score
# (2): because of (1), the next line will leave objectness as a single np.float value
objectness = netout[int(row)][int(col)][b][4]
# (3): the following line contains the error
if (objectness.all() <= obj_thresh): continue
# Do more things
pass
return boxes
Here goes the long explanation on the error of (3):
because of (2), objectness.all() will always return True, except for the case where objectness is 0 (exactly zero), in which case it will return False.
Then, when the <= operator is applied, the boolean is cast to an integer: True becomes 1 and False becomes 0. Since obj_thresh is usually a value in the range (0..1) (0.5 by default), any non-zero objectness evaluates as if 1 <= 0.5: continue, which is False, so the box is kept.
Basically, what is happening is that we are only skipping those cases in which objectness is exactly zero, and keeping all the others.
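The degenerate comparison is easy to demonstrate in isolation — a minimal sketch, assuming objectness is a NumPy scalar as in decode_netout:

```python
import numpy as np

# .all() collapses any non-zero scalar to True, so the threshold test
# degenerates to "skip only exact zeros".
obj_thresh = 0.5
objectness = np.float64(0.1)               # low score: this box should be skipped

buggy_skip = objectness.all() <= obj_thresh    # True <= 0.5 -> 1 <= 0.5 -> False
fixed_skip = objectness <= obj_thresh          # 0.1 <= 0.5 -> True

print(buggy_skip, fixed_skip)  # False True: the buggy check keeps the low-score box
```

Dropping the `.all()` call and comparing the scalar directly restores the intended filtering.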
In addition to this, I refer to the original implementation provided by @OlafenwaMoses and it is a little bit different (and correct) from my point of view.
Question: is there a reason for such strange behavior? Might the same issue be replicated in other methods?
I'll write a PR soon to change it, but would like to know @OlafenwaMoses view on the issue
Thanks
Thanks very much for this thorough review @rola93 . During the implementation for training custom YOLOv3 models, there were so many things to figure out as I raced against the time I promised in #8 , which was the 2nd deadline for delivery. Some of these things were:
providing a seamless, simple experience for the training
providing sufficient methods to evaluate trained models
compatibility in previous and new YOLOv3 code.
I admit code optimization wasn't fully implemented at the time, and even after the release, documentation and tutorials took a lot of time as well.
I will review the PR and evaluate the changes. Thanks very much once again @rola93
Will close this issue since the main point was solved & merged and the related/still open points are being talked about on #352
| gharchive/issue | 2019-09-05T18:16:38 | 2025-04-01T04:32:54.527491 | {
"authors": [
"OlafenwaMoses",
"rola93"
],
"repo": "OlafenwaMoses/ImageAI",
"url": "https://github.com/OlafenwaMoses/ImageAI/issues/338",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
266150099 | Tests seems not to run in isolation
Copied from https://github.com/mscharhag/oleaster/issues/16
The default JUnit runner isolate tests in a way that every test gets executed with a new test-case instance. This ensures a fresh fixture instance for each test, even if the developer uses an implicit setup shortcut.
It seems to me that Oleaster tests always work with the same fixture in such a case. This is because the test-case instance is reused for all test runs (see the snippet below for a better understanding of the use-case).
This behavior might lead to problems which are difficult to understand. Although I am not a particular friend of this setup practice, I doubt that people will follow a do-not-use-this-kind-of-implicit-setup convention. Maybe it would be better to ensure this kind of isolation as JUnit does by default.
@RunWith( OleasterRunner.class )
public class OleasterTest {
Object fixture = new Object();
{
it( "first", () -> {
System.out.println( "first: " + fixture.hashCode() );
} );
it( "second", () -> {
System.out.println( "second: " + fixture.hashCode() );
} );
}
}
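The shared-fixture behaviour can be sketched in a Python analogue — illustrative only, not Oleaster's actual implementation — where all specs are closures bound to one test-case instance:

```python
# Both "specs" are lambdas closing over the same instance, so they share one
# fixture object, whereas JUnit's default runner would create a fresh test
# instance (and therefore a fresh fixture) per test.
class SuiteInstance:
    def __init__(self):
        self.fixture = object()
        self.specs = [
            ("first", lambda: id(self.fixture)),
            ("second", lambda: id(self.fixture)),
        ]

suite = SuiteInstance()
results = [spec() for _, spec in suite.specs]
print(results[0] == results[1])  # True: one shared fixture across both specs
```

Running specs in isolation would require building a new instance per spec, which is exactly the difficulty with lambda scoping described below.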
mscharhag said:
Thanks for bringing this up. You are right, all Oleaster tests use the same test instance at the moment.
The problem here is that lambda expressions are bound to the scope of the surrounding object. In order to run specs in complete isolation
a new test instance is required for each spec
the code of all surrounding suites needs to be executed for the new test instance
So far I think this would be a good thing. However, I need to think a bit more about the consequences.
I will look into this.
| gharchive/issue | 2017-10-17T14:35:38 | 2025-04-01T04:32:54.530458 | {
"authors": [
"bangarharshit"
],
"repo": "OleasterFramework/Oleaster",
"url": "https://github.com/OleasterFramework/Oleaster/issues/14",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
512819404 | isAcceptJson should probably always check against type
https://github.com/OlliV/micri/blob/9000c8bf1bd9543fe2c847553908ca55b62ba0a2/src/serve.ts#L103
We might want to use https://www.npmjs.com/package/@hapi/accept as content-type cannot parse a header with multiple types.
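To illustrate why a dedicated parser is needed: an Accept header can list several media types with q-weights, which a single-value content-type parser cannot handle. This is a minimal sketch only; a real implementation (e.g. @hapi/accept) also handles wildcards, specificity, and malformed input:

```python
# Parse an Accept header into (media_type, q) pairs, ordered by q-value.
def parse_accept(header):
    entries = []
    for part in header.split(","):
        fields = part.strip().split(";")
        media_type = fields[0].strip()
        q = 1.0
        for param in fields[1:]:
            name, _, value = param.strip().partition("=")
            if name.strip() == "q":
                q = float(value)
        entries.append((media_type, q))
    return sorted(entries, key=lambda entry: -entry[1])

print(parse_accept("text/html, application/json;q=0.9, */*;q=0.1")[0][0])  # text/html
```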
| gharchive/issue | 2019-10-26T10:27:25 | 2025-04-01T04:32:54.554065 | {
"authors": [
"OlliV"
],
"repo": "OlliV/micri",
"url": "https://github.com/OlliV/micri/issues/16",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1095768827 | fixing hotreload issue
See https://github.com/facebook/create-react-app/issues/11771
The issue stems from this commit where we fixed a few security issues: https://github.com/OlympusDAO/olympus-frontend/commit/d9257f42a0bc0a4fae2dacfa8d5a35e8f76ce33c
Resolving react-error-overlay back to 6.0.9 until the issue above is resolved seems to be the workaround.
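For reference, such a pin is typically expressed as a yarn `resolutions` entry in package.json — a sketch, assuming yarn resolutions are the mechanism used here (the repo's actual setup may differ):

```json
{
  "resolutions": {
    "react-error-overlay": "6.0.9"
  }
}
```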
@brightiron Nice catch!
For future reference, here’s the suggested approach from that issue: https://github.com/facebook/create-react-app/issues/11771#issuecomment-999183535
| gharchive/pull-request | 2022-01-06T22:45:10 | 2025-04-01T04:32:54.557876 | {
"authors": [
"0xJem",
"brightiron"
],
"repo": "OlympusDAO/olympus-frontend",
"url": "https://github.com/OlympusDAO/olympus-frontend/pull/1094",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
567652064 | Fixed a bug where organizing usings clashed with other formatting settings
when organize usings was enabled, it prevented the regular formatting from working since it passed in the original document... 🙈
@JoeRobich after this is merged, let's release again so that you don't ship the buggy version in VS Code
thanks! 😀💪
| gharchive/pull-request | 2020-02-19T15:55:17 | 2025-04-01T04:32:54.564101 | {
"authors": [
"filipw"
],
"repo": "OmniSharp/omnisharp-roslyn",
"url": "https://github.com/OmniSharp/omnisharp-roslyn/pull/1715",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
905618850 | Failed to install omnisharp-roslyn server
vimrc has the following (M1 MacBook Pro, macOS 11.4):
Plug 'OmniSharp/omnisharp-vim' - to install using Plug
"OmniSharp
let g:OmniSharp_highlighting = 3
let g:OmniSharp_selector_ui = 'fzf' " Use fzf.vim
let g:OmniSharp_selector_findusages = 'fzf' " Use fzf.vim
" Use the stdio OmniSharp-roslyn server
let g:OmniSharp_server_stdio = 1
" Set the type lookup function to use the preview window instead of echoing it
let g:OmniSharp_typeLookupInPreview = 1
" Timeout in seconds to wait for a response from the server
let g:OmniSharp_timeout = 5
Currently our installer doesn't know which omnisharp-roslyn version to install for an M1, or how to detect that OS. I don't have one to test with so some help would be appreciated.
What are the outputs of these commands?
uname -s
uname -m
uname -o
uname -s
Darwin
uname -m
arm64
uname -o
uname: illegal option -- o
usage: uname [-amnprsv]
@nickspoons your script has no handling for arm64 yet -
uname -m
arm64
Let's see if that works for you, @bintr33
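The detection the installer needs amounts to mapping `uname -s` / `uname -m` output to a download target. A sketch with illustrative target names only (the script's real identifiers may differ):

```python
# Map kernel name and machine architecture to a hypothetical download target.
# "arm64" is what an Apple Silicon (M1) Mac reports for `uname -m`.
def detect_target(kernel, machine):
    if kernel == "Darwin":
        return "osx-arm64" if machine == "arm64" else "osx-x64"
    if kernel == "Linux":
        return "linux-arm64" if machine in ("arm64", "aarch64") else "linux-x64"
    return "win-x64"

print(detect_target("Darwin", "arm64"))  # -> osx-arm64
```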
Yup that fixed the install issue. Thanks.
| gharchive/issue | 2021-05-28T15:34:54 | 2025-04-01T04:32:54.569911 | {
"authors": [
"bintr33",
"nickspoons"
],
"repo": "OmniSharp/omnisharp-vim",
"url": "https://github.com/OmniSharp/omnisharp-vim/issues/699",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
231050393 | Unable to download package - OmniSharp
Environment data
dotnet --info output:
.NET Command Line Tools (1.0.4)
Product Information:
Version: 1.0.4
Commit SHA-1 hash: af1e6684fd
Runtime Environment:
OS Name: Windows
OS Version: 6.1.7601
OS Platform: Windows
RID: win7-x64
Base Path: C:\Program Files\dotnet\sdk\1.0.4
VS Code version:
1.12.2
C# Extension version:
1.9.0
Steps to reproduce
Download OmniSharp extension
Expected behavior
Should download package correctly after any C# file opened in VSCode editor.
Actual behavior
Getting Following error:
Downloading package 'OmniSharp (.NET 4.6 / x64)' Failed at stage: downloadPackages
Error: unable to get local issuer certificate
After changing the following setting from true to false, it worked.
"http.proxyStrictSSL": false
Where? @avikenjale
| gharchive/issue | 2017-05-24T14:01:10 | 2025-04-01T04:32:54.574399 | {
"authors": [
"CodeSwimBikeRunner",
"avikenjale"
],
"repo": "OmniSharp/omnisharp-vscode",
"url": "https://github.com/OmniSharp/omnisharp-vscode/issues/1511",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
147945208 | Debugger cannot restore with newest builds of the CLI ('coreclr.ad7Engine.json does not exist')
C:\Users\jgabr>dotnet --info
.NET Command Line Tools (1.0.0-rc2-002370)
Product Information:
Version: 1.0.0-rc2-002370
Commit Sha: 482f36d26b
Runtime Environment:
OS Name: Windows
OS Version: 10.0.14316
OS Platform: Windows
RID: win10-x64
Error:
C:\Users\jgabr\.vscode-insiders\extensions\ms-vscode.csharp-1.0.1-rc2\coreclr-debug\bin\Debug\netstandardapp1.5\win7-x64\dummy.pdb
publish: Renaming native host in output to create fully standalone output.
failed to resolve published host in: C:\Users\jgabr\.vscode-insiders\extensions\ms-vscode.csharp-1.0.1-rc2\coreclr-debug\debugAdapters
publish: warning: host executable not available in dependencies, using host for current platform
publish: Published to C:\Users\jgabr\.vscode-insiders\extensions\ms-vscode.csharp-1.0.1-rc2\coreclr-debug\debugAdapters
Published 1/1 projects successfully
C:\Users\jgabr\.vscode-insiders\extensions\ms-vscode.csharp-1.0.1-rc2\coreclr-debug\debugAdapters\coreclr.ad7Engine.json does not exist.
Error: The .NET CLI did not correctly restore debugger files. Ensure that you have .NET CLI version
@DustinCampbell @caslan FYI, the extension is completely on the floor with newest CLIs.
The problem is that .NET CLI is failing to copy our contentFiles\any\any items to the publishing directory during 'dotnet publish'
NOTES
I know this is a problem on Windows. We will need to wait till tomorrow to confirm / deny if this exists on other platforms also.
You can replace 'latest' in the CLI build links to download an older version. For example, here is a link to an older build that I tried a while back -- https://dotnetcli.blob.core.windows.net/dotnet/beta/Installers/1.0.0-rc2-002330/dotnet-dev-win-x64.1.0.0-rc2-002330.exe
@pakrym Do you know of any recent changes that would have affected contentFiles for dotnet publish? We are relying on it for a scenario that is now broken.
I was just going to file this bug.
Happens on OSX as well.
https://github.com/dotnet/cli/commit/709f7b7d146ec218cc32e3522ed92af14e578831#diff-21237ad3a7decd8d9b62b36289e2760dL146
I ran into this as well last night on a clean WINX vm on the fast ring.
Is there a work around for this issue?
@ejsmith you can rollback to older build (1.0.0-rc2-002357 seem to work fine)
https://dotnetcli.blob.core.windows.net/dotnet/beta/Installers/1.0.0-rc2-002357/dotnet-dev-win-x64.1.0.0-rc2-002357.exe
I just opened CLI #2459 to track this. We're treating it as high priority and will get it fixed today. Please do open identified CLI issues in that repo to ensure they get onto the team's radar ASAP.
Here is a link to the CLI issue: https://github.com/dotnet/cli/issues/2459
I can confirm that the suggested 1.0.0-rc2-002357 build works for me.
@Kukkimonsuta Thanks for sharing the link, took me awhile to find the OS X version of 2357, for those looking:
https://dotnetcli.blob.core.windows.net/dotnet/beta/Installers/1.0.0-rc2-002357/dotnet-dev-osx-x64.1.0.0-rc2-002357.pkg
Also, unlike the Windows version there isn't an uninstaller for the .pkg file so you have to manually remove it. I usually do the following:
sudo rm -rf /usr/local/share/dotnet/
pkgutil --pkgs | grep com.microsoft.dotnet | while read -r line; do sudo pkgutil --forget $line; done
Lastly, it appears as though as long as you launch the debugger installation on VS Code or VS Code Insiders after installing 2357 you'll have a working extension and you can then upgrade to the latest CLI without issue - I'm currently on 1.0.0-rc2-002374 with working debuggable editors.
This is fixed with the latest CLI. Get version >= 1.0.0-rc2-2392
/cc @gregg-miskelly closing this out
Can confirm this works for me on Mac 10.11.4 with dotnet --version 1.0.0-rc2-002392. I'm assuming this will also fix debugging on Windows as well 😃
Confirmed resolved my issue as well.
| gharchive/issue | 2016-04-13T04:45:55 | 2025-04-01T04:32:54.587362 | {
"authors": [
"Kukkimonsuta",
"NotMyself",
"chuckries",
"davidfowl",
"ejsmith",
"gregg-miskelly",
"jeffpapp",
"miguellira",
"piotrpMSFT",
"scottaddie"
],
"repo": "OmniSharp/omnisharp-vscode",
"url": "https://github.com/OmniSharp/omnisharp-vscode/issues/181",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
156966308 | Windows 7 debugging information is blank.
System information:
C:\Users\asahi>dotnet --info
.NET Command Line Tools (1.0.0-preview1-002702)
Product Information:
Version: 1.0.0-preview1-002702
Commit Sha: 6cde21225e
Runtime Environment:
OS Name: Windows
OS Version: 6.1.7601
OS Platform: Windows
RID: win7-x64
VS code version:
C# extension is latest:
.Net core debugger installed successfully when asp.net core rc2 project is loaded.
When debugging using visual studio code, here is what the picture looks like:
Observations:
Variables don't get populated
cannot add anything in watch
In the command window, typing any variable in context results in a strange error (see the bottom of the screenshot above)
all debugger tooltips don't show anything
Breakpoint gets hit though.
In summary, on Windows 7 debugging information is not available for any .NET Core project. However, the same project's debugging information is available on Windows 10 using the same versions of all the tools mentioned above.
Is this a known issue or I have missed something?
Thanks
This is #258 - yup, unfortunately expression evaluation is pretty much entirely broken at the moment. We are trying to get it fixed.
Thanks @gregg-miskelly looking forward to the fix :)
| gharchive/issue | 2016-05-26T11:51:49 | 2025-04-01T04:32:54.593660 | {
"authors": [
"asadsahi",
"gregg-miskelly"
],
"repo": "OmniSharp/omnisharp-vscode",
"url": "https://github.com/OmniSharp/omnisharp-vscode/issues/379",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
1088049872 | Install Another Version... doesn't work for the C# extension only
Environment data
dotnet --info output:
.NET SDK (reflecting any global.json):
Version: 5.0.302
Commit: c005824e35
Runtime Environment:
OS Name: Windows
OS Version: 10.0.19042
OS Platform: Windows
RID: win10-x64
Base Path: C:\Program Files\dotnet\sdk\5.0.302\
Host (useful for support):
Version: 5.0.8
Commit: 35964c9215
.NET SDKs installed:
2.2.402 [C:\Program Files\dotnet\sdk]
5.0.100 [C:\Program Files\dotnet\sdk]
5.0.302 [C:\Program Files\dotnet\sdk]
.NET runtimes installed:
Microsoft.AspNetCore.All 2.1.28 [C:\Program Files\dotnet\shared\Microsoft.AspNetCore.All]
Microsoft.AspNetCore.All 2.2.7 [C:\Program Files\dotnet\shared\Microsoft.AspNetCore.All]
Microsoft.AspNetCore.App 2.1.28 [C:\Program Files\dotnet\shared\Microsoft.AspNetCore.App]
Microsoft.AspNetCore.App 2.2.7 [C:\Program Files\dotnet\shared\Microsoft.AspNetCore.App]
Microsoft.AspNetCore.App 3.1.17 [C:\Program Files\dotnet\shared\Microsoft.AspNetCore.App]
Microsoft.AspNetCore.App 5.0.8 [C:\Program Files\dotnet\shared\Microsoft.AspNetCore.App]
Microsoft.NETCore.App 2.1.28 [C:\Program Files\dotnet\shared\Microsoft.NETCore.App]
Microsoft.NETCore.App 2.2.7 [C:\Program Files\dotnet\shared\Microsoft.NETCore.App]
Microsoft.NETCore.App 3.1.17 [C:\Program Files\dotnet\shared\Microsoft.NETCore.App]
Microsoft.NETCore.App 5.0.8 [C:\Program Files\dotnet\shared\Microsoft.NETCore.App]
Microsoft.WindowsDesktop.App 3.1.17 [C:\Program Files\dotnet\shared\Microsoft.WindowsDesktop.App]
Microsoft.WindowsDesktop.App 5.0.8 [C:\Program Files\dotnet\shared\Microsoft.WindowsDesktop.App]
VS Code version: 1.63.2
C# Extension version: 1.23.16
OmniSharp log
No log
Steps to reproduce
For weeks, I've noticed that among all the extensions I've installed, the C# one is the only one that has this issue. Go to the Extensions view. Click Install Another Version... on the menu for the C# extension. Observe that the expected list of versions does not show; instead it just hangs. Not sure if this is the VS Code team's responsibility or the C# extension team's.
Expected behavior
The list of versions show display without problem.
Actual behavior
Just hanging, not displaying the list of versions.
The list will display if you give it enough time. It would be best to open this issue against https://github.com/microsoft/vscode/issues.
| gharchive/issue | 2021-12-23T23:58:46 | 2025-04-01T04:32:54.598988 | {
"authors": [
"JoeRobich",
"cateyes99"
],
"repo": "OmniSharp/omnisharp-vscode",
"url": "https://github.com/OmniSharp/omnisharp-vscode/issues/4973",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
221125793 | Send events from the debugger to unit test code
This checkin completes the next chunk of work for unit test debugging in VS Code. The changes here are to create an event channel to receive events from the debugger.
Remaining work:
Fire new events into OmniSharp
Somehow build the project if it is out of date before running the test
@DustinCampbell please have a final look at these changes when you get back from vacation.
Looks good. I'm working on the necessary OmniSharp changes right now.
| gharchive/pull-request | 2017-04-12T01:29:07 | 2025-04-01T04:32:54.600847 | {
"authors": [
"DustinCampbell",
"gregg-miskelly"
],
"repo": "OmniSharp/omnisharp-vscode",
"url": "https://github.com/OmniSharp/omnisharp-vscode/pull/1379",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
2409974041 | [Snyk] Security upgrade setuptools from 40.5.0 to 70.0.0
This PR was automatically created by Snyk using the credentials of a real user.
Snyk has created this PR to fix 1 vulnerability in the pip dependencies of this project.
Snyk changed the following file(s):
sdk/requirements.dev.txt
[!IMPORTANT]
Check the changes in this PR to ensure they won't cause issues with your project.
Max score is 1000. Note that the real score may have changed since the PR was raised.
Some vulnerabilities couldn't be fully fixed and so Snyk will still find them when the project is tested again. This may be because the vulnerability existed within more than one direct dependency, but not all of the affected dependencies could be upgraded.
Note: You are seeing this because you or someone else with access to this repository has authorized Snyk to open fix PRs.
For more information:
🧐 View latest project report
📜 Customise PR templates
🛠 Adjust project settings
📚 Read about Snyk's upgrade logic
Learn how to fix vulnerabilities with free interactive lessons:
🦉 Improper Control of Generation of Code ('Code Injection')
🎉 Snyk hasn't found any issues so far.
✅ code/snyk check is completed. No issues were found. (View Details)
| gharchive/pull-request | 2024-07-16T01:27:35 | 2025-04-01T04:32:54.634542 | {
"authors": [
"Omrisnyk"
],
"repo": "Omrisnyk/turing",
"url": "https://github.com/Omrisnyk/turing/pull/66",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1103518629 | Short solution needed: "How to delete symbol" (vim)
Please help us write the most modern and shortest code solution for this issue:
How to delete symbol (technology: vim)
Fast way
Just write the code solution in the comments.
Prefered way
Create pull request with a new code file inside inbox folder.
Don't forget to use comments to explain the solution.
Link to this issue in comments of pull request.
How to delete previous symbol
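For reference, the standard Vim normal-mode commands for this are:

```vim
x  " delete the symbol (character) under the cursor
X  " delete the symbol before the cursor
```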
| gharchive/issue | 2022-01-14T11:43:39 | 2025-04-01T04:32:54.711237 | {
"authors": [
"nonunicorn"
],
"repo": "Onelinerhub/onelinerhub",
"url": "https://github.com/Onelinerhub/onelinerhub/issues/792",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1156901051 | Alternative paths for user friendly project organization (add project_path attribute to component element)
A combination of base_path, pack_path, and project_path could provide a user-friendly view in the form of a flat folder structure in the workspace.
Flat folder structure - a structure without unnecessary nested folders that contain no files.
David, thanks for the suggestion. Could you take the time to create an example, so that we better understand the request. It can be pretty simple.
Right now we show the Cclass as the identifier for a component group in the project window. I assume you would like to achieve something similar.
@DavidJurajdaNXP , base_path is not the one that shapes the project tree view. To provide a user-friendly view, "project_path" is enough.
This seems to be related how IDE show software components (see here https://github.com/Open-CMSIS-Pack/Open-CMSIS-Pack-Spec/issues/93#issuecomment-1060349943)
Is this correct? If not, what is the underlying problem you try to solve?
I believe the requirement here is about that the physical directory structure, allowing to control the destination of files belonging to a component when copied into the project folder (see #93).
Yes, both #95 and #93 are actually consequence of NXP data model "project_path" attribute.
Let's limit scope of this ticket only to project_path.
project_path = data driven project explorer view
According to preliminary agreement in discussion #93 project_path will be introduced into standard, but actual usage will be tool specific.
Project_path must be applied to file paths and also to include paths in order to preserve data integrity.
In case of category="header" there should be way how to disable automatic include path generation.
<file category="header" name="ports/cmsis-driver/netif/ethernetif.h"/>
Project path is increasing complexity and in order to guarantee data integrity both layouts must be tested by pack vendor. It is price for this feature.
About the "Project View". I think it might be useful to have actually more views in IDE:
component based view - showing content of components
file based view - showing physical code organization in folders
I would call it views on different levels of abstraction. Another level might be "CMSIS layers".
Agreed with @DavidJurajdaNXP , there can be 2 views in IDE workspace project tree
component layout.
cclass, cgroup, csub can be used for this. With this information, developers can directly know the provenance and classification of the component (which pack it belongs to)
source tree layout
project_path can be used to construct it.
It is an organization of the source under a component. Sometimes it could be untidy if sources are placed flat, like
We support this request.
For most of the packs we need: https://github.com/Open-CMSIS-Pack/Open-CMSIS-Pack-Spec/issues/106
But we will deliver our example projects as packs.
And in these packs, we want the files to be copied in the user section of the project.
It is a bit like attr=template but we want it:
for files that are not template but source and header
mandatory (copy by the tool)
flexible: indicate the path
Change Request:
add component attribute "project_path". The implementation is optional and tool-specific and will not change the virtual view.
@DavidJurajdaNXP and @tcsunhao can you please validate this summary.
It is my understanding that this attribute is somewhat comparable to https://github.com/Open-CMSIS-Pack/Open-CMSIS-Pack-Spec/issues/30 however affecting all configuration files of a component, rather than being specified on a per file basis and it only affects the path but not the filename.
@jkrech , Thanks for approving this formally.
The project_path will affect all configuration files as well as other files when NXP tools use it to construct standalone project.
I understand that you want to fill 2 needs:
"simplify" the IDE view by applying project_path instead of the real file system path that can be longer
create a "standalone folder" with all the necessary files copied according to the project_path organization
But, the point which is unclear to me is: what about files going into RTE folder ?
Do you still put them in RTE folder ?
By the way, I assume this is an optional attribute.
@fred-r, I think you are misunderstanding the feature suggested by @tcsunhao .
The projectPath attribute can be used by tools to copy both the config files and the component files of a component into a physical subdirectory of the project directory, as specified by projectPath. The config files are copied alongside the component files and not into any special directory.
The expectation is that a "virtual view" will continue display the files related to a component, and not the physical location of files on disk.
Thanks for the explanation of @jkrech which is accurate and complete.
The project_path is optional. The default value could be the real physical path of the file.
Regarding files inside the RTE folder: for example, startup_MK64F12.s (whose NXP project path is "startup") is physically located in RTE\iar\Device\MK64FN1M0VLL12, but in the project explorer view it can be shown under "startup".
| gharchive/issue | 2022-03-02T09:33:06 | 2025-04-01T04:32:54.787820 | {
"authors": [
"DavidJurajdaNXP",
"ReinhardKeil",
"fred-r",
"jkrech",
"tcsunhao"
],
"repo": "Open-CMSIS-Pack/Open-CMSIS-Pack-Spec",
"url": "https://github.com/Open-CMSIS-Pack/Open-CMSIS-Pack-Spec/issues/95",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2108612697 | Consider linking FailureRecord with LossReport
LossReport has an enumeration including Failure and Inoperative. We could add a link to the FailureRecord for these events.
Perhaps the way to link this would be to Add the Issue object to EnergyLoss object
Resolved by deciding that OMIssue is primary and has both EnergyLoss and FailureRecords as members.
| gharchive/issue | 2024-01-30T19:35:17 | 2025-04-01T04:32:54.793898 | {
"authors": [
"cwhanse",
"jrippingale"
],
"repo": "Open-Orange-Button/Orange-Button-Taxonomy",
"url": "https://github.com/Open-Orange-Button/Orange-Button-Taxonomy/issues/288",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
53741354 | Is Open Addresses UK working with openaddresses.io? Also did logo inspiration come from @OpenPlans?
From Twitter at https://twitter.com/internetrebecca/status/547134344935251968 and moved here from https://github.com/OpenAddressesUK/theodolite/issues/52 .
Response in same twitter thread. Our logo was not inspired by @OpenPlans. We continue to try and contact openaddresses.io.
| gharchive/issue | 2015-01-08T11:36:21 | 2025-04-01T04:32:54.842913 | {
"authors": [
"giacecco",
"peterkwells"
],
"repo": "OpenAddressesUK/forum",
"url": "https://github.com/OpenAddressesUK/forum/issues/7",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
1209414696 | Add Host/HostInterface to C++
What
The mirror of Manager/ManagerInterface for the host representation.
ACs
Host class in the managerAPI C++ namespace.
HostInterface class in the hostAPI C++ namespace (incl. HostInterfacePtr).
Support for:
identifier
displayName
info
Issue for C implementation, with any design notes.
Python bindings as the base for existing Python Host/HostInterface classes
Existing test_host.py tests used for coverage
Additions for the handling of edge cases where invalid values are supplied to constructors/etc.
Out of scope
_interface
logging/audit
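As a rough illustration of the shape being mirrored here, the Python pairing can be sketched like this (method names and return values are illustrative, not the exact OpenAssetIO signatures):

```python
# Sketch of the host-side pairing: the host author implements
# HostInterface (hostApi side); the Host wrapper (managerApi side)
# is what a manager plugin actually sees.
class HostInterface:
    """Implemented by the host application."""
    def identifier(self):
        return "org.example.testhost"      # illustrative value

    def displayName(self):
        return "Example Test Host"

    def info(self):
        return {}

class Host:
    """Wraps a HostInterface for consumption by managers."""
    def __init__(self, host_interface):
        self._interface = host_interface

    def identifier(self):
        return self._interface.identifier()

    def displayName(self):
        return self._interface.displayName()

    def info(self):
        return self._interface.info()

host = Host(HostInterface())
```

The C++ classes in this issue follow the same wrap-and-forward pattern, with HostInterfacePtr providing shared ownership of the interface instance.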
Added C API issue as #423. Nothing jumped out as particularly onerous during development in #422.
| gharchive/issue | 2022-04-20T09:23:29 | 2025-04-01T04:32:54.854132 | {
"authors": [
"feltech",
"foundrytom"
],
"repo": "OpenAssetIO/OpenAssetIO",
"url": "https://github.com/OpenAssetIO/OpenAssetIO/issues/331",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1429593062 | USD AR2 interop
What
Produce a USD AR2 plugin that is an OpenAssetIO host, allowing OpenAssetIO Manager Plugins to be used for USD asset resolution etc, including Python based plugins.
Why
Tasks
[x] https://github.com/OpenAssetIO/usdOpenAssetIOResolver/issues/1
[x] OpenAssetIO/usdOpenAssetIOResolver#5
[x] OpenAssetIO/usdOpenAssetIOResolver#4
If you've not seen it, there's a USD AR2 resolver example that comes along with the USD install, I found the overview for it pretty illuminating. Link.
All done.
| gharchive/issue | 2022-10-31T10:13:58 | 2025-04-01T04:32:54.857113 | {
"authors": [
"elliotcmorris",
"foundrytom"
],
"repo": "OpenAssetIO/OpenAssetIO",
"url": "https://github.com/OpenAssetIO/OpenAssetIO/issues/685",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2064067715 | fix: feat: implement the command SAdd
Fix the bug in the sadd command.
Before the fix:
After the fix:
#104
Thank you for helping me fix the bug. Inspired by you, I went and checked the latest standard; as shown in the figure, is there a problem with the return value here?
Thank you for helping me fix the bug. Inspired by you, I went and checked the latest standard; as shown in the figure, is there a problem with the return value here?
Yes, I overlooked the elements that already exist; we should not simply return size().
Thank you for helping me fix the bug. Inspired by you, I went and checked the latest standard; as shown in the figure, is there a problem with the return value here?
Thanks, I did indeed overlook the elements that already exist; we should not simply return size().
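The return-value semantics being corrected here (SADD returns the number of members actually added, ignoring ones that already exist, rather than the set's size) can be sketched in Python:

```python
def sadd(store, key, *members):
    """Redis-style SADD: returns how many members were newly added,
    skipping members that already existed -- not the resulting size."""
    s = store.setdefault(key, set())
    added = 0
    for m in members:
        if m not in s:
            s.add(m)
            added += 1
    return added

db = {}
r1 = sadd(db, "k", "a", "b")   # both new -> 2
r2 = sadd(db, "k", "b", "c")   # "b" already exists, only "c" is new -> 1
```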
Please don't use Chinese in commit messages.
Resolved.
Also, could you let me know why Chinese shouldn't be used in commits? Thanks.
Please don't use Chinese in commit messages.
Resolved. Also, could you let me know why Chinese shouldn't be used in commits? Thanks.
It's our own convention: unless there is a special reason, it's best not to use Chinese in comments or commit messages.
| gharchive/pull-request | 2024-01-03T13:47:48 | 2025-04-01T04:32:54.863469 | {
"authors": [
"578223592",
"jettcc",
"whr1118"
],
"repo": "OpenAtomFoundation/pikiwidb",
"url": "https://github.com/OpenAtomFoundation/pikiwidb/pull/106",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
1755971523 | time
I want to know: when you run inference, how long does it take to get an answer?
It takes 3 seconds on average for one prediction, which means it will take approximately 9 seconds to finish a query with two api calling step predictions and one final answer step prediction.
Thanks for your reply, and my code is stuck here
It seems that your model and the inputs are both loaded on CPU. Please check if they are loaded on CPU or GPU.
Thank you for your reply.
| gharchive/issue | 2023-06-14T03:09:42 | 2025-04-01T04:32:54.892944 | {
"authors": [
"Tomsentable",
"pooruss"
],
"repo": "OpenBMB/ToolBench",
"url": "https://github.com/OpenBMB/ToolBench/issues/8",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
193254703 | Fixes #141
This makes the Mac style window controls match the native experience more closely (fixes #141).
@jjeffryes there's one thing I wasn't able to figure out... On a native Mac window, when you hover any circle, all 3 icons display at the same time (as pictured below).
I wasn't able to figure out how to get this working in OB. If it's an easy change, it would be nice to layer it in.
I think something like this should get the icons working
.macStyleWindowControls:hover ~ .windowControls a span {
visibility: visible;
}
But I may be messing up the syntax in scss.
@morebrownies the code looks correct to me, and on my Mac VM only the button I'm hovering over changes.
Right, but on a native Mac window, when you hover any window control button, all 3 icons should display at the same time. That's what I'm trying to get working, but can't seem to figure out the correct scss for it.
Oh, I see. I thought you meant the other way around. I'll patch it.
@jjeffryes I'm not seeing any updates here. Are you sure you pushed your changes?
Ok @jjeffryes your PR looks good. I merged it into this PR.
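For reference, one way to get the native all-three-icons hover behaviour is to key the icon visibility off a hover anywhere on the shared container instead of a sibling selector. This is only a sketch: the .macStyleWindowControls and .windowControls class names are taken from the snippet above and may not match the final markup.

```scss
// Sketch: while any part of the control cluster is hovered,
// reveal the icons in all three buttons at once.
.macStyleWindowControls:hover {
  .windowControls a span {
    visibility: visible;
  }
}
```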
| gharchive/pull-request | 2016-12-03T01:16:08 | 2025-04-01T04:32:54.910060 | {
"authors": [
"jjeffryes",
"morebrownies"
],
"repo": "OpenBazaar/openbazaar-desktop",
"url": "https://github.com/OpenBazaar/openbazaar-desktop/pull/147",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
263170701 | Renaming the client cert file
It's been observed that if you have multiple certs with the same name, the one you don't want to be used may be used. To avoid this, I've renamed the cert file that the client must install to a more specific name.
Coverage remained the same at 27.334% when pulling 86df1de95fce24c64a88d0f1bb1a6be73cff3532 on ssl-doc into d9939aefbf9b8753b0c1b18ec7003120eb452882 on master.
| gharchive/pull-request | 2017-10-05T15:34:12 | 2025-04-01T04:32:54.911992 | {
"authors": [
"coveralls",
"rmisio"
],
"repo": "OpenBazaar/openbazaar-go",
"url": "https://github.com/OpenBazaar/openbazaar-go/pull/718",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
1938173675 | Updated CONTRIBUTING.md
#21
The changes in this PR include the steps required to locally develop and test the website.
For reference, here is a before/after of the changes I've made in the file:
I hope this serves the purpose. I'd be glad to make any changes you suggest :)
Please provide images that show the step-by-step process. This will simplify the understanding for those contributing for the first time.
Oh sure, will do!
hey! I've updated the file with new changes, is it alright now?
The changes in this PR include the steps required to locally develop and test the website.
Looks great!!
For reference, here is a before/after of the changes I've made in the file:
I have a suggestion - You should consider including a screenshot of the output file (README.md).
Reason - changes to the code are visible to the maintainer in the Files Changed section.
I hope this serves the purpose. I'd be glad to make any changes you suggest :)
Thank you for the PR.
| gharchive/pull-request | 2023-10-11T16:05:47 | 2025-04-01T04:32:54.978628 | {
"authors": [
"AdarshRawat1",
"Vani177"
],
"repo": "OpenCodeEra/Open-Code-Era-2.0",
"url": "https://github.com/OpenCodeEra/Open-Code-Era-2.0/pull/22",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2703357992 | Plan and update misuse of the SFO term in EB code
This issue is imported from pivotal - Originaly created at Aug 13, 2019 by Michiel Kodde
## Background
In engine block, with the SFO epic. We introduced step up authentication in EB by using the SFO capabilities of StepUp Gateway. We however, in code name every step up action: SFO. This is the working name of this feature an was thus used, even tho it did not correctly express the actual concept.
Proposal
When interacting/configuring the StepUp gateway, we are talking about SFO, as the SFO feature of the StepUp gateway is used to achieve step up authentication in Engine.
Whenever we are dealing with the step-up logic in Engine, which is the greatest part of the new implementation, we should use the term step-up in all file, class, method, and parameter names and documentation.
Details
For now this is based on my personal preferences. Feel free to discuss, or correct me when I'm wrong.
Class/Method/Parameter names
All mentiones of Sfo should be renamed to Stepup
Unless they directly refer to SFO capabilities of a step-up authentication gateway.
Configuration
The sfo. prefix should be changed to stepup.
The gateway can be removed for the loa mapping config items.
The sso portion can be removed
Proposal:
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
;;;;;;;;;;;; Step-up SETTINGS ;;;;;;;;;;;;;;;;;;
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
;; This PCRE regex is used to blacklist incoming AuthnContextClassRef attributes on
stepup.authn_context_class_ref_blacklist_regex = "/http:\/\/test\.openconext\.nl\/assurance\/loa[1-3]/"
stepup.loa.mapping[http://vm.openconext.org/assurance/loa2] = "https://gateway.example.org/authentication/loa2"
stepup.loa.mapping[http://vm.openconext.org/assurance/loa3] = "https://gateway.example.org/authentication/loa3"
stepup.loa.loa1 = "https://gateway.example.org/authentication/loa1"
stepup.sfo.gateway.entityId = "https://engine.vm.openconext.org/authentication/sfo/metadata"
stepup.sfo.gateway.ssoLocation = "https://engine.vm.openconext.org/functional-testing/gateway/second-factor-only/single-sign-on"
stepup.sfo.gateway.keyFile = "/opt/openconext/OpenConext-engineblock/ci/travis/files/engineblock.crt"
Behat
Rename to Stepup.feature
The Feature description now talks about 2FA, this should be renamed to step-up. Note the distinction between step-up authentication as terminology and StepUp as a product name.
Like I mentioned before, consider improving the sentence: we need to support gateway interactions.
Wherever SFO is mentioned, I think we are talking about step-up authentication.
Some of the scenarios, the ones starting with Sfo, need to be rephrased in my opinion. I think EngineBlock should be stated as the actor there.
Normalize abbreviations used: like LoA/SP/IdP and so forth. Please update them to the standard used in other features. For example StepUp wiki uses LoA.
Example of a rewritten scenario:
Scenario: EngineBlock should pass step-up authentication if LoA level is not met, but allow-no-token is configured
Given the SP "SSO-SP" requires step-up authentication at LoA "http://test.openconext.nl/assurance/loa2"
And the SP "SSO-SP" allows step-up authentication without a token
When I log in at "SSO-SP"
And I select "SSO-IdP" on the WAYF
And I pass through EngineBlock
And I pass through the IdP
And I pass through EngineBlock
Then step-up authentication fails when the specified loa can not be given
And I give my consent
And I pass through EngineBlock
Then the url should match "/functional-testing/SSO-SP/acs"
Could the 'Given SFO is used' Gherkin line be removed completely, as it isn't used?
And also take a look at the Gherkin syntax.
see for example:
https://github.com/OpenConext/OpenConext-engineblock/pull/729#discussion_r313248162 (bstrooband - Aug 13, 2019)
@thijskh what should happen with the endpoints?
Should they be renamed too?
/authentication/sfo/metadata
/authentication/sfo/consume-assertion
Maybe it would be more intuitive to use stepup instead? (bstrooband - Aug 19, 2019)
The engine 'home page' https://engine.test2.surfconext.nl still uses the term "SFO". (Thijs Kinkhorst - Aug 20, 2019)
This should be fixed (bstrooband - Aug 27, 2019)
| gharchive/issue | 2024-11-28T22:17:47 | 2025-04-01T04:32:54.993440 | {
"authors": [
"phavekes"
],
"repo": "OpenConext/OpenConext-engineblock",
"url": "https://github.com/OpenConext/OpenConext-engineblock/issues/1574",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
618452740 | Error on omitting Stepup-certificate (v5.13)
This may already have been covered by the migration from /etc/openconext/engineblock.ini to a YAML config file, but in that case this issue may be kept for future reference for other people bumping into it. It probably exists as of 5.13, where the new stepup functionality was added.
Case:
I upgraded from 5.12 to 6.1.3
I didn't add the new Stepup-config, simply because I'm not using Stepup for this EB-instance.
My certificate is not in the default /etc/openconext/engineblock.crt location.
Composer install went fine, but on accessing EB from the browser in an authflow the following error occured:
May 14 20:16:25 sv1811042 EBLOG[11331]: [2020-05-14 20:16:25] app.DEBUG: Caught Exception "OpenConext\EngineBlock\Exception\InvalidArgumentException":"Keyfile '/etc/openconext/engineblock.crt' should be a valid file" {"session_id":"h7c9ljfn4q0k13erctj8ohsnt7","request_id":"5ebd8af9728c2"} []
May 14 20:16:25 sv1811042 EBLOG[11331]: [2020-05-14 20:16:25] engineblock.ERROR: Keyfile '/etc/openconext/engineblock.crt' should be a valid file | Caught Unhandled generic exception [..]
I'd argue that EB should be perfectly usable without any of the Stepup stuff configured.
I eventually solved it by setting stepup.gateway.sfo.keyFile to an actual certificate. The other stepup settings are still left out and it's working fine now.
Perhaps changing the default for this setting to the value of encryption.keys.default.publicFile would be a solid fix?
Hi Tim, thanks for the report. I've created a PR with a fix for the described issue.
https://github.com/OpenConext/OpenConext-engineblock/pull/853
Are you able to verify if this will work for you? I'll try to do my best to let this land in the next patch version of 6.2.
No bueno.. If left out from the configuration it will try and load /etc/openconext/engineblock.crt.
It's still the same assertion failing, so that means it must be fed with some default value elsewhere.
Hi Tim, the validation is moved from the constructor completely. The validation now only happens JIT when an entity is configured to use the Stepup callout functionality. So if you don't configure it it won't get validated. This should help you and would suit the majority of the Openconext users.
https://github.com/OpenConext/OpenConext-engineblock/pull/853
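The pattern described in this fix — wait, in plain terms: store the value in the constructor, validate just-in-time on first use — can be sketched in Python (hypothetical names; EngineBlock itself is PHP, so this only illustrates the idea, not the actual patch):

```python
# Sketch of moving validation out of the constructor and into the
# first actual use, so an unconfigured feature never gets validated.
import os

class StepupKeyConfig:
    def __init__(self, key_file=None):
        # Constructor only stores the value; no validation here.
        self._key_file = key_file

    def key_file(self):
        # Validation happens just-in-time, only when a caller
        # (an entity using the Stepup callout) asks for the key.
        if self._key_file is None or not os.path.isfile(self._key_file):
            raise ValueError(
                f"Keyfile '{self._key_file}' should be a valid file")
        return self._key_file

config = StepupKeyConfig()   # no Stepup configured: constructing is fine
try:
    config.key_file()        # only fails when the callout is actually used
    used = True
except ValueError:
    used = False
```

With this shape, deployments that never configure Stepup never hit the keyfile check at all, which matches the behaviour described above.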
I wanted to let you know this just landed in the latest release:
https://github.com/OpenConext/OpenConext-engineblock/releases/tag/6.2.1
Great, thanks @pablothedude !
Fixed in EB 6.2.1.
| gharchive/issue | 2020-05-14T18:38:06 | 2025-04-01T04:32:54.999881 | {
"authors": [
"pablothedude",
"thijskh",
"tvdijen"
],
"repo": "OpenConext/OpenConext-engineblock",
"url": "https://github.com/OpenConext/OpenConext-engineblock/issues/852",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
717109271 | Improve accuracy of case-C test
Previously the select-raa functionality was tested by piggybacking on the use-raa feature. That was a bug, fixed in Middleware.
See: https://www.pivotaltracker.com/story/show/175056146 for a full bug report and more backgrounds
See: https://github.com/OpenConext/Stepup-Middleware/pull/315 use this branch to run the behat tests
All other components should run on the latest develop
Were you able to run all tests in the dev-VM? The change does indeed seem to reflect the situation, but I'm not able to run them all due to the migration of all components and the lack of time to upgrade solely for a review.
| gharchive/pull-request | 2020-10-08T07:48:10 | 2025-04-01T04:32:55.002251 | {
"authors": [
"MKodde",
"pablothedude"
],
"repo": "OpenConext/Stepup-Deploy",
"url": "https://github.com/OpenConext/Stepup-Deploy/pull/116",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1650270592 | fix for #29, update nodejs
Also removes sonar-scanner. It's better to pull that in as a GitHub action (SonarSource/sonarcloud-github-c-cpp@v1) so it stays up to date without re-publishing the container.
| gharchive/pull-request | 2023-04-01T03:25:24 | 2025-04-01T04:32:55.003532 | {
"authors": [
"thirtytwobits"
],
"repo": "OpenCyphal/docker_toolchains",
"url": "https://github.com/OpenCyphal/docker_toolchains/pull/30",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
264513408 | diagrams for RTL release.
code reading notes for reference.
Nice! Could you add a short note to the res-list, a one-sentence explanation?
Except for partition, which is the view of the top level, the other file names are the module names.
What is res-list?
https://github.com/OpenDLA/OpenDLA/blob/master/resource-list.md
res-list is this list; you can think of it as an index.
Thanks ^_^
| gharchive/pull-request | 2017-10-11T09:19:51 | 2025-04-01T04:32:55.005443 | {
"authors": [
"jszheng",
"lazyparser"
],
"repo": "OpenDLA/OpenDLA",
"url": "https://github.com/OpenDLA/OpenDLA/pull/9",
"license": "CC0-1.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2271400480 | [Browsing]: Implement an existing web browser agent
What problem or use case are you trying to solve?
Currently we do not have a very powerful agent for browsing the web, but this could be rectified by implementing existing work on web browsing.
Do you have thoughts on the technical implementation?
Given that we are implementing web actions through browsergym (#1469), a natural choice of a web agent to implement would be BrowserGym's demo_agent which also gets good scores on benchmarks such as WebArena.
Describe alternatives you've considered
There are many other web browsing agents, so other strong browsing agents that are easy to implement could be considered.
Once we have a dedicated web browsing agent, we could also incorporate richer web browsing into our coding agents, which would improve the coding agents' abilities to do complex tasks.
Additional context
Blocked by #1469
This has been solved by BrowserAgent
| gharchive/issue | 2024-04-30T12:54:54 | 2025-04-01T04:32:55.017190 | {
"authors": [
"neubig"
],
"repo": "OpenDevin/OpenDevin",
"url": "https://github.com/OpenDevin/OpenDevin/issues/1470",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2304308533 | [Bug]: Ollama API openai.APIConnectionError: Connection error.
Is there an existing issue for the same bug?
[X] I have checked the troubleshooting document at https://opendevin.github.io/OpenDevin/modules/usage/troubleshooting
[X] I have checked the existing issues.
Describe the bug
A request sent to Ollama through the OpenAI-compatible API loads the model in Ollama and then errors out in OpenDevin:
==============
STEP 0
Give Feedback / Get Help: https://github.com/BerriAI/litellm/issues/new
LiteLLM.Info: If you need to debug this error, use `litellm.set_verbose=True'.
Provider List: https://docs.litellm.ai/docs/providers
20:55:15 - opendevin:ERROR: agent_controller.py:109 - Error while running the agent: OpenAIException - Traceback (most recent call last):
File "/app/.venv/lib/python3.12/site-packages/httpx/_transports/default.py", line 69, in map_httpcore_exceptions
yield
File "/app/.venv/lib/python3.12/site-packages/httpx/_transports/default.py", line 113, in iter
for part in self._httpcore_stream:
File "/app/.venv/lib/python3.12/site-packages/httpcore/_sync/connection_pool.py", line 367, in iter
raise exc from None
File "/app/.venv/lib/python3.12/site-packages/httpcore/_sync/connection_pool.py", line 363, in iter
for part in self._stream:
File "/app/.venv/lib/python3.12/site-packages/httpcore/_sync/http11.py", line 349, in iter
raise exc
File "/app/.venv/lib/python3.12/site-packages/httpcore/_sync/http11.py", line 341, in iter
for chunk in self._connection._receive_response_body(**kwargs):
File "/app/.venv/lib/python3.12/site-packages/httpcore/_sync/http11.py", line 210, in _receive_response_body
event = self._receive_event(timeout=timeout)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/app/.venv/lib/python3.12/site-packages/httpcore/_sync/http11.py", line 220, in _receive_event
with map_exceptions({h11.RemoteProtocolError: RemoteProtocolError}):
File "/usr/local/lib/python3.12/contextlib.py", line 158, in exit
self.gen.throw(value)
File "/app/.venv/lib/python3.12/site-packages/httpcore/_exceptions.py", line 14, in map_exceptions
raise to_exc(exc) from exc
httpcore.RemoteProtocolError: peer closed connection without sending complete message body (received 0 bytes, expected 407)
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/app/.venv/lib/python3.12/site-packages/openai/_base_client.py", line 952, in _request
response = self._client.send(
^^^^^^^^^^^^^^^^^^
File "/app/.venv/lib/python3.12/site-packages/httpx/_client.py", line 928, in send
raise exc
File "/app/.venv/lib/python3.12/site-packages/httpx/_client.py", line 922, in send
response.read()
File "/app/.venv/lib/python3.12/site-packages/httpx/_models.py", line 813, in read
self._content = b"".join(self.iter_bytes())
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/app/.venv/lib/python3.12/site-packages/httpx/_models.py", line 829, in iter_bytes
for raw_bytes in self.iter_raw():
File "/app/.venv/lib/python3.12/site-packages/httpx/_models.py", line 883, in iter_raw
for raw_stream_bytes in self.stream:
File "/app/.venv/lib/python3.12/site-packages/httpx/_client.py", line 126, in iter
for chunk in self._stream:
File "/app/.venv/lib/python3.12/site-packages/httpx/_transports/default.py", line 112, in iter
with map_httpcore_exceptions():
File "/usr/local/lib/python3.12/contextlib.py", line 158, in exit
self.gen.throw(value)
File "/app/.venv/lib/python3.12/site-packages/httpx/_transports/default.py", line 86, in map_httpcore_exceptions
raise mapped_exc(message) from exc
httpx.RemoteProtocolError: peer closed connection without sending complete message body (received 0 bytes, expected 407)
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/app/.venv/lib/python3.12/site-packages/litellm/llms/openai.py", line 533, in completion
raise e
File "/app/.venv/lib/python3.12/site-packages/litellm/llms/openai.py", line 492, in completion
response = openai_client.chat.completions.create(**data, timeout=timeout) # type: ignore
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/app/.venv/lib/python3.12/site-packages/openai/_utils/_utils.py", line 277, in wrapper
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/app/.venv/lib/python3.12/site-packages/openai/resources/chat/completions.py", line 590, in create
return self._post(
^^^^^^^^^^^
File "/app/.venv/lib/python3.12/site-packages/openai/_base_client.py", line 1240, in post
return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/app/.venv/lib/python3.12/site-packages/openai/_base_client.py", line 921, in request
return self._request(
^^^^^^^^^^^^^^
File "/app/.venv/lib/python3.12/site-packages/openai/_base_client.py", line 976, in _request
return self._retry_request(
^^^^^^^^^^^^^^^^^^^^
File "/app/.venv/lib/python3.12/site-packages/openai/_base_client.py", line 1053, in _retry_request
return self._request(
^^^^^^^^^^^^^^
File "/app/.venv/lib/python3.12/site-packages/openai/_base_client.py", line 976, in _request
return self._retry_request(
^^^^^^^^^^^^^^^^^^^^
File "/app/.venv/lib/python3.12/site-packages/openai/_base_client.py", line 1053, in _retry_request
return self._request(
^^^^^^^^^^^^^^
File "/app/.venv/lib/python3.12/site-packages/openai/_base_client.py", line 986, in _request
raise APIConnectionError(request=request) from err
openai.APIConnectionError: Connection error.
20:55:15 - opendevin:INFO: agent_controller.py:150 - Setting agent(CodeActAgent) state from AgentState.RUNNING to AgentState.ERROR
Current Version
ghcr.io/opendevin/opendevin:main
Installation and Configuration
docker run \
-it \
--pull=always \
-e SANDBOX_USER_ID=$(id -u) \
-e LLM_MODEL="openai/codellama:7b" \
-e LLM_API_KEY="sk-XXX" \
-e LLM_BASE_URL="http://ollama.local:3000/ollama/v1" \
-e WORKSPACE_MOUNT_PATH=$WORKSPACE_BASE \
-v $WORKSPACE_BASE:/opt/workspace_base \
-v /var/run/docker.sock:/var/run/docker.sock \
-p 3000:3000 \
--add-host host.docker.internal:host-gateway \
ghcr.io/opendevin/opendevin:main
Model and Agent
Does it with all models and agents I have tried.
Reproduction Steps
No response
Logs, Errors, Screenshots, and Additional Context
No response
Tried using the ollama/MODEL tag, but that just seems to ignore my API key and get a 401 error since it doesn't authenticate; the API is closed and needs the key. The endpoints and API keys are working fine in other applications.
Have you tried to set the model in the UI, after you start?
A model name that should work is the name returned by ollama list.
Yep, tried it every way. It seems like it is loading the model into GPU and then just exiting or the connection closes for whatever reason. Nvidia-SMI shows it loads up and starts.
Run this to check whether LLM is working properly.
from litellm import completion
from datetime import datetime

config = {
    'LLM_MODEL': 'gpt-4-turbo-2024-04-09',
    'LLM_API_KEY': 'your-api-key',
    'LLM_BASE_URL': None
}

messages = [{"content": "If there are 10 books in a room and I read 2, how many books are still in the room?", "role": "user"}]

dt = datetime.now()
response = completion(model=config['LLM_MODEL'],
                      api_key=config['LLM_API_KEY'],
                      base_url=config['LLM_BASE_URL'],
                      messages=messages)
content = response.choices[0].message.content
print(content)

# The correct answer is 10: reading books does not remove them from the room.
if '10' in content:
    print('--> Correct answer! 🎉')
else:
    print('There are still 10 books in the room; reading them does not reduce the count. Consider exploring more accurate models for better results.')

dt2 = datetime.now()
print('Used model:', config['LLM_MODEL'])
print(f"Time taken: {(dt2-dt).total_seconds():.1f}s")
| gharchive/issue | 2024-05-18T21:01:42 | 2025-04-01T04:32:55.041022 | {
"authors": [
"Mookins",
"SmartManoj",
"enyst"
],
"repo": "OpenDevin/OpenDevin",
"url": "https://github.com/OpenDevin/OpenDevin/issues/1893",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1972053055 | Could you please provide the script to generate town06_addition?
Thanks for your amazing work!
when I try to generate dataset, I found there is no script of town06_addition compared to your provided dataset.
could you please provide the script to generate town06_addition?
Hi, I have uploaded the missing xml file. Thank you for pointing this out. And we use tools/generate_random_routes.py to generate these routes.
Hi, I have uploaded the missing xml file. Thank you for pointing this out. And we use tools/generate_random_routes.py to generate these routes.
Thank you so much!
| gharchive/issue | 2023-11-01T09:56:08 | 2025-04-01T04:32:55.044687 | {
"authors": [
"penghao-wu",
"xanhug"
],
"repo": "OpenDriveLab/TCP",
"url": "https://github.com/OpenDriveLab/TCP/issues/57",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
743748731 | Custom maps
A map editor to piece together map tiles to make new maps
Import maps from outside creations made outside the game map editor
Basically more advanced map and model editing
REDRIVER2 will stay in its state close to the original game, without dramatically changing the gameplay.
All these features you've been opening might be planned for the actual OpenDriver2 project, which is already planned
| gharchive/issue | 2020-11-16T11:12:57 | 2025-04-01T04:32:55.046263 | {
"authors": [
"SoapyMan",
"binarygeek119"
],
"repo": "OpenDriver2/REDRIVER2",
"url": "https://github.com/OpenDriver2/REDRIVER2/issues/21",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1997110625 | Two questions about FAST
Q1. There are a total of seven turbines here, each separated by a distance of 520 metres. The wake trend of the first four turbines is reasonable, but the wind speed at the last three gradually rises; even if I reduce the spacing between the last three turbines, the wind speed still rises. How is that? The wind I use is steady wind.
Q2. In the wind speed file generated by TurbSim, is the wind speed emanating from one point in all directions, or does it flow along the x-axis?
Extremely grateful for your answer and look forward to your response!
Dear @Yongbyon,
Regarding (1), it is very difficult to interpret your results with only pictures of data columns. Can you share plots of the results to clarify your question. Regardless, it appears that you are running with steady, uniform inflow. Is that correct? Please note that FAST.Farm has not been calibrated for such a condition and may not produce accurate results, at least until the new wake-added turbulence model is added within https://github.com/OpenFAST/openfast/pull/1785.
Regarding (2), TurbSim generates a time-series of 3 components of the wind velocity (u,v,w) at each point in a 2D grid of points in the YZ plane (so-called "full-field" turbulence). Within OpenFAST and FAST.Farm, a 3D + time domain of inflow is generated by propagating the 2D planes along the X axis based on the mean wind speed.
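The propagation described above is Taylor's frozen-turbulence hypothesis: a point downstream sees the YZ plane that passed the origin `x / U_mean` seconds earlier. A simplified sketch (illustrative only, not the actual InflowWind implementation; grid sizes and the periodic wrap-around are assumptions):

```python
import numpy as np

# Illustrative sketch: turn a TurbSim-style time series of (u, v, w) on a
# 2D YZ grid into a 3D + time field by advecting the planes along X with
# the mean wind speed U_mean (Taylor's frozen-turbulence hypothesis).

def sample_field(planes, dt, u_mean, x, t):
    """planes: array (nt, 3, ny, nz); return the (3, ny, nz) plane seen at (x, t)."""
    t_shifted = t - x / u_mean          # a downstream point sees an earlier plane
    idx = int(round(t_shifted / dt)) % planes.shape[0]  # periodic wrap in time
    return planes[idx]

rng = np.random.default_rng(0)
planes = rng.normal(size=(64, 3, 5, 5))  # 64 time steps, u/v/w, 5x5 YZ grid
dt, u_mean = 0.1, 8.0

# A point 0.8 m downstream sees the plane that passed x=0 one step (0.1 s) earlier.
assert np.allclose(sample_field(planes, dt, u_mean, x=0.8, t=1.0),
                   sample_field(planes, dt, u_mean, x=0.0, t=0.9))
```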
Best regards,
Dear @jjonkman
(1) When using turbulent wind in FAST.Farm, high-resolution and low-resolution grids need to be generated separately with TurbSim?
(2) If so, in which InflowFile should each of the two output files (.bts) be referenced? The low-resolution output file in the InflowFile of the .fstf file, and the high-resolution output file in the InflowFile of the .fst file?
Thanks,
| gharchive/issue | 2023-11-16T15:20:19 | 2025-04-01T04:32:55.087093 | {
"authors": [
"May0048",
"Yongbyon",
"jjonkman"
],
"repo": "OpenFAST/openfast",
"url": "https://github.com/OpenFAST/openfast/issues/1878",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
756084969 | Versioning
Hey,
I would like to lock the version of the fastp linux binary I am using when downloading it from http://opengene.org/fastp/fastp. Is it possible to include the respective version in the URI -
0.21.0 as of now - and if not, could you please include it? Another way could be to tag the newest version here in the repo - I think the last version tag is 0.20.1. Thanks a lot!
Martin
I second this request and would like to suggest that the binary is distributed via GitHub releases. So that users know exactly what version of fastp is present in the binary, and so that releases are stably tracked through GitHub.
When I build docker image for programs such as fastp, I prefer to download binaries from the GitHub release to aid in reproducibility.
| gharchive/issue | 2020-12-03T10:50:30 | 2025-04-01T04:32:55.109944 | {
"authors": [
"kapsakcj",
"marrip"
],
"repo": "OpenGene/fastp",
"url": "https://github.com/OpenGene/fastp/issues/305",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
84641647 | For osmLayer, make tile.crossOrigin configurable
For osmLayer, this patch makes tile.crossOrigin configurable (default "anonymous", or "use-credentials") by passing a boolean 'useCredentials' argument. This is necessary for loading map tiles from WMS servers that require authentication.
Example:
var wms = map.createLayer('osm', {useCredentials: true});
thanks @mbertrand. How do you pass those credentials?
Credentials are read from cookies when tile.crossOrigin = 'use-credentials'; the cookies are not passed in the request header if tile.crossOrigin='anonymous'.
For example, if you go to http://geodata.epidemico.com/geojs, you will see an OSM basemap but not a WMS country boundaries layer it tries to load (because it requires authentication). Go to http://geodata.epidemico.com and log in with temporary account username 'geojs_user', password 'geojs' - then revisit the geojs page and you should see the WMS country boundaries layer appear. Without this patch, the WMS layer will not appear even after logging in.
Got it. That's what I thought, but just wanted to confirm. I didn't know that cookies would be passed like that. Thanks for the explanation. LGTM :+1:
| gharchive/pull-request | 2015-06-03T16:08:17 | 2025-04-01T04:32:55.116481 | {
"authors": [
"aashish24",
"mbertrand"
],
"repo": "OpenGeoscience/geojs",
"url": "https://github.com/OpenGeoscience/geojs/pull/398",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
2462992360 | Replace fontstack with something prettier and more multilingual
In #359, we introduced an “OpenHistorical” family of fontstacks for map labels that pulls in Open Sans for the most common Western scripts and Unifont for everything else. Neither font is particularly attractive. This combination also lacks support for many of the historical scripts that we’re starting to expose in the vector tiles as part of #679. Imagine being able to navigate to Mesopotamia in the Bronze Age and see the names of places in both your language and cuneiform. Stuff like that would be an eye-opener in historical GIS. Unfortunately, our fontstack doesn’t cover cuneiform codepoints, so OpenHistoricalMap/openhistoricalmap-embed#8 draws a blank.
OpenStreetMap Americana used to piggyback off OpenHistorical but quickly outgrew it and created an alternative fontstack and toolchain, fontstack66, based on the Noto project. The various fonts under the Noto umbrella add up to much wider Unicode coverage than Unifont, and they all look pretty attractive compared to Unifont. We could fork fontstack66 and supplement it with the extra Noto fonts we care about, then configure the site to publish that fork alongside the map-styles repository (from which we currently serve up font PBFs).
Any support for historical writing systems would require a change being discussed in maplibre/maplibre-gl-js#4550 to request glyph PBFs for codepages above U+FFFF.
We could fork fontstack66 and supplement it with the extra Noto fonts we care about
The default configuration already includes some non-BMP codepoint ranges, but we’d need to add some more fonts like Noto Anatolian Hieroglyphs and Noto Cuneiform that probably wouldn’t make a ton of sense for OSM Americana to serve from their domain. After adding the fonts to the configuration, we’d configure GitHub Pages to publish to openhistoricalmap.org similar to how the map-styles repository gets published. Apart from that, I don’t anticipate needing to manage a fork as part of the main deployment process.
| gharchive/issue | 2024-08-13T10:27:43 | 2025-04-01T04:32:55.129952 | {
"authors": [
"1ec5"
],
"repo": "OpenHistoricalMap/issues",
"url": "https://github.com/OpenHistoricalMap/issues/issues/870",
"license": "CC0-1.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1327992208 | [BUG] oiiotool with -i:ch flag crashes on multipart exr
Describe the bug
Using oiiotool to read an EXR with a <z_channel> while using the flag -i:ch=R,G,B,A results in a crash. If changing the flag to -i <file> --ch R,G,B,A, it does not crash. If the file does not have the <z_channel>, it does not crash.
The original EXR was created from a 3delight render, with an extra Image Layer (AOV) "Z (depth)".
The crash is happening with OIIO 2.3.10, 2.1.9, 2.4.0dev (from Arnold). The command silently fails (no crash) with OIIO from Arnold 2.2.1, 2.1.4dev. The command works as expected with OIIO from Arnold 2.1.0dev. All my tests were done on Windows10.
To Reproduce
Steps to reproduce the behavior:
Call oiiotool.exe -i:ch=R,G,B,A "in.exr" -o "out.exr"
Crash:
1# OpenImageIO_v2_3::Sysutil::stacktrace in OpenImageIO_Util
2# OpenImageIO_v2_3::Sysutil::stacktrace in OpenImageIO_Util
3# seh_filter_exe in ucrtbase
4# 0x00007FF66322FC4A in oiiotool
5# _C_specific_handler in VCRUNTIME140
6# _chkstk in ntdll
7# RtlRaiseException in ntdll
8# KiUserExceptionDispatcher in ntdll
9# OpenImageIO_v2_3::ImageBuf::reset in OpenImageIO
10# OpenImageIO_v2_3::ImageBufAlgo::channels in OpenImageIO
11# 0x00007FF6631B4748 in oiiotool
12# 0x00007FF66320CCA7 in oiiotool
13# 0x00007FF6632032B9 in oiiotool
14# OpenImageIO_v2_3::ArgParse::parse_args in OpenImageIO_Util
15# OpenImageIO_v2_3::ArgParse::parse_args in OpenImageIO_Util
16# 0x00007FF6631FEF5D in oiiotool
17# 0x00007FF663216331 in oiiotool
18# 0x00007FF66322ADA0 in oiiotool
19# BaseThreadInitThunk in KERNEL32
20# RtlUserThreadStart in ntdll
0# OpenImageIO_v2_3::Sysutil::hardware_concurrency in OpenImageIO_Util
1# OpenImageIO_v2_3::Sysutil::stacktrace in OpenImageIO_Util
2# OpenImageIO_v2_3::Sysutil::stacktrace in OpenImageIO_Util
3# raise in ucrtbase
4# OpenImageIO_v2_3::Sysutil::stacktrace in OpenImageIO_Util
5# seh_filter_exe in ucrtbase
6# 0x00007FF66322FC4A in oiiotool
7# _C_specific_handler in VCRUNTIME140
8# _chkstk in ntdll
9# RtlRaiseException in ntdll
10# KiUserExceptionDispatcher in ntdll
11# OpenImageIO_v2_3::ImageBuf::reset in OpenImageIO
12# OpenImageIO_v2_3::ImageBufAlgo::channels in OpenImageIO
13# 0x00007FF6631B4748 in oiiotool
14# 0x00007FF66320CCA7 in oiiotool
15# 0x00007FF6632032B9 in oiiotool
16# OpenImageIO_v2_3::ArgParse::parse_args in OpenImageIO_Util
17# OpenImageIO_v2_3::ArgParse::parse_args in OpenImageIO_Util
18# 0x00007FF6631FEF5D in oiiotool
19# 0x00007FF663216331 in oiiotool
20# 0x00007FF66322ADA0 in oiiotool
21# BaseThreadInitThunk in KERNEL32
22# RtlUserThreadStart in ntdll
Expected behavior
A generated out.exr with RGBA layers only.
Evidence
Attached in.zip.
Platform information:
Input formats supported: bmp, cineon, dds, dpx, ffmpeg, fits, gif, hdr, iff, jpeg, null, openexr, png, pnm, psd, raw, rla, sgi, socket, softimage, targa, tiff, webp, zfile
Output formats supported: bmp, dpx, fits, gif, hdr, iff, jpeg, null, openexr, png, pnm, rla, sgi, socket, targa, term, tiff, webp, zfile
OpenColorIO 2.1.1, color config: built-in
Known color spaces: "linear", "default", "rgb", "RGB", "sRGB", "Rec709"
Filters available: box, triangle, gaussian, sharp-gaussian, catmull-rom, blackman-harris, sinc, lanczos3, radial-lanczos3, nuke-lanczos6, mitchell, bspline, disk, cubic, keys, simon, rifman
Dependent libraries: FFMpeg 4.4.1 (Lavf58.76.100), gif_lib 5.2.1, jpeg-turbo 2.1.3/jp62, null 1.0, OpenEXR 2.5.0, libpng 1.6.37, libraw 0.19.0-Beta1, LIBTIFF Version 4.3.0, Webp 1.2.2
OIIO 2.3.10 built for C++14/199711 sse2
Running on 8 cores 31.9GB sse2,sse3,ssse3,sse41,sse42,avx,avx2,fma,f16c,popcnt,rdrand
Windows 10 Pro 64
I can reproduce with this example, thanks. Looking into it, stay tuned...
Proposed fix in #3513
If you need a workaround, it should work to just use --ch separately from -i:
oiiotool in.exr --ch R,G,B,A -o out.exr
| gharchive/issue | 2022-08-04T02:44:31 | 2025-04-01T04:32:55.156767 | {
"authors": [
"lgritz",
"naniBox"
],
"repo": "OpenImageIO/oiio",
"url": "https://github.com/OpenImageIO/oiio/issues/3509",
"license": "bsd-3-clause",
"license_type": "permissive",
"license_source": "bigquery"
} |
763443715 | Removed some duplicates in maketx.rst doc
Removed some duplicates in documentation
Strange! Not quite sure how that happened.
Good catch, thanks for the fix.
How can we get the same behaviour as maketx --constant-color-detect with oiiotool? Is it -otex:constant_color_detect=1?
Yes, I think that works, though now I see that it's not in the documentation. I'll add it.
This also works, and is equivalent:
oiiotool in.tif -attrib "maketx:constant_color_detect" 1 -otex out.tx
| gharchive/pull-request | 2020-12-12T08:19:56 | 2025-04-01T04:32:55.158803 | {
"authors": [
"Xelt",
"lgritz"
],
"repo": "OpenImageIO/oiio",
"url": "https://github.com/OpenImageIO/oiio/pull/2785",
"license": "bsd-3-clause",
"license_type": "permissive",
"license_source": "bigquery"
} |
495328789 | handle invalid dependencies
fix for #595
Adding a dependency that does not exist or cannot be downloaded from Maven Central or any configured repositories will trigger the following error:
Unable to resolve artifact: io.openliberty.features:abcdefgh:1.0
Added a dependencyChange helper method so that an invalid dependency change does not also trigger an "Unhandled change detected in pom.xml. Restart liberty:dev mode for it to take effect." message.
| gharchive/pull-request | 2019-09-18T16:21:47 | 2025-04-01T04:32:55.256193 | {
"authors": [
"kathrynkodama"
],
"repo": "OpenLiberty/ci.maven",
"url": "https://github.com/OpenLiberty/ci.maven/pull/604",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
991317520 | [SPARK] [INTEGRATION] Missing dependency in using OpenLineageSparkListener
Describe the bug
Missing dependency error when following example in integration/spark/README.md
To Reproduce
Run the example as in the README (jupyter example)
Fix
Add the missing dependency io.openlineage:openlineage-java:0.2.1
.config('spark.jars.packages', 'io.openlineage:openlineage-java:0.2.1')
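To see how the missing coordinate fits alongside the listener registration, here is a minimal pure-Python sketch of the config entries that would be passed to `SparkSession.builder.config(...)`. The listener class name is an assumption based on the OpenLineage Spark docs, not verified against this exact version:

```python
# Sketch of the Spark configuration needed so that both the OpenLineage
# listener jar and its openlineage-java dependency resolve. The listener
# class name is assumed from the OpenLineage Spark documentation.

def openlineage_spark_conf(version="0.2.1"):
    """Build the config entries to pass to SparkSession.builder.config(...)."""
    packages = ",".join([
        f"io.openlineage:openlineage-spark:{version}",
        f"io.openlineage:openlineage-java:{version}",  # the dependency missing in the README
    ])
    return {
        "spark.jars.packages": packages,
        "spark.extraListeners": "io.openlineage.spark.agent.OpenLineageSparkListener",
    }

conf = openlineage_spark_conf()
for key, value in conf.items():
    print(f"{key}={value}")
```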
Trace
Ivy Default Cache set to: /home/jovyan/.ivy2/cache
The jars for the packages stored in: /home/jovyan/.ivy2/jars
io.openlineage#openlineage-spark added as a dependency
:: resolving dependencies :: org.apache.spark#spark-submit-parent-24957b2b-8ca2-4c45-9f8d-1508e4b14255;1.0
confs: [default]
found io.openlineage#openlineage-spark;0.2.1 in central
found org.javassist#javassist;3.27.0-GA in central
found com.github.ok2c.hc5#hc5-async-json;0.2.1 in central
found org.apache.httpcomponents.core5#httpcore5;5.0 in central
found org.apache.httpcomponents.client5#httpclient5;5.0.3 in central
found org.apache.httpcomponents.core5#httpcore5;5.0.2 in central
found org.apache.httpcomponents.core5#httpcore5-h2;5.0.2 in central
found org.slf4j#slf4j-api;1.7.25 in central
found commons-codec#commons-codec;1.13 in central
found com.fasterxml.jackson.core#jackson-databind;2.12.2 in central
found com.fasterxml.jackson.core#jackson-annotations;2.12.2 in central
found com.fasterxml.jackson.core#jackson-core;2.12.2 in central
found com.fasterxml.jackson.datatype#jackson-datatype-jsr310;2.12.2 in central
found com.fasterxml.jackson.module#jackson-module-scala_2.11;2.12.2 in central
found com.thoughtworks.paranamer#paranamer;2.8 in central
:: resolution report :: resolve 3604ms :: artifacts dl 27ms
:: modules in use:
com.fasterxml.jackson.core#jackson-annotations;2.12.2 from central in [default]
com.fasterxml.jackson.core#jackson-core;2.12.2 from central in [default]
com.fasterxml.jackson.core#jackson-databind;2.12.2 from central in [default]
com.fasterxml.jackson.datatype#jackson-datatype-jsr310;2.12.2 from central in [default]
com.fasterxml.jackson.module#jackson-module-scala_2.11;2.12.2 from central in [default]
com.github.ok2c.hc5#hc5-async-json;0.2.1 from central in [default]
com.thoughtworks.paranamer#paranamer;2.8 from central in [default]
commons-codec#commons-codec;1.13 from central in [default]
io.openlineage#openlineage-spark;0.2.1 from central in [default]
org.apache.httpcomponents.client5#httpclient5;5.0.3 from central in [default]
org.apache.httpcomponents.core5#httpcore5;5.0.2 from central in [default]
org.apache.httpcomponents.core5#httpcore5-h2;5.0.2 from central in [default]
org.javassist#javassist;3.27.0-GA from central in [default]
org.slf4j#slf4j-api;1.7.25 from central in [default]
:: evicted modules:
org.apache.httpcomponents.core5#httpcore5;5.0 by [org.apache.httpcomponents.core5#httpcore5;5.0.2] in [default]
com.fasterxml.jackson.core#jackson-databind;2.9.6 by [com.fasterxml.jackson.core#jackson-databind;2.12.2] in [default]
---------------------------------------------------------------------
| | modules || artifacts |
| conf | number| search|dwnlded|evicted|| number|dwnlded|
---------------------------------------------------------------------
| default | 17 | 0 | 0 | 2 || 14 | 0 |
---------------------------------------------------------------------
:: problems summary ::
:::: WARNINGS
module not found: io.openlineage#openlineage-java;0.0.1-SNAPSHOT
==== local-m2-cache: tried
file:/home/jovyan/.m2/repository/io/openlineage/openlineage-java/0.0.1-SNAPSHOT/openlineage-java-0.0.1-SNAPSHOT.pom
-- artifact io.openlineage#openlineage-java;0.0.1-SNAPSHOT!openlineage-java.jar:
file:/home/jovyan/.m2/repository/io/openlineage/openlineage-java/0.0.1-SNAPSHOT/openlineage-java-0.0.1-SNAPSHOT.jar
==== local-ivy-cache: tried
/home/jovyan/.ivy2/local/io.openlineage/openlineage-java/0.0.1-SNAPSHOT/ivys/ivy.xml
-- artifact io.openlineage#openlineage-java;0.0.1-SNAPSHOT!openlineage-java.jar:
/home/jovyan/.ivy2/local/io.openlineage/openlineage-java/0.0.1-SNAPSHOT/jars/openlineage-java.jar
==== central: tried
https://repo1.maven.org/maven2/io/openlineage/openlineage-java/0.0.1-SNAPSHOT/openlineage-java-0.0.1-SNAPSHOT.pom
-- artifact io.openlineage#openlineage-java;0.0.1-SNAPSHOT!openlineage-java.jar:
https://repo1.maven.org/maven2/io/openlineage/openlineage-java/0.0.1-SNAPSHOT/openlineage-java-0.0.1-SNAPSHOT.jar
==== spark-packages: tried
https://repos.spark-packages.org/io/openlineage/openlineage-java/0.0.1-SNAPSHOT/openlineage-java-0.0.1-SNAPSHOT.pom
-- artifact io.openlineage#openlineage-java;0.0.1-SNAPSHOT!openlineage-java.jar:
https://repos.spark-packages.org/io/openlineage/openlineage-java/0.0.1-SNAPSHOT/openlineage-java-0.0.1-SNAPSHOT.jar
::::::::::::::::::::::::::::::::::::::::::::::
:: UNRESOLVED DEPENDENCIES ::
::::::::::::::::::::::::::::::::::::::::::::::
:: io.openlineage#openlineage-java;0.0.1-SNAPSHOT: not found
::::::::::::::::::::::::::::::::::::::::::::::
:: USE VERBOSE OR DEBUG MESSAGE LEVEL FOR MORE DETAILS
Exception in thread "main" java.lang.RuntimeException: [unresolved dependency: io.openlineage#openlineage-java;0.0.1-SNAPSHOT: not found]
at org.apache.spark.deploy.SparkSubmitUtils$.resolveMavenCoordinates(SparkSubmit.scala:1429)
at org.apache.spark.deploy.DependencyUtils$.resolveMavenDependencies(DependencyUtils.scala:54)
at org.apache.spark.deploy.SparkSubmit.prepareSubmitEnvironment(SparkSubmit.scala:308)
at org.apache.spark.deploy.SparkSubmit.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:894)
at org.apache.spark.deploy.SparkSubmit.doRunMain$1(SparkSubmit.scala:180)
at org.apache.spark.deploy.SparkSubmit.submit(SparkSubmit.scala:203)
at org.apache.spark.deploy.SparkSubmit.doSubmit(SparkSubmit.scala:90)
at org.apache.spark.deploy.SparkSubmit$$anon$2.doSubmit(SparkSubmit.scala:1039)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:1048)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
@Sbargaoui: Thanks for giving the spark integration a try! Would you like to update the spark docs to add the OpenLineageSparkListener?
This was just fixed- https://github.com/OpenLineage/OpenLineage/commit/21d1f0fbb2360d9e24e7c1eed0ea7c42cfffe73e#diff-2d71abf5c8056da373d83fdea76243c73343df1936fafe4cfa5bc6adb620c2af
| gharchive/issue | 2021-09-08T16:23:49 | 2025-04-01T04:32:55.349225 | {
"authors": [
"Sbargaoui",
"collado-mike",
"wslulciuc"
],
"repo": "OpenLineage/OpenLineage",
"url": "https://github.com/OpenLineage/OpenLineage/issues/259",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.