I have the following query in MySQL:

```
SELECT COALESCE(sum(COALESCE(amount, 0)), 0) as sum_amount,
       COALESCE(sum(COALESCE(fees_amount, 0)), 0) as sum_fees_amount
FROM incoming_operation
WHERE status = 'CLEARED'
  and merchant_id = ?
  and shop_id IN (?)
  and currency = ?
  and cycle_id is null
FOR UPDATE
```

where `shop_id, currency, status` and `cycle_id` are indexed:

```
CREATE INDEX payout_operation_update ON incoming_operation(shop_id,currency,status);
CREATE INDEX cycle_id_index ON incoming_operation (cycle_id);
```

The value of `cycle_id` is mostly NULL in the DB, and the values of `shop_id, currency, status` are unique in the DB.

When I ran the above query, it ran for a very long time (2 hours) and I got a lock where other operations couldn't update any row in the DB. From running the query with EXPLAIN I've seen that it used both indexes: `payout_operation_update (shop_id, currency, status)` and `cycle_id_index`.

Can I assume that using only the `payout_operation_update (shop_id, currency, status)` index will make the query quicker? My assumption is that because `cycle_id_index` was used, all other rows (whose `cycle_id` is null) were also locked, and that using only `payout_operation_update` will lock only the relevant rows (those that match the `shop_id, currency, status` filter). Is that assumption valid?
My code prints FAIL as follows, and I need someone to explain the difference. I want to know the difference between

```c
__asm__ __volatile__( "addl %1,%0;"
                      :"=r"(sum)
                      :"r"(add1),"r"(add2) );
```

and

```c
__asm__ ( "addl %%ebx, %%eax;"
          : "=a" (sum)
          : "a" (add1), "b" (add2) );
```

My source code:

```c
int sum = 0;
int add1 = 100;
int add2 = 200;

#ifdef __x86_64__
__asm__ __volatile__( "addl %1,%0;"
                      :"=r"(sum)
                      :"r"(add1),"r"(add2) );
// __asm__ ( "addl %%ebx, %%eax;"
//           : "=a" (sum)
//           : "a" (add1), "b" (add2) );
#else
sum = add1 + add2;
#endif
```
The difference between two styles of inline ASM
|c|inline-assembly|
This example is not a good one, because `java.lang.NullPointerException` could be just about anything, and therefore one can't even tell from such an example what the culprit or the remedy would be. Usually one has to check for files not being checked into version control while being present locally, which already catches most of the portability issues. For example, this would conditionally load a file, or log a proper warning: ``` File keystoreConfig = rootProject.file('keystore.properties'); if (keystoreConfig.exists()) { def keystore = new Properties() def is = new FileInputStream(keystoreConfig) keystore.load(is) // some values are being assigned in here. is.close() } else { println "file missing: ${keystoreConfig}" } ``` The JRE doesn't matter; one should rather check that the JDK has the expected version. Generally speaking, one has to make sure that the build environment is "sane".
I am using JavaFX for the first time for my homework assignments. So far, I have had around 5 homework questions that use JavaFX and only one has worked. The others all just show a white screen with two buttons, one saying "Switch to Primary View" and one saying "Switch to Secondary View". I have no idea what is causing this or how to fix it. Are there any resources I can use to learn more about what is causing this issue, or any help someone can offer? I am using JDK 21. My code was giving me an error about Stage requiring `transitive`; someone else on here asked about that, so I did what the comment said and the error is gone, but it didn't change the output.
JavaFX build generating a blank GUI with Primary View and Secondary View buttons
|maven|javafx|
In my code

```python
@telebot.TeleBot.callback_query_handler(func=lambda callback: True)
def answer(callback):
```

I get the error:

```
TypeError: TeleBot.callback_query_handler() missing 1 required positional argument: 'self'
```

Python 3.10, telebot 0.0.5.

What can I do to fix this problem? I've written my first bot for Telegram and I need buttons. I saw on YouTube that this code works: `@client.callback_query_handlers(func=lambda call: True)`, but this code gives me another error:

```
TypeError: 'list' object is not callable
```

I can't understand how I can use `callback_query_handler` in Python 3.10 and up.
How to use callback_query_handler in Python 3.10
|linux|
`math.gcd()` is certainly a Python shim over a library function that is running as machine code (i.e. compiled from "C" code), not a function being run by the Python interpreter. See also: [Where are math.py and sys.py?](https://stackoverflow.com/questions/18857355/where-are-math-py-and-sys-py) Update: This should be it `math_gcd(PyObject *module, PyObject * const *args, Py_ssize_t nargs)` in [`mathmodule.c`](https://github.com/python/cpython/blob/main/Modules/mathmodule.c) and it calls `_PyLong_GCD(PyObject *aarg, PyObject *barg)` in [`longobject.c`](https://github.com/python/cpython/blob/main/Objects/longobject.c) which apparently uses [Lehmer's GCD algorithm](https://en.wikipedia.org/wiki/Lehmer's_GCD_algorithm) The code is smothered in housekeeping operations though. ~~~ lang-C PyObject * _PyLong_GCD(PyObject *aarg, PyObject *barg) { PyLongObject *a, *b, *c = NULL, *d = NULL, *r; stwodigits x, y, q, s, t, c_carry, d_carry; stwodigits A, B, C, D, T; int nbits, k; digit *a_digit, *b_digit, *c_digit, *d_digit, *a_end, *b_end; a = (PyLongObject *)aarg; b = (PyLongObject *)barg; if (_PyLong_DigitCount(a) <= 2 && _PyLong_DigitCount(b) <= 2) { Py_INCREF(a); Py_INCREF(b); goto simple; } /* Initial reduction: make sure that 0 <= b <= a. 
*/ a = (PyLongObject *)long_abs(a); if (a == NULL) return NULL; b = (PyLongObject *)long_abs(b); if (b == NULL) { Py_DECREF(a); return NULL; } if (long_compare(a, b) < 0) { r = a; a = b; b = r; } /* We now own references to a and b */ Py_ssize_t size_a, size_b, alloc_a, alloc_b; alloc_a = _PyLong_DigitCount(a); alloc_b = _PyLong_DigitCount(b); /* reduce until a fits into 2 digits */ while ((size_a = _PyLong_DigitCount(a)) > 2) { nbits = bit_length_digit(a->long_value.ob_digit[size_a-1]); /* extract top 2*PyLong_SHIFT bits of a into x, along with corresponding bits of b into y */ size_b = _PyLong_DigitCount(b); assert(size_b <= size_a); if (size_b == 0) { if (size_a < alloc_a) { r = (PyLongObject *)_PyLong_Copy(a); Py_DECREF(a); } else r = a; Py_DECREF(b); Py_XDECREF(c); Py_XDECREF(d); return (PyObject *)r; } x = (((twodigits)a->long_value.ob_digit[size_a-1] << (2*PyLong_SHIFT-nbits)) | ((twodigits)a->long_value.ob_digit[size_a-2] << (PyLong_SHIFT-nbits)) | (a->long_value.ob_digit[size_a-3] >> nbits)); y = ((size_b >= size_a - 2 ? b->long_value.ob_digit[size_a-3] >> nbits : 0) | (size_b >= size_a - 1 ? (twodigits)b->long_value.ob_digit[size_a-2] << (PyLong_SHIFT-nbits) : 0) | (size_b >= size_a ? (twodigits)b->long_value.ob_digit[size_a-1] << (2*PyLong_SHIFT-nbits) : 0)); /* inner loop of Lehmer's algorithm; A, B, C, D never grow larger than PyLong_MASK during the algorithm. 
*/ A = 1; B = 0; C = 0; D = 1; for (k=0;; k++) { if (y-C == 0) break; q = (x+(A-1))/(y-C); s = B+q*D; t = x-q*y; if (s > t) break; x = y; y = t; t = A+q*C; A = D; B = C; C = s; D = t; } if (k == 0) { /* no progress; do a Euclidean step */ if (l_mod(a, b, &r) < 0) goto error; Py_SETREF(a, b); b = r; alloc_a = alloc_b; alloc_b = _PyLong_DigitCount(b); continue; } /* a, b = A*b-B*a, D*a-C*b if k is odd a, b = A*a-B*b, D*b-C*a if k is even */ if (k&1) { T = -A; A = -B; B = T; T = -C; C = -D; D = T; } if (c != NULL) { assert(size_a >= 0); _PyLong_SetSignAndDigitCount(c, 1, size_a); } else if (Py_REFCNT(a) == 1) { c = (PyLongObject*)Py_NewRef(a); } else { alloc_a = size_a; c = _PyLong_New(size_a); if (c == NULL) goto error; } if (d != NULL) { assert(size_a >= 0); _PyLong_SetSignAndDigitCount(d, 1, size_a); } else if (Py_REFCNT(b) == 1 && size_a <= alloc_b) { d = (PyLongObject*)Py_NewRef(b); assert(size_a >= 0); _PyLong_SetSignAndDigitCount(d, 1, size_a); } else { alloc_b = size_a; d = _PyLong_New(size_a); if (d == NULL) goto error; } a_end = a->long_value.ob_digit + size_a; b_end = b->long_value.ob_digit + size_b; /* compute new a and new b in parallel */ a_digit = a->long_value.ob_digit; b_digit = b->long_value.ob_digit; c_digit = c->long_value.ob_digit; d_digit = d->long_value.ob_digit; c_carry = 0; d_carry = 0; while (b_digit < b_end) { c_carry += (A * *a_digit) - (B * *b_digit); d_carry += (D * *b_digit++) - (C * *a_digit++); *c_digit++ = (digit)(c_carry & PyLong_MASK); *d_digit++ = (digit)(d_carry & PyLong_MASK); c_carry >>= PyLong_SHIFT; d_carry >>= PyLong_SHIFT; } while (a_digit < a_end) { c_carry += A * *a_digit; d_carry -= C * *a_digit++; *c_digit++ = (digit)(c_carry & PyLong_MASK); *d_digit++ = (digit)(d_carry & PyLong_MASK); c_carry >>= PyLong_SHIFT; d_carry >>= PyLong_SHIFT; } assert(c_carry == 0); assert(d_carry == 0); Py_INCREF(c); Py_INCREF(d); Py_DECREF(a); Py_DECREF(b); a = long_normalize(c); b = long_normalize(d); } Py_XDECREF(c); Py_XDECREF(d); simple: 
assert(Py_REFCNT(a) > 0); assert(Py_REFCNT(b) > 0); /* Issue #24999: use two shifts instead of ">> 2*PyLong_SHIFT" to avoid undefined behaviour when LONG_MAX type is smaller than 60 bits */ #if LONG_MAX >> PyLong_SHIFT >> PyLong_SHIFT /* a fits into a long, so b must too */ x = PyLong_AsLong((PyObject *)a); y = PyLong_AsLong((PyObject *)b); #elif LLONG_MAX >> PyLong_SHIFT >> PyLong_SHIFT x = PyLong_AsLongLong((PyObject *)a); y = PyLong_AsLongLong((PyObject *)b); #else # error "_PyLong_GCD" #endif x = Py_ABS(x); y = Py_ABS(y); Py_DECREF(a); Py_DECREF(b); /* usual Euclidean algorithm for longs */ while (y != 0) { t = y; y = x % y; x = t; } #if LONG_MAX >> PyLong_SHIFT >> PyLong_SHIFT return PyLong_FromLong(x); #elif LLONG_MAX >> PyLong_SHIFT >> PyLong_SHIFT return PyLong_FromLongLong(x); #else # error "_PyLong_GCD" #endif error: Py_DECREF(a); Py_DECREF(b); Py_XDECREF(c); Py_XDECREF(d); return NULL; } ~~~
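Stripped of the reference-counting housekeeping, the fallback at the end of `_PyLong_GCD` is just the familiar Euclidean loop (the C code's comment even calls it the "usual Euclidean algorithm for longs"). A minimal Python sketch of that final section:

```python
def gcd(a, b):
    # Mirrors the fallback loop at the end of _PyLong_GCD:
    # take absolute values, then repeatedly replace (x, y) with (y, x mod y).
    x, y = abs(a), abs(b)
    while y != 0:
        x, y = y, x % y
    return x

print(gcd(252, 105))  # 21, same as math.gcd(252, 105)
```

The Lehmer machinery above it exists only to make this loop cheap for multi-digit integers; for small operands CPython drops straight into this plain version.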
For my scenario, I wound up disabling `create_before_destroy` (changing from true to false). Otherwise, I could not figure out where to apply the other solutions and terraform state did not show `deposed` nor `tainted` anywhere.
Is there a penalty to calling `with_columns` many times in Polars? Does it lead to dataframe "fragmenting"? EDIT: I don't mean to distract with the term "fragmenting". My real question is: is there any performance penalty to calling `with_columns` many times instead of calling `with_columns` once with many columns?
I'm trying to understand the distinction between declarations and definitions in C++. I've read various answers here and consulted the C++ Standard, but I'm still confused about a few points: 1. Some explanations suggest that definitions allocate memory to the variable, implying that if memory isn't allocated, it's considered a declaration. Is this always the case? 2. Which phase of the compilation process—Compiler or Linker—is primarily associated with declarations and definitions? Here's a summary of my understanding so far: **Declaration**: This informs the compiler about the existence of a variable or function, including its name and type, but doesn't specify memory allocation. **Definition**: This not only declares the existence of a variable or function but also provides the implementation and how much memory a variable needs but it doesn't allocate memory unless it is used. Given the following code snippet similar to what we've discussed in lectures: ```cpp class Point { // declaration public: int x; // declaration int y; // declaration void setPoint(int x, int y); // declaration }; int main() { Point p1; // definition }; ``` My understanding is that everything in this code snippet should be considered a definition except for `void setPoint(int x, int y);`. However, I'm unsure about `int x;` because it seems that memory isn't allocated until an object is created. For instance, [my IDE (Clion) considers int x; a declaration](https://i.stack.imgur.com/57K7L.png), which adds to my confusion. Could someone please clarify how definitions are precisely associated with memory allocation? Additionally, I'd appreciate insights into the specific cases where memory is or isn't allocated for variables declared in classes or function bodies that haven't been invoked.
It's not in the package root itself, it's in the lib folder. It should be: `import { AWSIoTProvider } from '@aws-amplify/pubsub/lib/Providers'`. I think which statement works (just `import { AWSIoTProvider } from '@aws-amplify/pubsub'` vs `import { AWSIoTProvider } from '@aws-amplify/pubsub/lib/Providers'`) changes depending on the aws-amplify package version (I guess the structure was changed), but I am not sure which exact version is the cutoff.
I've been trying to fix my code so that it gives me the right number of nodes in the netlist, but it keeps coming up one node short of what it actually is. This is the part of my code that reads and analyzes the info from the file. I am going insane at this point ;( (I already asked AI to help me fix this issue, and nothing.)

__Example:__

netlist:

```none
V1 1 0 5
R1 1 2 10
R2 2 0 2
```

output desired for matrix M:

```none
1 0 0
0 1 0
0 0 1
```

output with my code:

```none
1 0
0 1
```

```
// Define a struct to represent components in the netlist
struct Component {
    string label;
    int source_node;
    int destination_node;
    double value;
};

// Function to parse a line from the netlist file into a Component struct
Component parseComponent(const string& line) {
    Component comp;
    stringstream ss(line);
    ss >> comp.label >> comp.source_node >> comp.destination_node >> comp.value;
    return comp;
}

// Function to perform circuit analysis and write results to output file
void analyzeCircuit(const vector<Component>& components) {
    // Find the number of nodes
    int numNodes = 0;
    for (const auto& comp : components) {
        numNodes = max(numNodes, max(comp.source_node, comp.destination_node));
    }
    ....

int main() {
    vector<Component> components;

    // Read netlist from file
    ifstream inputFile("netlist.txt");
    if (inputFile.is_open()) {
        string line;
        while (getline(inputFile, line)) {
            // Parse each line of the netlist into a Component struct
            components.push_back(parseComponent(line));
        }
        inputFile.close();

        // Perform circuit analysis
        analyzeCircuit(components);
    } else {
        cout << "Unable to open netlist.txt file." << endl;
    }
    return 0;
}
```
I've just installed C compilers and everything looks normal; however, when I try to run code it takes too long. A simple "hello world" is taking 12 seconds. An online compiler does the same thing in just 2 seconds. I don't know what may be happening. [the code](https://i.stack.imgur.com/bWw8J.png) [the hello world output](https://i.stack.imgur.com/9DhQb.png) Is this normal? Or am I doing something wrong? The only extensions I have are Code Runner and C/C++. I tried disabling every other extension or feature, but it continues this way. I can assure you my PC is good, at least, so I don't think it is at fault here. Thank you for your help.
I need help to understand the time which my simple "hello world" is taking to execute
|c|performance|time|
In my case I got this error on Windows 10 and NordVPN caused the problem. I realize this is probably not the OP's problem but could help others who come across this question. Besides this docker error, I also would get the following error attempting to start a WSL command line: `An operation was attempted on something that is not a socket`. When I turned off my Nord VPN (app still running, just disconnected from VPN) the docker error went away as did the WSL error.
The data structure for this is a [Trie][1] (Prefix Tree): ```php <?php class TrieNode { public $childNode = []; // Associative array to store child nodes public $endOfString = false; // Flag to indicate end of a string } class Trie { private $root; public function __construct() { $this->root = new TrieNode(); } public function insert($string) { if (!empty($string)) { $this->insertRecursive($this->root, $string); } } private function insertRecursive(&$node, $string) { if (empty($string)) { $node->endOfString = true; return; } $firstChar = $string[0]; $remainingString = substr($string, 1); if (!isset($node->childNode[$firstChar])) { $node->childNode[$firstChar] = new TrieNode(); } $this->insertRecursive($node->childNode[$firstChar], $remainingString); } public function commonPrefix() { $commonPrefix = ''; $this->commonPrefixRecursive($this->root, $commonPrefix); return $commonPrefix; } private function commonPrefixRecursive($node, &$commonPrefix) { if (count($node->childNode) !== 1 || $node->endOfString) { return; } $firstChar = array_key_first($node->childNode); $commonPrefix .= $firstChar; $this->commonPrefixRecursive($node->childNode[$firstChar], $commonPrefix); } } // Example usage $trie = new Trie(); $trie->insert("/home/texai/www/app/application/cron/logCron.log"); $trie->insert("/home/texai/www/app/application/jobs/logCron.log"); $trie->insert("/home/texai/www/app/var/log/application.log"); $trie->insert("/home/texai/www/app/public/imagick.log"); $trie->insert("/home/texai/www/app/public/status.log"); echo "Common prefix: " . $trie->commonPrefix() . PHP_EOL; ?> ``` Output: Common prefix: /home/texai/www/app/ [Demo][2] [1]: https://en.wikipedia.org/wiki/Trie [2]: https://onecompiler.com/php/428ve5e2h
Is there a way in VS Code, after I type Ctrl+. and accept a correction with Quick Fix, to automatically run the key combo Alt+F8 to move to the next problem without having to press it myself? I use a spelling extension and correct a word by pressing Ctrl+., then selecting the correct suggestion. However, after this selection I want it to automatically move to the next item in the problem list. Currently it doesn't do this; I have to press Alt+F8, which is the default, to open the next problem. Then I press Ctrl+. again to select which correction applies. I am not sure why VS Code doesn't have an option to move automatically to the next problem after fixing the first. I looked at the settings and there isn't such an option. Visual: 1. Have a file with problems and press Alt+F8 to navigate to the first problem [![enter image description here][1]][1] 2. Press Ctrl+. and select a suggested correction from the list with a mouse click or Enter. [![enter image description here][2]][2] 3. After the correction with Ctrl+., the cursor stays on the corrected word and one has to manually press Alt+F8 again to go to the next problem. [![enter image description here][3]][3] [1]: https://i.stack.imgur.com/I2tWE.png [2]: https://i.stack.imgur.com/OdCXY.png [3]: https://i.stack.imgur.com/tV13w.png
Some SO users may think that this question is "opinion-based" and try to close it. I think this is a valid question, as proven by the fact that the other similar post you linked, [Relative imports for the billionth time](https://stackoverflow.com/questions/14132789/relative-imports-for-the-billionth-time), addresses the same technical problem and has no exact answer. I will therefore give you my technical answer, by no means intended to be "the most appropriate solution". I think that **relative imports in Python are badly designed and best avoided**<sup>1</sup>. My **personal solutions** are either one of the following. ### 1. Use `importmonkey` If you are ok with an extra package, use [`importmonkey`](https://github.com/hirsimaki-markus/importmonkey) to allow a sort of "relative imports" without all the faff. This is in a nutshell just a wrapper of `sys.path.append`, but it makes it more portable. For example: ``` root ├─ subfolder │ └─ subscript.py └─ rootscript.py ``` If I have this in `rootscript.py`: ```python def somerootfunction(): print("I'm a function at the root of the project") ``` I can do this from `subscript.py` to call it: ```python from importmonkey import add_path; add_path("../") from rootscript import somerootfunction somerootfunction() ``` Not ideal, but it works. Linting and debugging are retained. I find it much simpler, effective, and easy to communicate than "the _appropriate solutions_"<sup>2</sup>. ### 2. Use symbolic links **Alternatively**, _in limited cases and after having considered a folder restructure or the other alternatives_<sup>2</sup>, use **symbolic links**. You can link an entire parent folder placing a link in some sub folder, so Python can finally access it. If you use a **relative path** during the link creation, this will also make the project portable. Editing the linked code files by opening the items from the link will simply modify the source files. 
Finally, [Git treats symbolic links as just another file that points to a location](https://stackoverflow.com/a/954575/3873799), so there is no issue with versioning. The main downside is that this solution is OS-dependent (i.e. a symbolic link created on Linux won't be read as one if you open your project in Windows). There aren't workarounds to this, as far as I'm aware. It hasn't been an issue for my projects. - On Linux, I personally use [vscode-symlink](https://marketplace.visualstudio.com/items?itemName=anbuselvan.vscode-symlink) to make it easy from VSCode. Otherwise just use `ln -s source_path destination_path`. - On Windows, you can use `mklink`. If you're using it from a non-cmd terminal (e.g. the VSCode terminal) remember to [prepend `cmd /c`][1] before using it. E.g., if I'm in the sub-folder and I want to link to a parent folder file, `cmd /c mklink rootscript.py ..\rootscript.py`. You can link to a directory [with the `/D` argument][2]. ________________________ <sup> <sup>1</sup> I think this is proven by the fact that there are many proposed solutions and much confusion on the subject. I'm listing some in **note 2** below. As a personal addendum, I do believe that this is another proof that Python is a good and easy scripting language, but a flawed and hard to learn programming language. I don't think it should be used for building large programs. For my large software projects I use other languages, and if I need Python functions or utilities (likely e.g. when dealing with Machine Learning), I use interop methods to call separate, small Python scripts.</sup> <sup><sup>2</sup> Probably, "the appropriate" solution is to be considered installing the project with a [`pyproject.toml`](https://packaging.python.org/en/latest/guides/writing-pyproject-toml/) file and/or using [`flit`](https://flit.pypa.io/en/stable/rationale.html)/[`poetry`](https://python-poetry.org/docs/). This is what experienced people would probably suggest. 
However, there is still a lot of reading and learning required to do it. Also, there is much confusion on this solution, [due to this having replaced older standard solutions](https://stackoverflow.com/a/66472800/3873799). Others [believe](https://www.reddit.com/r/learnpython/comments/mjithh/comment/gtanb4i/?utm_source=share&utm_medium=web2x&context=3) that [pip install with editable](https://stackoverflow.com/a/76842303/3873799) is sufficient. Some suggest editing `PYTHONPATH`, which I think is downright bad. I think that these solutions are viable but complicate life immensely, especially for newcomers. Some of these will actually work but introduce issues or complications with either debugging and/or linting the import source, making the programming experience far from ideal.</sup> [1]: https://superuser.com/a/98480/327009 [2]: https://superuser.com/a/1766340/327009
I'm trying to use a native API. I have the following code to get the 5 most used apps. It's a native Android module (Kotlin) called from TypeScript, because I can't call these APIs directly from React Native:

```
fun getUsageData(startTime: Double, endTime: Double, successCallback: Callback) {
    val usageStatsManager = reactApplicationContext.getSystemService(Context.USAGE_STATS_SERVICE) as UsageStatsManager

    // Query usage stats
    val usageStatsList = usageStatsManager.queryUsageStats(UsageStatsManager.INTERVAL_DAILY, startTime.toLong(), endTime.toLong())

    // Sort usage stats by total time in foreground
    usageStatsList.sortByDescending { it.totalTimeInForeground }

    // Extract package names of top 5 used apps
    val topApps: WritableArray = WritableNativeArray()
    for (i in 0 until minOf(5, usageStatsList.size)) {
        val packageName = usageStatsList[i].packageName
        topApps.pushString(packageName)
    }

    // Pass the top apps list back to React Native
    successCallback.invoke(topApps)
}
```

But my list is always empty in TypeScript and I don't know why: `LOG Top 5 used apps: []`

And this is my TypeScript code:

```
const getUsageData = () => {
    const startTime = new Date().getTime() - (7 * 24 * 60 * 60 * 1000);
    const endTime = new Date().getTime();

    UsageStatsModule.getUsageData(Number(startTime), Number(endTime), (topApps: any) => {
        console.log('Top 5 used apps:', topApps);
    });
};
```

I don't know where to start with debugging, because I get no error and I haven't done mobile development before.
Edit: I made a few changes to get more information, but it wasn't conclusive. In the manifest.xml:

```
<uses-permission android:name="android.permission.PACKAGE_USAGE_STATS" />
```

and in the module file:

```
@ReactMethod
fun getUsageData(startTime: Double, endTime: Double, successCallback: Callback) {
    try {
        val usageStatsManager: UsageStatsManager = this.reactApplicationContext.getSystemService(Context.USAGE_STATS_SERVICE) as UsageStatsManager

        val cal: Calendar = Calendar.getInstance()
        cal.add(Calendar.DAY_OF_MONTH, -1)

        // Query usage stats
        val usageStatsList = usageStatsManager.queryUsageStats(UsageStatsManager.INTERVAL_DAILY, cal.timeInMillis, System.currentTimeMillis())
            ?: throw SecurityException("Permission denied or usage stats not available")

        // Sort usage stats by total time in foreground
        usageStatsList.sortByDescending { it.totalTimeInForeground }

        Log.d("a", "a : " + usageStatsList.size)

        successCallback.invoke(":" + startTime.toLong() + "-" + cal.timeInMillis + "-" + endTime.toLong() + "-" + System.currentTimeMillis())
        return;

        // Extract package names of top 5 used apps
        val topApps: WritableArray = WritableNativeArray()
        for (i in 0..usageStatsList.size - 1) {
            // val packageName = usageStatsList[i].packageName
            // topApps.pushString(packageName)
        }

        // Pass the top apps list back to React Native
        successCallback.invoke(topApps)
    } catch (e: Exception) {
        // Handle exceptions
        e.printStackTrace()
        // Notify React Native about the error
        // You might want to handle this differently based on your app's requirements
        successCallback.invoke(WritableNativeArray().apply { pushString("Error: ${e.message}") })
    }
}
```
I want a query function that I can pass into a `findAll` query to get dynamic data. Example: if I want to get data like `age > 5`, then I can get it directly using query params. This is just an example; in the end I want a query function that can generate a dynamic query from the query params. I have made this function for dynamic sort and pagination:

```
const usersqquery = (q) => {
  const limit = q?.limit * 1 || 200;
  const page = q?.page * 1 || 1;
  const skip = (page - 1) * limit;
  const sort = q?.sort || "createdAt";
  const sortBy = q?.sortBy || "DESC";

  if (q?.limit) {
    return { order: [[sort, sortBy]], limit, offset: skip };
  }
  return { order: [[sort, sortBy]] };
};
```

I want a more accurate function for generating a dynamic query.
How can I make a dynamic query in Sequelize with Node.js
|node.js|express|sequelize.js|
For structural pattern matching, the class pattern requires parentheses around the class name. For example:

<!-- language: python -->

    x = 0.0
    match x:
        case int():
            print('I')
        case float():
            print('F')
        case _:
            print('Other')
# **Guys I'm about to lose it**

```
plugins {
    id("com.android.application")
    id("org.jetbrains.kotlin.android")
    id("com.google.devtools.ksp")
    id("com.google.dagger.hilt.android") version "2.49" apply false
}

android {
    namespace = "com.example.notesapp"
    compileSdk = 34

    defaultConfig {
        multiDexEnabled = true
        applicationId = "com.example.notesapp"
        minSdk = 30
        targetSdk = 34
        versionCode = 1
        versionName = "1.0"

        testInstrumentationRunner = "androidx.test.runner.AndroidJUnitRunner"
        vectorDrawables {
            useSupportLibrary = true
        }
    }

    buildTypes {
        release {
            isMinifyEnabled = false
            proguardFiles(
                getDefaultProguardFile("proguard-android-optimize.txt"),
                "proguard-rules.pro"
            )
        }
    }
    compileOptions {
        sourceCompatibility = JavaVersion.VERSION_1_8
        targetCompatibility = JavaVersion.VERSION_1_8
    }
    kotlinOptions {
        jvmTarget = "1.8"
    }
    buildFeatures {
        compose = true
    }
    composeOptions {
        kotlinCompilerExtensionVersion = "1.5.1"
    }
    packaging {
        resources {
            excludes += "/META-INF/{AL2.0,LGPL2.1}"
        }
    }
}

dependencies {
    implementation("androidx.compose.material3:material3-android:1.2.1")
    implementation("androidx.compose.material3:material3-desktop:1.2.1")
    implementation("androidx.tv:tv-material:1.0.0-alpha10")
    implementation("androidx.wear.compose:compose-material:1.3.0")
    implementation("androidx.wear.compose:compose-material3:1.0.0-alpha19")

    val room_version = "2.6.1"
    val multidex_version = "2.0.1"

    implementation("androidx.core:core-ktx:1.12.0")
    implementation("androidx.lifecycle:lifecycle-runtime-ktx:2.7.0")
    implementation("androidx.activity:activity-compose:1.8.2")
    implementation(platform("androidx.compose:compose-bom:2023.08.00"))
    implementation("androidx.compose.ui:ui")
    implementation("androidx.compose.ui:ui-graphics")
    implementation("androidx.compose.ui:ui-tooling-preview")
    implementation("androidx.compose.material3:material3")
    testImplementation("junit:junit:4.13.2")
    androidTestImplementation("androidx.test.ext:junit:1.1.5")
    androidTestImplementation("androidx.test.espresso:espresso-core:3.5.1")
    androidTestImplementation(platform("androidx.compose:compose-bom:2023.08.00"))
    androidTestImplementation("androidx.compose.ui:ui-test-junit4")
    debugImplementation("androidx.compose.ui:ui-tooling")
    debugImplementation("androidx.compose.ui:ui-test-manifest")
    implementation("androidx.multidex:multidex:$multidex_version")

    // Compose dependencies
    implementation("androidx.lifecycle:lifecycle-viewmodel-compose:2.7.0")
    implementation("androidx.navigation:navigation-compose:2.7.7")
    implementation("androidx.compose.material:material-icons-extended:")
    implementation("androidx.hilt:hilt-navigation-compose:1.2.0")
    implementation("androidx.compose.material:material:1.6.4")

    // Coroutines
    implementation("org.jetbrains.kotlinx:kotlinx-coroutines-android:1.7.3")

    // Dagger - Hilt
    implementation("com.google.dagger:hilt-android:2.49")
    ksp("com.google.dagger:hilt-android-compiler:2.44")

    // Room
    implementation("androidx.room:room-runtime:$room_version")
    ksp("androidx.room:room-compiler:$room_version")
    // optional - Kotlin Extensions and Coroutines support for Room
    implementation("androidx.room:room-ktx:$room_version")
}
```

Every time I run the project in Android Studio it gives me this error:

```
Caused by: org.gradle.workers.internal.DefaultWorkerExecutor$WorkExecutionException: A failure occurred while executing com.android.build.gradle.internal.tasks.CheckDuplicatesRunnable
```

I tried checking for duplicate dependencies and toggled offline mode; still nothing works.
Every time I run the app it gives me an error related to Gradle
|android|kotlin|gradle|
I fell into a gotcha:

<!-- language: python -->

    match 0.0:
        case int:
            print(1)

effectively redefines `int`, so the next time I tried the match I posted, it failed since _my_ `int` was shadowing the built-in.
The `BYROW` function applies a `LAMBDA` function to **each row** of a given array or range. As such, each iteration is referencing an entire row, even if the array only contains a single column. When using `=BYROW(SEQUENCE(7), LAMBDA(r, ...))`, the `SEQUENCE(7)` function returns an **array object** consisting of {1;2;3;4;5;6;7}. While you might expect the **r** variable to return a single numeric value from 1 to 7 for each row in the array, it actually returns another array object consisting of a single value from {1} to {7}. When the resulting array is then passed to either the *rows* or *columns* argument of the subsequent `SEQUENCE` function, Excel interprets this as an attempt to generate an array of arrays (a separate set of results for each value in the array), which is currently not supported. As such, only the first result will be returned. For example, `SEQUENCE(1, {5})` will only return 1. When using `=BYROW(A2#, LAMBDA(r, ...))`, where cell `A2` contains the formula `=SEQUENCE(7)`, the correct results are returned because `A2#` is a **range object** referring to a single column that exists in the worksheet. Each row in the range contains a single cell only; and, when a single cell is referenced, its value is returned (ie: 5 instead of {5}). Alternatively, the `MAP` function applies a `LAMBDA` function to **each value** of a given array. When using `=MAP(SEQUENCE(7), LAMBDA(n, ...))`, the **n** variable does in fact return a single numeric value from 1 to 7, because it processes each item in the array individually. This is why the `MAP` function can be a better option than `BYROW` or `BYCOL` when working with 1D arrays. Having said that, there are ways of overcoming the described issue with `BYROW` when working with a single column array. The `INDEX` function, as well as the implicit intersection operator `@`, can be used to reference the first item in each row and return a single value. 
For example, the **Try1A** formula mentioned in the OP can be modified to work as follows:

    =BYROW(SEQUENCE(7), LAMBDA(r, LET(n, INDEX(r, 1), SUM(SEQUENCE(, n, 1 + n / 10)))))

-OR-

    =BYROW(SEQUENCE(7), LAMBDA(r, SUM(SEQUENCE(, @r, 1 + @r / 10))))

**Note:** when using the implicit intersection operator method shown above, you may be prompted with the following message:

> This formula is not supported by some older versions of Excel.
>
> Would you like to use this variation instead?
>
> =@BYROW(SEQUENCE(7), LAMBDA(r, SUM(SEQUENCE(, @r, 1 + @r / 10))))

Click **No** to proceed.
Declaration vs definition in relation to memory
|c++|
null
Why do we have interfaces? ======== From a theoretical point of view, both interface implementation and class inheritance solve the same problem: They allow you to define a [subtype relationship](https://en.wikipedia.org/wiki/Subtyping) between types. So why do we have both in C#? Why do we need interfaces at all? Can't we just define an interface as an abstract class, just as we do, for example, in C++? The reason for this is [the diamond problem](https://en.wikipedia.org/wiki/Multiple_inheritance#The_diamond_problem): <sub>([Image source](https://commons.wikimedia.org/wiki/File:Diamond_inheritance.svg))</sub> [![enter image description here][1]][1] If both `B` and `C` implement `A.DoSomething()` differently, which implementation should `D` inherit? That's a hard problem, and the Java as well as the C# designers decided to avoid it by allowing multiple inheritance only for special base types which do not include any implementation. They decided to call these special base types **interfaces**. So, there is no "principle of interface". Interfaces are just a "tool" to solve a particular problem. So why do we need default implementations? === Backwards compatibility. You wrote a vastly successful library used by thousands of developers worldwide. Your library contains some interface `I`, and now you decide that you need an extra method `M` on it. The problem is: * You can't add another method `M` to `I`, because that would break existing classes implementing `I` (because they don't implement `M`), and * you can't change `I` to an abstract base class, because that, as well, would break existing classes implementing `I`, and you will lose the ability to do multiple inheritance. So how do default implementations avoid the diamond problem? 
=== By not inheriting those default methods (example inspired by the one in [this article](https://learn.microsoft.com/en-us/dotnet/csharp/language-reference/proposals/csharp-8.0/default-interface-methods), see the full article for some interesting corner cases): interface I1 { void M() { Console.WriteLine("I1.M"); } // default method } interface I2 { void M() { Console.WriteLine("I2.M"); } // default method } class C : I1, I2 { } class Program { static void Main(string[] args) { // c, i1 and i2 reference the same object C c = new C(); I1 i1 = c; I2 i2 = c; i1.M(); // prints "I1.M" i2.M(); // prints "I2.M" c.M(); // compile error: class 'C' does not contain a member 'M' } } Can't I just use extension methods instead of default interface methods? ===== Default interface methods are not inherited, but they are still *virtual*. Now, what does that mean? It means that a class can *override* the behavior of a default interface method: ``` interface I { void M() { Console.WriteLine("I.M"); } // default method } class C1 : I { } class C2 : I { void I.M() { Console.WriteLine("C2.M"); } } class Program { static void Main(string[] args) { I i1 = new C1(); I i2 = new C2(); i1.M(); // prints "I.M" i2.M(); // prints "C2.M" } } ``` You cannot do that with extension methods. [1]: https://i.stack.imgur.com/Xy5O0.png
```
io.on("connection", (socket) => {
    console.log(`User Connected: ${socket.id}`);

    // joining room
    socket.join(selectedOption);
    socket.broadcast.emit("userConn", `${username} joined ${selectedOption}`);

    socket.on('emitMessage', (data) => {
        if (data.room == selectedOption) {
            socket.broadcast.to(selectedOption).emit('message', { username: data.username, message: data.message });
        }
    });
});
```

For some reason, after the first user has connected, when I log in as the second user, they seem to connect twice, hence the `console.log` call runs twice, making the server-side console look like

    User Connected: pPNqnsAIJAUgDrpkAAAB
    User Connected: yZPm7IuY673r1hkUAAAD
    User Connected: yZPm7IuY673r1hkUAAAD

This is also resulting in the first user receiving the second user's messages twice, because the second user seems to be connected twice.

I was making a simple socket.io chat server, but the second user's messages were being received twice by the first user; upon further observation, I noticed that the second user connects to the server twice.
User is connecting to socket.io server twice
|javascript|node.js|express|socket.io|
null
1. Return a pointer to the `Node` of interest instead of updating the out parameter `head_dest`. 1. The current implementation changes the original list. To create a new list you need to return a pointer to *copy* of the smallest node or NULL. To emphasize that I made the argument constant with `const Node *head`. 1. Finally, you can optimize the implementation of `pairWiseMinimumInNewList_Rec()` by observing that we return NULL in the base case. Otherwise we clone either first or 2nd node, and the recursive case is either 2 nodes ahead or we are done: ``` #include <stdio.h> #include <stdlib.h> #include <string.h> typedef struct Node { int val; struct Node *pt_next; } Node; Node *linked_list_clone(Node *n) { Node *n2 = malloc(sizeof *n2); if(!n2) { printf("malloc failed\n"); exit(1); } memcpy(n2, n, sizeof *n); return n2; } Node *linked_list_create(size_t n, int *vals) { Node *head = NULL; Node **cur = &head; for(size_t i=0; i < n; i++) { *cur = linked_list_clone(&(Node) {vals[i], NULL}); if(!head) head = *cur; cur = &(*cur)->pt_next; } return head; } void linked_list_print(Node *head) { for(; head; head=head->pt_next) printf("%d->", head->val); printf("NULL\n"); } void linked_list_free(Node *head) { while(head) { Node *tmp = head->pt_next; free(head); head=tmp; } } Node *pairWiseMinimumInNewList_Rec(const Node* head) { if(!head) return NULL; return linked_list_clone( &(Node) { (head->pt_next && head->val < head->pt_next->val) || (!head->pt_next) ? head->val : head->pt_next->val, pairWiseMinimumInNewList_Rec(head->pt_next ? head->pt_next->pt_next : NULL) } ); } int main() { Node *head=linked_list_create(7, (int []) {2,1,3,4,5,6,7}); Node* head_dest=pairWiseMinimumInNewList_Rec(head); linked_list_print(head); linked_list_free(head); linked_list_print(head_dest); linked_list_free(head_dest); } ``` Example run that exercises the main two code paths: ``` 2->1->3->4->5->6->7->NULL 1->3->5->7->NULL ```
Installing the python3-tk package worked for me. I'm on Ubuntu running in Parallels on an M1 Mac.

```
sudo apt-get install python3-tk
```
I am following this [tutorial][1] to fetch the auth token for HERE maps. I am able to fetch the token with my iOS app; however, I cannot seem to get the token with Android. I keep getting the error `errorCode: '401300'. Signature mismatch. Authorization signature or client credential is wrong.`

Below is my code to fetch the token:

**HEREOAuthManager.java**

<!-- begin snippet: js hide: false console: true babel: false -->

<!-- language: lang-java -->

    public void fetchOAuthToken(final HereTokenFetchListener callback) {
        String timestamp = String.valueOf(TimeUnit.MILLISECONDS.toSeconds(System.currentTimeMillis()));
        String nonce = Util.randomStringGenerator();

        String grant_type = "grant_type=client_credentials";
        String oauth_consumer_key = "&oauth_consumer_key=" + clientID;
        String oauth_nonce = "&oauth_nonce=" + nonce;
        String oauth_signature_method = "&oauth_signature_method=HMAC-SHA256";
        String oauth_timestamp = "&oauth_timestamp=" + timestamp;
        String oauth_version = "&oauth_version=1.0";

        String paramsString = grant_type + oauth_consumer_key + oauth_nonce + oauth_signature_method + oauth_timestamp + oauth_version;
        String baseString = "POST&" + Util.urlEncode(tokenEndpoint) + "&" + Util.urlEncode(paramsString);

        // Generate signature
        String secret = Util.urlEncode(clientSecret) + "&";
        String signature = Util.calculateHmacSha256(secret, baseString);

        // Construct Authorization header
        String authString = "OAuth oauth_consumer_key=\"" + clientID
                + "\",oauth_nonce=\"" + nonce
                + "\",oauth_signature=\"" + Util.urlEncode(signature)
                + "\",oauth_signature_method=\"HMAC-SHA256\","
                + "oauth_timestamp=\"" + timestamp
                + "\",oauth_version=\"1.0\"";

        // Create HTTP client
        OkHttpClient client = new OkHttpClient();

        // Create request body
        RequestBody requestBody = new FormBody.Builder()
                .add("grant_type", "client_credentials")
                .build();

        // Create HTTP request
        Request request = new Request.Builder()
                .url(tokenEndpoint)
                .post(requestBody)
                .addHeader("Authorization", authString)
                .addHeader("Content-Type", "application/x-www-form-urlencoded")
                .build();
    }

<!-- end snippet -->

**Utils.java**

<!-- begin snippet: js hide: false console: true babel: false -->

<!-- language: lang-java -->

    public static String calculateHmacSha256(String secret, String data) {
        try {
            String secretWithAmpersand = secret + "&";
            SecretKeySpec secretKeySpec = new SecretKeySpec(secret.getBytes(StandardCharsets.UTF_8), "HmacSHA256");
            Mac mac = Mac.getInstance("HmacSHA256");
            mac.init(secretKeySpec);
            byte[] hmacData = mac.doFinal(data.getBytes(StandardCharsets.UTF_8));
            String baseEncodedSignature = Base64.getEncoder().encodeToString(hmacData);
            return urlEncode(baseEncodedSignature);
        } catch (NoSuchAlgorithmException | InvalidKeyException e) {
            e.printStackTrace();
            return null;
        }
    }

    public static String urlEncode(String stringToEncode) {
        try {
            return java.net.URLEncoder.encode(stringToEncode, "UTF-8")
                    .replace("+", "%20") // Replace '+' with '%20'
                    .replace("=", "%3D")
                    .replace("*", "%2A") // Replace '*' with '%2A'
                    .replace("&", "%26")
                    .replace("~", "%7E"); // Replace '~' with '%7E'
        } catch (java.io.UnsupportedEncodingException e) {
            e.printStackTrace();
            return null;
        }
    }

<!-- end snippet -->

What could I be missing? Any help is appreciated

  [1]: https://www.here.com/docs/bundle/identity-and-access-management-developer-guide/page/topics/sdk.html
Signature mismatch. Authorization signature or client credential is wrong with Android
|android|here-api|signature|heremaps-android-sdk|hmacsha256|
A windowed function is processed at the same time the SELECT is. More specifically, this is the order of operations:

1. FROM and JOINs
2. WHERE
3. GROUP BY
4. HAVING
5. SELECT

Notice how SELECT is processed last. Therefore your queries are not the same. Your first query is filtering before the windowed function; the second query is filtering after.
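The difference is easy to demonstrate with a tiny table (an illustration, not the OP's schema; SQLite via Python is used here only so the example is runnable — SQLite needs version 3.25+ for window functions):

```python
import sqlite3

# A WHERE clause filters rows *before* ROW_NUMBER() runs, while wrapping the
# window function in a subquery filters *after* it has been computed.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (grp TEXT, val INTEGER)")
con.executemany("INSERT INTO t VALUES (?, ?)", [("a", 1), ("a", 2), ("b", 0)])

# WHERE runs first, so row numbers are assigned over the grp='a' rows only:
where_first = con.execute(
    "SELECT grp, val, ROW_NUMBER() OVER (ORDER BY val) AS rn "
    "FROM t WHERE grp = 'a'").fetchall()

# Here row numbers are assigned over ALL rows, and only then filtered:
filter_after = con.execute(
    "SELECT * FROM (SELECT grp, val, ROW_NUMBER() OVER (ORDER BY val) AS rn "
    "FROM t) WHERE grp = 'a'").fetchall()

print(where_first)   # [('a', 1, 1), ('a', 2, 2)]
print(filter_after)  # [('a', 1, 2), ('a', 2, 3)]  -- the 'b' row took rn = 1
```

Same rows survive the filter, but the window function sees different inputs, so `rn` differs.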
I'm trying to use a native API. I have the following code to get the 5 most used apps. It's a native Android module called from TypeScript, because I can't call these APIs directly from React Native:

```
fun getUsageData(startTime: Double, endTime: Double, successCallback: Callback) {
    val usageStatsManager = reactApplicationContext.getSystemService(Context.USAGE_STATS_SERVICE) as UsageStatsManager

    // Query usage stats
    val usageStatsList = usageStatsManager.queryUsageStats(UsageStatsManager.INTERVAL_DAILY, startTime.toLong(), endTime.toLong())

    // Sort usage stats by total time in foreground
    usageStatsList.sortByDescending { it.totalTimeInForeground }

    // Extract package names of top 5 used apps
    val topApps: WritableArray = WritableNativeArray()
    for (i in 0 until minOf(5, usageStatsList.size)) {
        val packageName = usageStatsList[i].packageName
        topApps.pushString(packageName)
    }

    // Pass the top apps list back to React Native
    successCallback.invoke(topApps)
}
```

But my list is always empty in TypeScript and I don't know why: `LOG Top 5 used apps: []`

And this is my TypeScript code:

```
const getUsageData = () => {
    const startTime = new Date().getTime() - (7 * 24 * 60 * 60 * 1000);
    const endTime = new Date().getTime();
    UsageStatsModule.getUsageData(Number(startTime), Number(endTime), (topApps : any) => {
        console.log('Top 5 used apps:', topApps);
    });
};
```

I don't know where to start debugging, because I get no error and I haven't done mobile development before.
Edit : I made a few changes to get more information but it wasn't conclusive: In the manifest.xml : <uses-permission android:name="android.permission.PACKAGE_USAGE_STATS" /> and in the module file : @ReactMethod fun getUsageData(startTime: Double, endTime: Double, successCallback: Callback) { try { val usageStatsManager: UsageStatsManager = this.reactApplicationContext.getSystemService(Context.USAGE_STATS_SERVICE) as UsageStatsManager val cal: Calendar = Calendar.getInstance() cal.add(Calendar.DAY_OF_MONTH, -1) // Query usage stats val usageStatsList = usageStatsManager.queryUsageStats(UsageStatsManager.INTERVAL_DAILY, cal.timeInMillis, System.currentTimeMillis()) ?: throw SecurityException("Permission denied or usage stats not available") // Sort usage stats by total time in foreground usageStatsList.sortByDescending { it.totalTimeInForeground } Log.d("a", "a : " + usageStatsList.size) // Extract package names of top 5 used apps val topApps: WritableArray = WritableNativeArray() for(i in 0..usageStatsList.size - 1) { // val packageName = usageStatsList[i].packageName // topApps.pushString(packageName) } // Pass the top apps list back to React Native successCallback.invoke(topApps) } catch (e: Exception) { // Handle exceptions e.printStackTrace() // Notify React Native about the error // You might want to handle this differently based on your app's requirements successCallback.invoke(WritableNativeArray().apply { pushString("Error: ${e.message}") }) } }
# Definition of STDIN, STDOUT, and STDERR

<br>

The kernel of every operating system exposes 3 main **I/O (Input/Output)** streams: **STDIN**, **STDOUT**, and **STDERR**. **STDIN** is the stream through which a process receives its input, **STDOUT** is the stream to which it writes its normal output, and **STDERR** is the stream to which it writes error messages. All services, applications, and components of the OS use these streams to exchange **I/O** information on any OS.

<br>
<br>

# Solution using STDOUT redirection

In the code provided by you, the issue is that you want to process the information produced by the operation of a **Batch** script without redirecting the **STDOUT** stream. The ways in which the **C#** application can read the output of the **Batch** script are **files** (**e.g.** JSON configuration files), **Sockets**, **Pipes**, and the **STDOUT** stream. The aforementioned methods are **Inter-Process Communication** methods (you can check this link for more information: https://stackoverflow.com/a/76196178/16587692). One of these methods must be used because there is no direct communication channel between the **C#** application and the **Batch** script. In this situation the most viable method is to use the **STDOUT** stream.

You can preserve the console colors by changing them through the **C#** application rather than changing them by passing arguments to the **Batch** script. You can also write the output in real time to the desired log file. As a word of caution, avoid reading from the log file while the C# application is writing to it, because that is a **race condition** and you may read partial or inconsistent data.
``` using System.Text; using System.Diagnostics; namespace Test { class Program { static void Main(string[] args) { Operation().Wait(); Console.ReadLine(); } private static void SetPermissions(string file_name) { #pragma warning disable CA1416 // Validate platform compatibility // CHECK IF THE CURRENT OS IS WINDOWS if (System.Runtime.InteropServices.RuntimeInformation.IsOSPlatform(System.Runtime.InteropServices.OSPlatform.Windows) == true) { // GET THE SPECIFIED FILE INFORMATION OF THE SELECTED FILE FileInfo settings_file_info = new FileInfo(file_name); // GET THE ACCESS CONTROL INFORMATION OF THE SELECTED FILE AND STORE THEM IN A 'FileSecurity' OBJECT System.Security.AccessControl.FileSecurity settings_file_security = settings_file_info.GetAccessControl(); // ADD THE ACCESS RULES THAT ALLOW READ, WRITE, AND DELETE PERMISSIONS ON THE SELECTED FILE FOR THE CURRENT USER settings_file_security.AddAccessRule(new System.Security.AccessControl.FileSystemAccessRule(System.Security.Principal.WindowsIdentity.GetCurrent().Name, System.Security.AccessControl.FileSystemRights.Write, System.Security.AccessControl.AccessControlType.Allow)); settings_file_security.AddAccessRule(new System.Security.AccessControl.FileSystemAccessRule(System.Security.Principal.WindowsIdentity.GetCurrent().Name, System.Security.AccessControl.FileSystemRights.Read, System.Security.AccessControl.AccessControlType.Allow)); settings_file_security.AddAccessRule(new System.Security.AccessControl.FileSystemAccessRule(System.Security.Principal.WindowsIdentity.GetCurrent().Name, System.Security.AccessControl.FileSystemRights.Delete, System.Security.AccessControl.AccessControlType.Allow)); // UPDATE THE ACCESS CONTROL SETTINGS OF THE FILE BY SETTING THE // MODIFIED ACCESS CONTROL SETTINGS AS THE CURRENT SETTINGS settings_file_info.SetAccessControl(settings_file_security); } else { // IF THE OS IS A UNIX BASED OS, SET THE FILE PERMISSIONS FOR READ AND WRITE OPERATIONS // WITH THE 'UnixFileMode.UserRead | 
UnixFileMode.UserWrite' BITWISE 'OR' OPERATION File.SetUnixFileMode(file_name, UnixFileMode.UserRead | UnixFileMode.UserWrite); } #pragma warning restore CA1416 // Validate platform compatibility } private static async Task<bool> Operation() { // Process object System.Diagnostics.Process proc = new System.Diagnostics.Process(); string file_name = String.Empty; string arguments = String.Empty; // CHECK IF THE CURRENT OS IS WINDOWS AND SET THE FILE PATH AND ARGUMETS ACCORDINGLY if(System.Runtime.InteropServices.RuntimeInformation.IsOSPlatform(System.Runtime.InteropServices.OSPlatform.Windows) == true) { file_name = @"C:\Users\teodo\PycharmProjects\Test\.venv\Scripts\python.exe"; arguments = @"C:\Users\teodo\PycharmProjects\Test\main.py"; } else { file_name = @"python3"; arguments = @"/mnt/c/Users/teodo/PycharmProjects/Test/main.py"; } // Path where the python executable is located proc.StartInfo.FileName = file_name; // Path where python executable is located proc.StartInfo.Arguments = arguments; // Start the process proc.Start(); // Named pipe server object with an "In" direction. This means that this pipe can only read messages. 
On Windows it creates a pipe at the // '\\.\pipe\pipe-sub-directory\pipe-name' virtual directory, on Linux it creates a Unix Named Socket in the '/tmp' directory System.IO.Pipes.NamedPipeServerStream fifo_pipe_connection = new System.IO.Pipes.NamedPipeServerStream("/tmp/fifo_pipe", System.IO.Pipes.PipeDirection.In); // Create a backlog text file if none is existent, set its permissions as Read/Write, // and create a stream that allows direct read and write operations to the file FileStream fs = File.Open("backlog.txt", FileMode.OpenOrCreate, FileAccess.ReadWrite); SetPermissions("backlog.txt"); try { // Wait for a client to connect synchronously fifo_pipe_connection.WaitForConnection(); while (true) { // Initiate a binary buffer byte array with a size of 1 Kb byte[] buffer = new byte[1024]; // Read the received bytes into the buffer byte array and also // store the number of bytes read into an integer named 'read' int read = await fifo_pipe_connection.ReadAsync(buffer, 0, buffer.Length); // Write the received bytes into the 'backlog.txt' file await fs.WriteAsync(buffer, 0, read); // Flush the bytes within the stream's buffer into the file await fs.FlushAsync(); // If the number of bytes read is equal to '0' there are no bytes // left to read on the Pipe's stream and the read loop is closed if (read == 0) { break; } } } catch { } finally { fs?.DisposeAsync(); fifo_pipe_connection?.DisposeAsync(); } return true; } } } ``` # STDOUT redirection method output [![Program STDOUT output][1]][1] [![Program STDOUT file output][2]][2] # Definition of FIFO Pipes **FIFO Pipes**, also known as **Named Pipes**, are a type of socket that use the operating system's file system in order to facilitate information exchange between applications. **FIFO Pipes** are categorised into 3 categories and these are, **Inward Pipes**, **Outward Pipes**, and **Two-way Pipes**. 
**Inward Pipes** are pipes on which the **Pipe Server** can only receive information from the **Pipe Clients**, **Outward Pipes** are pipes on which the **Pipe Server** can only send information to the **Pipe Clients**, and **Two-way Pipes** are pipes on which the **Pipe Server** can both send and receive information to and from the **Pipe Clients**. # Cross-platform solution using FIFO Pipes If the integrity of the output must be preserved, FIFO pipes are the best option for an inter-process communication method. In this scenario an **Inward Pipe** will be the best solution due to the fact that the **C#** application must receive information from the **Python** application. In order to have cross-platform capabilities, both the **C#** and **Python** application use conditional statements to verify which OS platform the applications are running on. <br> ## C# Application ``` using System.Text; using System.Diagnostics; namespace Test { class Program { static void Main(string[] args) { Operation().Wait(); Console.ReadLine(); } private static void SetPermissions(string file_name) { #pragma warning disable CA1416 // Validate platform compatibility // CHECK IF THE CURRENT OS IS WINDOWS if (System.Runtime.InteropServices.RuntimeInformation.IsOSPlatform(System.Runtime.InteropServices.OSPlatform.Windows) == true) { // GET THE SPECIFIED FILE INFORMATION OF THE SELECTED FILE FileInfo settings_file_info = new FileInfo(file_name); // GET THE ACCESS CONTROL INFORMATION OF THE SELECTED FILE AND STORE THEM IN A 'FileSecurity' OBJECT System.Security.AccessControl.FileSecurity settings_file_security = settings_file_info.GetAccessControl(); // ADD THE ACCESS RULES THAT ALLOW READ, WRITE, AND DELETE PERMISSIONS ON THE SELECTED FILE FOR THE CURRENT USER settings_file_security.AddAccessRule(new System.Security.AccessControl.FileSystemAccessRule(System.Security.Principal.WindowsIdentity.GetCurrent().Name, System.Security.AccessControl.FileSystemRights.Write, 
System.Security.AccessControl.AccessControlType.Allow)); settings_file_security.AddAccessRule(new System.Security.AccessControl.FileSystemAccessRule(System.Security.Principal.WindowsIdentity.GetCurrent().Name, System.Security.AccessControl.FileSystemRights.Read, System.Security.AccessControl.AccessControlType.Allow)); settings_file_security.AddAccessRule(new System.Security.AccessControl.FileSystemAccessRule(System.Security.Principal.WindowsIdentity.GetCurrent().Name, System.Security.AccessControl.FileSystemRights.Delete, System.Security.AccessControl.AccessControlType.Allow)); // UPDATE THE ACCESS CONTROL SETTINGS OF THE FILE BY SETTING THE // MODIFIED ACCESS CONTROL SETTINGS AS THE CURRENT SETTINGS settings_file_info.SetAccessControl(settings_file_security); } else { // IF THE OS IS A UNIX BASED OS, SET THE FILE PERMISSIONS FOR READ AND WRITE OPERATIONS // WITH THE 'UnixFileMode.UserRead | UnixFileMode.UserWrite' BITWISE 'OR' OPERATION File.SetUnixFileMode(file_name, UnixFileMode.UserRead | UnixFileMode.UserWrite); } #pragma warning restore CA1416 // Validate platform compatibility } private static async Task<bool> Operation() { // Process object System.Diagnostics.Process proc = new System.Diagnostics.Process(); string file_name = String.Empty; string arguments = String.Empty; // CHECK IF THE CURRENT OS IS WINDOWS AND SET THE FILE PATH AND ARGUMETS ACCORDINGLY if(System.Runtime.InteropServices.RuntimeInformation.IsOSPlatform(System.Runtime.InteropServices.OSPlatform.Windows) == true) { file_name = @"C:\Users\teodo\PycharmProjects\Test\.venv\Scripts\python.exe"; arguments = @"C:\Users\teodo\PycharmProjects\Test\main.py"; } else { file_name = @"python3"; arguments = @"/mnt/c/Users/teodo/PycharmProjects/Test/main.py"; } // Path where the python executable is located proc.StartInfo.FileName = file_name; // Path where python executable is located proc.StartInfo.Arguments = arguments; // Start the process proc.Start(); // Named pipe server object with an "In" 
direction. This means that this pipe can only read messages. On Windows it creates a pipe at the // '\\.\pipe\pipe-sub-directory\pipe-name' virtual directory, on Linux it creates a Unix Named Socket in the '/tmp' directory System.IO.Pipes.NamedPipeServerStream fifo_pipe_connection = new System.IO.Pipes.NamedPipeServerStream("/tmp/fifo_pipe", System.IO.Pipes.PipeDirection.In); // Create a backlog text file if none is existent, set its permissions as Read/Write, // and create a stream that allows direct read and write operations to the file FileStream fs = File.Open("backlog.txt", FileMode.OpenOrCreate, FileAccess.ReadWrite); SetPermissions("backlog.txt"); try { // Wait for a client to connect synchronously fifo_pipe_connection.WaitForConnection(); while (true) { // Initiate a binary buffer byte array with a size of 1 Kb byte[] buffer = new byte[1024]; // Read the received bytes into the buffer byte array and also // store the number of bytes read into an integer named 'read' int read = await fifo_pipe_connection.ReadAsync(buffer, 0, buffer.Length); // Print the bytes sent by the Python application on the pipe Console.WriteLine(Encoding.UTF8.GetString(buffer)); // Write the received bytes into the 'backlog.txt' file await fs.WriteAsync(buffer, 0, read); // Flush the bytes within the stream's buffer into the file await fs.FlushAsync(); // If the number of bytes read is equal to '0' there are no bytes // left to read on the Pipe's stream and the read loop is closed if (read == 0) { break; } } } catch { } finally { fs?.DisposeAsync(); fifo_pipe_connection?.DisposeAsync(); } return true; } } } ``` <br> ## Python Application ``` import os import sys import time from rich.progress import track import platform import socket fifo_write = None unix_named_pipe = None def operation(): try: write("!!! 
Python Test !!!\n\n") # Simulate work being done set_range = range(20) for i in track(set_range, description="Processing..."): time.sleep(0.1) # SEND THE CURRENT PROGRESS AS A PERCENTAGE OVER THE PIPE write("Processing..." + str((100 / set_range.stop) * (i + 1)) + "%\n") # CHANGE THE TERMINAL COLOR TO GREY if platform.system() == "Windows": os.system("color 08") else: print("\n\n") os.system(r"echo '\e[91m!!! COLOR !!!'") # PAUSE THE CURRENT THREAD FOR 2 SECONDS time.sleep(2) # CHANGE THE TERMINAL COLOR TO WHITE if platform.system() == "Windows": os.system("color F") else: os.system(r"echo '\e[00m!!! COLOR !!!'") print("\n\n") write("[ Finished ]") except KeyboardInterrupt: sys.exit(0) def write(msg): if platform.system() == "Windows": fifo_pipe_write(msg) else: unix_named_socket_write(msg) def fifo_pipe_write(msg): try: global fifo_write if fifo_write is not None: # WRITE THE STRING PASSED TO THE FUNCTION'S AS AN ARGUMENT # IN THE FIFO PIPE FILE USING THE GLOBAL STREAM fifo_write.write(msg) except KeyboardInterrupt: sys.exit(0) def unix_named_socket_write(msg): try: global unix_named_pipe if unix_named_pipe is not None: unix_named_pipe.send(str(msg).encode(encoding="utf-8")) except KeyboardInterrupt: pass def stream_finder() -> bool: is_found = False try: # INITIATE A PIPE SEARCH SEQUENCE FOR 10 SECONDS for t in range(0, 10): try: try: # IF THE OS IS WINDOWS SEARCH FOR A PIPE FILE if platform.system() == "Windows": # IF PIPE IS FOUND RETURN TRUE AND STORE THE OPENED PIPE FILE STREAM GLOBALLY global fifo_write fifo_write = open(r"\\.\pipe\tmp\fifo_pipe", "w") is_found = True break # ELSE, SEARCH FOR A NAMED UNIX SOCKET else: # IF SOCKET IS FOUND RETURN TRUE AND STORE THE OPENED SOCKET GLOBALLY global unix_named_pipe unix_named_pipe = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) unix_named_pipe.connect("/tmp/fifo_pipe") is_found = True break except FileNotFoundError: # IF PIPE IS NOT FOUND print("\n\n[ Searching pipe ]") except OSError: # IF PIPE IS NOT FOUND 
print("\n\n[ Searching pipe ]") # MAKE THE LOOP WAIT 1 SECOND FOR EACH ITERATION time.sleep(1) except KeyboardInterrupt: sys.exit(0) return is_found if __name__ == "__main__": try: # INITIATE THE PIPE SEARCHING OPERATION found = stream_finder() # IF PIPE SEARCHING OPERATION IS SUCCESSFUL if found is True: print("\n\n[ Pipe found ]\n\n") # INITIATE THE MAIN OPERATION operation() except KeyboardInterrupt: sys.exit(0) ``` <br> # FIFO Pipes output Windows [![FIFO pipes output Windows][3]][3] [![FIFO Pipes file output Windows][4]][4] # FIFO Pipes output Linux ## Linux OS Details [![Linux OS details][5]][5] ## Linux OS Output [![enter image description here][6]][6] [![FIFO Pipes file output Linux][7]][7] [1]: https://i.stack.imgur.com/SrWv9.gif [2]: https://i.stack.imgur.com/mFhJD.png [3]: https://i.stack.imgur.com/78Q2v.gif [4]: https://i.stack.imgur.com/7wc4B.jpg [5]: https://i.stack.imgur.com/SDLYE.jpg [6]: https://i.stack.imgur.com/P9il7.gif [7]: https://i.stack.imgur.com/tZRB0.jpg
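For contrast with the pipe-based version above, the STDOUT-redirection idea from the first section reduces to very little code. Python is used here only for brevity (the parent in the answer is C#); the file name `backlog.txt` matches the earlier examples:

```python
import subprocess
import sys

# Minimal sketch of STDOUT redirection: spawn a child process, read its
# standard output line by line, and append each line to a log file in real
# time while echoing it to this process's own console.
child_code = "print('line 1'); print('line 2')"
with open("backlog.txt", "w") as log:
    proc = subprocess.Popen([sys.executable, "-c", child_code],
                            stdout=subprocess.PIPE, text=True)
    for line in proc.stdout:
        log.write(line)        # persist to the backlog file as it arrives
        log.flush()
        print(line, end="")    # and forward to our own STDOUT
    proc.wait()
```

The trade-off versus FIFO pipes: redirection captures *everything* the child prints without the child having to cooperate, at the cost of the child losing its attached console (which is why the colors question arises in the first place).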
null
I'm currently trying to learn how to make Minecraft mods in Intellij and I'm fresh off a set of tutorials on youtube but when I try to run Minecraft with the content I've added there just isn't anything there. The only evidence my mod has even loaded is the name of it in the Mods section. It is as if I just ran regular Minecraft. I can't help but notice that my Java classes are orange when my Java classes in the tutorial files are white. I copied over the Java classes from the tutorial mod I needed to my first mod. [![Directories][1]][1] I've looked through the internet but can't find anything that works. Any help would be greatly appreciated. [1]: https://i.stack.imgur.com/ureAs.png
> I use gem to install everything explictly ...
>
>     RUN gem install syntax_tree

By default `Rails` knows nothing about gems you have installed locally (via `gem install ...`).

> First I make sure the syntax_tree gem is installed properly: ...

You have no problem in `test.rb` because it's a pure non-Rails `Ruby` file, so the locally installed gem is found by RubyGems directly.

> LoadError: cannot load such file -- syntax_tree

If you want `syntax_tree` to be available in a controller (or to require it from there), define this dependency in your `Gemfile`.
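A minimal sketch of that dependency declaration (the gem name is taken from the question; no version constraint is assumed):

    # Gemfile
    gem "syntax_tree"

Then run `bundle install` so Bundler records it in `Gemfile.lock`; after restarting the Rails server, the gem resolves inside controllers (via Bundler's auto-require, or an explicit `require "syntax_tree"`).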
null
I'm new to networking. I'm also learning the usage of eBPF. Currently I'm working on a project where I've to capture the inner packet of a openconnect traffic. This is my code: https://github.com/inspektors-io/xdp-tutorial/tree/nobin/xdp_dump_with_grpc xdp_dump.c ``` // Copyright (c) 2019 Dropbox, Inc. // Full license can be found in the LICENSE file. // XDP dump is simple program that dumps new IPv4 TCP connections through perf events. #include "bpf_helpers.h" // Ethernet header struct ethhdr { __u8 h_dest[6]; __u8 h_source[6]; __u16 h_proto; } __attribute__((packed)); // IPv4 header struct iphdr { __u8 ihl : 4; __u8 version : 4; __u8 tos; __u16 tot_len; __u16 id; __u16 frag_off; __u8 ttl; __u8 protocol; __u16 check; __u32 saddr; __u32 daddr; } __attribute__((packed)); // TCP header struct tcphdr { __u16 source; __u16 dest; __u32 seq; __u32 ack_seq; union { struct { // Field order has been converted LittleEndiand -> BigEndian // in order to simplify flag checking (no need to ntohs()) __u16 ns : 1, reserved : 3, doff : 4, fin : 1, syn : 1, rst : 1, psh : 1, ack : 1, urg : 1, ece : 1, cwr : 1; }; }; __u16 window; __u16 check; __u16 urg_ptr; }; __attribute__((packed)); // PerfEvent eBPF map BPF_MAP_DEF(perfmap) = { .map_type = BPF_MAP_TYPE_PERF_EVENT_ARRAY, .max_entries = 128, }; BPF_MAP_ADD(perfmap); // PerfEvent item struct perf_event_item { struct ethhdr eth_hdr; struct iphdr ip_hdr; // __u16 source; // __u16 dest; // __u32 seq; // __u32 ack_seq; struct tcphdr tcp_hdr; } __attribute__((packed)); _Static_assert(sizeof(struct perf_event_item) == 54, "wrong size of perf_event_item"); // XDP program SEC("xdp") int xdp_dump(struct xdp_md *ctx) { void *data_end = (void *)(long)ctx->data_end; void *data = (void *)(long)ctx->data; __u64 packet_size = data_end - data; // L2 struct ethhdr *ether = data; if (data + sizeof(*ether) > data_end) { return XDP_ABORTED; } // L3 if (ether->h_proto != 0x08) { // htons(ETH_P_IP) -> 0x08 // Non IPv4 return XDP_PASS; } data += 
sizeof(*ether);

  struct iphdr *ip = data;
  if (data + sizeof(*ip) > data_end) {
    return XDP_ABORTED;
  }

  data += ip->ihl * 4;
  struct tcphdr *tcp = data;
  if (data + sizeof(*tcp) > data_end) {
    return XDP_ABORTED;
  }

  // Emit perf event for every TCP packet
  if (ip->protocol == 6) {  // IPPROTO_TCP
    struct perf_event_item evt = {
        .eth_hdr = *ether,
        .ip_hdr = *ip,
        .tcp_hdr = *tcp,
        // .src_ip = ip->saddr,
        // .dst_ip = ip->daddr,
        // .source = tcp->source,
        // .dest = tcp->dest,
        // .seq = tcp->seq,
        // .ack_seq = tcp->ack_seq,
    };

    // flags for bpf_perf_event_output() actually contain 2 parts (each 32bit long):
    //
    // bits 0-31: either
    //  - Just index in eBPF map
    // or
    //  - "BPF_F_CURRENT_CPU" kernel will use current CPU_ID as eBPF map index
    //
    // bits 32-63: may be used to tell kernel to amend first N bytes
    //  of original packet (ctx) to the end of the data.
    //  So total perf event length will be sizeof(evt) + packet_size
    __u64 flags = BPF_F_CURRENT_CPU | (packet_size << 32);
    bpf_perf_event_output(ctx, &perfmap, flags, &evt, sizeof(evt));
  }

  return XDP_PASS;
}

char _license[] SEC("license") = "GPL";
```

Here's my userspace Go program, which captures the incoming packets on the attached interface and parses the Ethernet, IP and TCP header fields. It also runs a gRPC client to send the extracted data.
``` package main import ( "bytes" "context" "encoding/binary" "encoding/hex" "errors" "flag" "fmt" "net" "os" "os/signal" "github.com/cilium/ebpf" "github.com/cilium/ebpf/perf" "github.com/vishvananda/netlink" "github.com/vishvananda/netlink/nl" pb "github.com/inspektors-io/grpc-nobin/grpc-test" // Update with your actual package name "google.golang.org/grpc" ) //go:generate go run github.com/cilium/ebpf/cmd/bpf2go -cc clang XdpDump ./bpf/xdp_dump.c -- -I../header var ( iface string conn *grpc.ClientConn ) const ( METADATA_SIZE = 12 ) type Collect struct { Prog *ebpf.Program `ebpf:"xdp_dump"` PerfMap *ebpf.Map `ebpf:"perfmap"` } type perfEventItem struct { EthHdr struct { DestMAC [6]uint8 SourceMAC [6]uint8 Proto uint16 } IpHdr struct { VersionIHL byte TOS byte TotalLen uint16 ID uint16 FragmentOff uint16 TTL uint8 Protocol uint8 Checksum uint16 SrcIP uint32 DstIP uint32 } Tcphdr struct { Source uint16 Dest uint16 Seq uint32 AckSeq uint32 Flags uint16 // For holding the flags field (4 bytes) Window uint16 Check uint16 UrgPtr uint16 } } func main() { flag.StringVar(&iface, "iface", "", "interface attached xdp program") flag.Parse() if iface == "" { fmt.Println("interface is not specified.") os.Exit(1) } link, err := netlink.LinkByName(iface) if err != nil { fmt.Printf("Failed to get interface by name: %v\n", err) os.Exit(1) } spec, err := LoadXdpDump() if err != nil { fmt.Printf("Failed to load XDP dump: %v\n", err) os.Exit(1) } var collect = &Collect{} if err := spec.LoadAndAssign(collect, nil); err != nil { fmt.Printf("Failed to load and assign XDP program: %v\n", err) os.Exit(1) } if err := netlink.LinkSetXdpFdWithFlags(link, collect.Prog.FD(), nl.XDP_FLAGS_SKB_MODE); err != nil { fmt.Printf("Failed to attach XDP program to interface: %v\n", err) os.Exit(1) } defer func() { if err := netlink.LinkSetXdpFdWithFlags(link, -1, nl.XDP_FLAGS_SKB_MODE); err != nil { fmt.Printf("Error detaching program: %v\n", err) } }() ctrlC := make(chan os.Signal, 1) 
signal.Notify(ctrlC, os.Interrupt) perfEvent, err := perf.NewReader(collect.PerfMap, 4096) if err != nil { fmt.Printf("Failed to create perf event reader: %v\n", err) os.Exit(1) } fmt.Println("All new TCP connection requests (SYN) coming to this host will be dumped here.") fmt.Println() var ( received int = 0 lost int = 0 counter int = 0 ) // Connect to gRPC server conn, err = grpc.Dial("localhost:50051", grpc.WithInsecure()) if err != nil { fmt.Printf("Failed to connect to gRPC server: %v\n", err) os.Exit(1) } defer conn.Close() // Create gRPC client client := pb.NewUserServiceClient(conn) go func() { var event perfEventItem for { evnt, err := perfEvent.Read() if err != nil { if errors.Is(err, perf.ErrClosed) { break } fmt.Printf("Error reading perf event: %v\n", err) continue } reader := bytes.NewReader(evnt.RawSample) if err := binary.Read(reader, binary.LittleEndian, &event); err != nil { fmt.Printf("Error decoding perf event: %v\n", err) continue } // fmt.Printf("Ethernet Header:\n") // fmt.Printf(" Destination MAC: %02x:%02x:%02x:%02x:%02x:%02x\n", event.EthHdr.DestMAC[0], event.EthHdr.DestMAC[1], event.EthHdr.DestMAC[2], event.EthHdr.DestMAC[3], event.EthHdr.DestMAC[4], event.EthHdr.DestMAC[5]) // fmt.Printf(" Source MAC: %02x:%02x:%02x:%02x:%02x:%02x\n", event.EthHdr.SourceMAC[0], event.EthHdr.SourceMAC[1], event.EthHdr.SourceMAC[2], event.EthHdr.SourceMAC[3], event.EthHdr.SourceMAC[4], event.EthHdr.SourceMAC[5]) // fmt.Printf(" Protocol: %x\n", event.EthHdr.Proto) // fmt.Printf("IP Header:\n") // fmt.Printf(" Version IHL: %x\n", event.IpHdr.VersionIHL) // fmt.Printf(" TOS: %x\n", event.IpHdr.TOS) // fmt.Printf(" Total Length: %d\n", event.IpHdr.TotalLen) // fmt.Printf(" ID: %d\n", event.IpHdr.ID) // fmt.Printf(" Fragment Offset: %d\n", event.IpHdr.FragmentOff) // fmt.Printf(" TTL: %d\n", event.IpHdr.TTL) // fmt.Printf(" Protocol: %d\n", event.IpHdr.Protocol) // fmt.Printf(" Checksum: %d\n", event.IpHdr.Checksum) // fmt.Printf(" Source IP: %s\n", 
intToIPv4(event.IpHdr.SrcIP).String()) // fmt.Printf(" Destination IP: %s\n", intToIPv4(event.IpHdr.DstIP).String()) fmt.Printf("TCP Header:\n") // fmt.Printf(" Source Port: %d\n", ntohs(event.Tcphdr.Source)) // fmt.Printf(" Destination Port: %d\n", ntohs(event.Tcphdr.Dest)) // Extracting flags flags := extractFlags(event.Tcphdr.Flags) fmt.Println("Extracted Flags:") fmt.Println("NS:", flags["ns"]) fmt.Println("RES:", flags["res"]) fmt.Println("DOFF:", flags["doff"]) fmt.Println("FIN:", flags["fin"]) fmt.Println("SYN:", flags["syn"]) fmt.Println("RST:", flags["rst"]) fmt.Println("PSH:", flags["psh"]) fmt.Println("ACK:", flags["ack"]) fmt.Println("URG:", flags["urg"]) fmt.Println("ECE:", flags["ece"]) fmt.Println("CWR:", flags["cwr"]) fmt.Printf(" Window: %d\n", event.Tcphdr.Window) fmt.Printf(" Checksum: %d\n", event.Tcphdr.Check) fmt.Printf(" Urgent Pointer: %d\n", event.Tcphdr.UrgPtr) counter++ fmt.Printf("Counter: %d\n", counter) rawData := evnt.RawSample[METADATA_SIZE:] if len(evnt.RawSample)-METADATA_SIZE > 0 { fmt.Println(hex.Dump(evnt.RawSample[METADATA_SIZE:])) rawData = evnt.RawSample[METADATA_SIZE:] } received += len(evnt.RawSample) lost += int(evnt.LostSamples) // Send data to gRPC server err = sendDataToServer(client, int32(counter), event, rawData) if err != nil { fmt.Printf("Failed to send data to gRPC server: %v\n", err) continue } fmt.Println("Data sent successfully to gRPC server") } }() defer conn.Close() <-ctrlC perfEvent.Close() fmt.Println("\nSummary:") fmt.Printf("\t%d Event(s) Received\n", received) fmt.Printf("\t%d Event(s) Lost(e.g. 
small buffer, delays in processing)\n", lost) fmt.Println("\nDetaching program and exiting...") } func sendDataToServer(client pb.UserServiceClient, packetNumber int32, event perfEventItem, rawDumpString []byte) error { // Create gRPC message types for TCP, IP, and Ethernet headers ipHeader := &pb.IpHeader{ SourceIp: event.IpHdr.SrcIP, DestinationIp: event.IpHdr.DstIP, VersionIhl: uint32(event.IpHdr.VersionIHL), Protocol: uint32(event.IpHdr.Protocol), Check: uint32(event.IpHdr.Checksum), // Ihl: uint32(event.IPHeader.IHL), FragOff: uint32(event.IpHdr.FragmentOff), Id: uint32(event.IpHdr.ID), Tos: uint32(event.IpHdr.TOS), Ttl: uint32(event.IpHdr.TTL), TotLen: uint32(event.IpHdr.TotalLen), } tcpHeader := &pb.TcpHeader{ SourcePort: uint32(event.Tcphdr.Source), DestinationPort: uint32(event.Tcphdr.Dest), Seq: uint32(event.Tcphdr.Seq), AckSeq: uint32(event.Tcphdr.AckSeq), Flag: uint32(event.Tcphdr.Flags), Window: uint32(event.Tcphdr.Window), Check: uint32(event.Tcphdr.Check), UrgPtr: uint32(event.Tcphdr.UrgPtr), } ethernetHeader := &pb.EthernetHeader{ EtherType: uint32(event.EthHdr.Proto), DestinationMac: event.EthHdr.DestMAC[:], SourceMac: event.EthHdr.SourceMAC[:], } // Convert raw binary data to hexadecimal string // rawDumpHex := hex.EncodeToString([]byte(rawDumpString)) // Send data to server _, err := client.SendUserData(context.Background(), &pb.UserRequest{ IpHeader: ipHeader, TcpHeader: tcpHeader, EthernetHeader: ethernetHeader, PacketNumber: packetNumber, RawData: rawDumpString, // Send hexadecimal string instead of raw binary }) return err } func intToIPv4(ip uint32) net.IP { res := make([]byte, 4) binary.LittleEndian.PutUint32(res, ip) return net.IP(res) } func ntohs(value uint16) uint16 { return ((value & 0xff) << 8) | (value >> 8) } func extractFlags(flags uint16) map[string]uint16 { result := make(map[string]uint16) result["cwr"] = (flags >> 15) & 0x1 result["ece"] = (flags >> 14) & 0x1 result["urg"] = (flags >> 13) & 0x1 result["ack"] = (flags >> 12) & 
0x1
	result["psh"] = (flags >> 11) & 0x1
	result["rst"] = (flags >> 10) & 0x1
	result["syn"] = (flags >> 9) & 0x1
	result["fin"] = (flags >> 8) & 0x1
	result["doff"] = (flags >> 4) & 0xF
	result["res"] = (flags >> 1) & 0x7
	result["ns"] = flags & 0x1
	return result
}
```

I want to extend this program to also detect openconnect packets, decapsulate them, and capture the inner IP packets' destination addresses.
Detect and capture openconnect traffic using eBPF/XDP
|packet-capture|bpf|xdp-bpf|openconnect|
Just use a normal **window.onerror** event to catch it...

    <!DOCTYPE html><html><head></head><body>
    <script>
    window.onerror=(e)=>{alert(e);};
    setTimeout(function(){
    console.log(window.frames.test_frame.location.href);
    },2000);
    </script>
    <iframe name="test_frame" id="test_frame" src="https://orthodoxchurchfathers.com"></iframe>
    </body></html>

If you want to go further you can also...

    window.onunhandledrejection=(e)=>{alert(e.reason);};

Or maybe *e.reason.message*? ...can't remember.
I do not understand how the following operations `2s – 1` and `1 - 2s` are performed in the following expressions: ```vhdl R <= (s & '0') - 1; L <= 1-(s & '0'); ``` Considering the fact that `R` and `L` are of type `signed(1 downto 0)` and `s` of type `std_logic`. I have extracted them from a `vhdl` code snippet in my professor's notes. __What I understand (or at least consider to understand – premises of my reasoning)__ 1. The concatenation with the `0` literal achieves the product by `2` (That is what shifting to the left does). 2. The concatenation also achieves a `std_logic_vector` of 2 bits (not so sure about that, I inferred this from a comment in the following [StackOverflow question]( https://stackoverflow.com/questions/18689477/how-to-add-std-logic-using-numeric-std) 3. "`std_logic_vector` is great for implementing data buses, it’s useless for performing arithmetic operations" - source: [vhdlwhiz](https://vhdlwhiz.com/signed-unsigned/#:~:text=Finally%2C%20signed%20and%20unsigned%20can,can%20only%20have%20number%20values). __What baffles me:__ 1. What type does the compiler interpret the `1` literal is? - An integer? If so, can an `integer` be used without casting in an arithmetic expression with a `std_logic_vector`? This option doesn't seem very plausible to me... - Assuming the fact that the `(s & '0')` is indeed interpreted as a `std_logic_vector` (second premise) it also comes to my mind the possibility that the compiler, based on the type of the other operand in the expression (i.e., `(s & '0')`), inferred `1` to be of type `std_logic_vector` as well. However, even though both `(s & '0')` and `1` were interpreted as `std_logic_vector` they should not be behaving correctly according to my third premise. 
- A thought that comes to my mind to justify the possibility of both operands being of type `std_logic_vector` is that both `(s & '0')` and `1` are implicitly cast to `signed` by the compiler because it acknowledges the fact that the signal in which the result is stored is of type `signed`. This doesn't seem to make sense to me either:

`R <= (s & '0') - 1;` – suppose `s` is equal to `1`

Both are converted to `std_logic_vector(1 downto 0)`:

`R <= "10" - "01"`

Now, if the contents of the `std_logic_vector`s were interpreted as `signed`, the result of the subtraction would be

`R <= (-2) - (1) = -3`

As you can tell, I am really confused. I believe we've only scratched the surface when it comes to discussing data types in class, and I am encountering a lot of problems when solving exercises because I keep choosing the wrong data types.

I sincerely apologize for the questions not being as clear as I would like, but they are only a reflection of my understanding of the subject. I appreciate your patience.
Yes, there is a difference. When you use a map, the JVM instantiates an additional object (`mappings`), which takes some extra memory, and the lookup (`map.get`, behind the indexing operator) resolves the result via the key object's hash code. So for the user there is no visible difference, but for the JVM there is.
I cannot execute a stored procedure on an MSSQL server from Django. Here is the configuration for the db:<br/>

    DATABASES = {
        'default': {
            'ENGINE': 'mssql',
            'NAME': os.environ.get('DB_NAME'),
            'USER': os.environ.get('DB_USER'),
            'PASSWORD': os.environ.get('DB_PASSWORD'),
            'HOST': os.environ.get('DB_HOST'),
            'OPTIONS': {
                'driver': 'ODBC Driver 17 for SQL Server',
                'trusted_connection': 'yes'
            }
        }
    }

In my view:

    def test(request):
        if request.method == "POST":
            with connection.cursor() as cursor:
                P1 = request.data["p1"]
                P2 = request.data["p2"]
                try:
                    cursor.callproc('[dbo].[SP_TEXT]', [P1, P2])
                    if cursor.return_value == 1:
                        result_set = cursor.fetchall()
                        print(result_set)
                finally:
                    cursor.close()
            return Response({"msg": "post"})
        else:
            return Response({"msg": "get"})

I use SQL Server Standard 2022.<br/>
Need help, please.
`make menuconfig` allows to tweak the Buildroot configuration, not the Linux kernel configuration. To tweak the Linux kernel configuration, run `make linux-menuconfig`, which will fire up the menuconfig of the Linux kernel. See https://bootlin.com/doc/training/buildroot/buildroot-slides.pdf starting slide 76 for more details on the Linux kernel configuration.
``` int sumInString(const char *str) { // valid str if (str == NULL || *str == '\0') { return 0; } int num = 0; int res = 0; int i = 0; while (str[i] != '\0') { if (str[i] >= '0' && str[i] <= '9') /*is numerical*/ { num *= 10; num += str[i] - '0'; // char->int conversion } else { res += num; num = 0; } ++i; } res += num; // add the remainder return res; } ```
How to dynamically update filters?
|r|shiny|
If a composite index is created, will the query use it?
In the following example the struct is used to left join tbl2 to tbl1. If there are several matching entries in tbl2, only one is taken (`LIMIT 1`), so the row count of tbl1 stays constant. The `SELECT AS STRUCT` packs the matching row of tbl2 into a single struct column.

~~~SQL
WITH tbl1 as (SELECT * FROM UNNEST([1,2]) AS x),
tbl2 AS (SELECT "a" AS name, 1 AS val
  UNION ALL SELECT "b", 2
  UNION ALL SELECT "bb", 2
)

SELECT *,
  (SELECT AS STRUCT * FROM
    (SELECT * FROM tbl2 WHERE x=val LIMIT 1)
  ) AS joined_table
FROM tbl1
~~~
I want to get, on the first bar, the bar_index of the last bar available on the chart. The problem is that `last_bar_index` is not updated when a new candle starts, e.g. when checking BTCUSDT on the 1 min chart:

    var arr_test_1 = array.new_float()
    var arr_test_2 = array.new_float()

    if array.size(arr_test_1) > 2
        array.shift(arr_test_1)
    array.push(arr_test_1, last_bar_index )    // this is not updated when a new bar starts

    if bar_index == 0
        if array.size(arr_test_2) > 2
            array.shift(arr_test_2)
        array.push(arr_test_2, last_bar_index )

    var label lbl_4 = na
    label.delete(lbl_4)
    lbl_4 := label.new (barstate.islast ? bar_index : na, y=close, xloc=xloc.bar_index, yloc=yloc.belowbar,
         style = label.style_label_up, size = size.large, color = color.new(color.aqua, 60),
         textcolor = color.new(color.black, 0), textalign = text.align_left,
         text = "arr_test_1 " + str.tostring( arr_test_1 )
         + "\narr_test_2 " + str.tostring( arr_test_2 )
         + "\nlast_bar_index " + str.tostring( last_bar_index )
         )

[![enter image description here][1]][1]

  [1]: https://i.stack.imgur.com/OH2fm.jpg
How to get last_bar_index on bar_index == 0
|pine-script|
There seems to be an undeleted value in the registry, but I have not been able to solve that yet. As a workaround to this problem, you can directly copy the file:

C:\Program Files (x86)\Eziriz\.NET Reactor\VSPackage\17\ [Content_Types].xml

to

C:\Users\%CurrentUser%\AppData\Local\Microsoft\VisualStudio\17.0_17555130\Extensions

This enables .NET Reactor inside Visual Studio 2022.

P.S. The directory 17.0_17555130 is created for version 17.9.5; if you have a newer or older version, you will have a different directory name.
Say I have HTML that looks a bit like this <div> <p>Text 1</p> <p class="foo">Text 2</p> <p>Text 3</p> <p>Text 4</p> <p>Text 5</p> <p>Text 6</p> <p class="foo">Text 7</p> <p class="foo">Text 8</p> <p>Text 9</p> <p class="foo">Text 10</p> <p>Text 11</p> <p class="foo">Text 12</p> <p class="foo">Text 13</p> <p class="foo">Text 14</p> <p class="foo">Text 15</p> <p>Text 16</p> </div> Any p element with the class "foo" gets wrapped in a div but any contiguous p elements with the class "foo" get wrapped into the same div as the first p (in the series). So the above HTML would end up looking like <div> <p>Text 1</p> <div class="wrap"> <p class="foo">Text 2</p> </div> <p>Text 3</p> <p>Text 4</p> <p>Text 5</p> <p>Text 6</p> <div class="wrap"> <p class="foo">Text 7</p> <p class="foo">Text 8</p> </div> <p>Text 9</p> <div class="wrap"> <p class="foo">Text 10</p> </div> <p>Text 11</p> <div class="wrap"> <p class="foo">Text 12</p> <p class="foo">Text 13</p> <p class="foo">Text 14</p> <p class="foo">Text 15</p> </div> <p>Text 16</p> </div> I know how to get elements with the class "foo" but I'm lost trying to work out if there are contiguous p.foo elements that I need to wrap in the same div. Should I just iterate through the nodes in the container div, append a wrapper div when I find a p.foo and just keep appending contiguous p.foo elements? I've had a few goes but it seems to fail for different ways I sort the original p elements.
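The iterate-and-append idea sketched in the last paragraph works, and its core is a plain run-grouping pass that doesn't need the DOM at all. Below is a minimal sketch of that pass (the name `groupRuns` is mine, not from the question): contiguous items matching a predicate are collected into one group, exactly the way consecutive `p.foo` elements should share one `div.wrap`.

```javascript
// Group a sequence into runs: contiguous items matching `pred` are
// collected into a single { wrap: true } group (a future div.wrap);
// each non-matching item passes through as its own { wrap: false } group.
function groupRuns(items, pred) {
  const out = [];
  let run = null; // currently open run of matching items, or null
  for (const item of items) {
    if (pred(item)) {
      if (run === null) {        // a new run starts here -> open a wrapper
        run = { wrap: true, items: [] };
        out.push(run);
      }
      run.items.push(item);      // contiguous match joins the open run
    } else {
      run = null;                // close the run (if one was open)
      out.push({ wrap: false, items: [item] });
    }
  }
  return out;
}
```

Applying this to real nodes is then just iterating `div.children`, and for each `wrap: true` group creating a `div.wrap` with `document.createElement('div')`, inserting it with `insertBefore` at the run's first element, and moving each `p.foo` into it with `appendChild`. On the sixteen paragraphs of the example (foo on 2, 7, 8, 10, 12–15) this yields twelve groups, four of them wrappers — matching the desired output HTML.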
Grouping HTML elements by their class name
|javascript|html|class|element|group|
Can anyone help me with installing Python on Windows 10? It says it is downloaded, but it could not get verified, and I cannot locate the file, especially the .exe. That is why I cannot add it to PATH. Please help. Thanks.

I tried everything but it did not work: I watched a bunch of videos, followed the instructions, etc.
python could not get verified after installation
|python|
After you put the list from your question in A:B and set the target dates (or date strings) in D2:D, you may want to use a formula like the one below:

```
=ARRAYFORMULA(
  IF(D2:D<>"",
    FILTER(B2:B,
      (YEAR(DATEVALUE(A2:A))=YEAR(DATEVALUE(D2:D)))
      *
      (MONTH(DATEVALUE(A2:A))=MONTH(DATEVALUE(D2:D)))
    ),
  )
)
```

The keys are

1. <a href="https://support.google.com/docs/answer/3093039">`DATEVALUE`</a> to cast strings into date values, so that you can use
2. <a href="https://support.google.com/docs/answer/3093061">`YEAR`</a> and <a href="https://support.google.com/docs/answer/3093052">`MONTH`</a> to extract the year and the month of both the list and the targets, respectively.

A demo is noted here: https://docs.google.com/spreadsheets/d/1HcL1ZFJ-26HU9lZVKJufE63cSDCOtVWYXOAqwnJPn40/edit#gid=0
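For readers who want to verify the matching logic outside Sheets, the same year-and-month comparison the formula performs can be expressed in a few lines of JavaScript (an illustration only, not Sheets code):

```javascript
// True when two date strings fall in the same calendar month,
// mirroring the YEAR(...)=YEAR(...) * MONTH(...)=MONTH(...) test
// that the ARRAYFORMULA applies per row.
function sameMonth(a, b) {
  const da = new Date(a), db = new Date(b);
  return da.getFullYear() === db.getFullYear() &&
         da.getMonth() === db.getMonth();
}
```

A row of the list would be kept for a target exactly when `sameMonth(listDate, targetDate)` is true.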
'pyodbc.Cursor' object has no attribute 'callproc', mssql with django
|sql-server|django|mssql-django|
I need something similar to a [`QToolBox`][1] with multiple expanding items / widgets that, as opposed to a `QToolBox`, supports displaying more than a single item at a time: when the user clicks on an item, it should expand; upon a second click, it should collapse. All, some or no items may be expanded at the same time. Vertical scrollbars should be added if there is not enough space for all expanded items to be shown.

![Sample screenshot][2]

Does anyone have an idea or a solution for how I can accomplish this?

  [1]: https://doc.qt.io/qt-6/qtoolbox.html
  [2]: http://i.stack.imgur.com/X4w12.png
Here is an extract of GTK3 code. I would like the translation to GTK4.

    GtkWidget *menu, *item1, *item2, *item3;

    menu = gtk_menu_new();
    item1 = gtk_menu_item_new_with_label("Item 1");
    item2 = gtk_menu_item_new_with_label("Item 2");
    item3 = gtk_menu_item_new_with_label("Item 3");
    g_signal_connect(item1, "activate", G_CALLBACK (on_popup_menu_selection), "Item 1");
    g_signal_connect(item2, "activate", G_CALLBACK (on_popup_menu_selection), "Item 2");
    g_signal_connect(item3, "activate", G_CALLBACK (on_popup_menu_selection), "Item 3");
    gtk_menu_shell_append(GTK_MENU_SHELL(menu), item1);
    gtk_menu_shell_append(GTK_MENU_SHELL(menu), item2);
    gtk_menu_shell_append(GTK_MENU_SHELL(menu), item3);
    gtk_widget_show_all (menu);

The GTK4 documentation is obscure to me on this subject, and I did not find an example in the tutorials.
I would like to write a simple popup contextual menu in C using GTK4. I did that with GTK3, but I am lost as to how to do it with GTK4.
|popupmenu|gtk4|