I've found a nice way to do this by resetting the autofocus on the user tap event, and it works really well.

```jsx
export default function CameraScreen({ route, navigation }: any) {
  const [isRefreshing, setIsRefreshing] = useState(false);
  const [focusSquare, setFocusSquare] = useState({ visible: false, x: 0, y: 0 });

  useEffect(() => {
    if (isRefreshing) {
      setIsRefreshing(false);
    }
  }, [isRefreshing]);

  // Function to handle touch events
  const handleTouch = (event: any) => {
    const { locationX, locationY } = event.nativeEvent;
    setFocusSquare({ visible: true, x: locationX, y: locationY });

    // Hide the square after 1 second
    setTimeout(() => {
      setFocusSquare((prevState) => ({ ...prevState, visible: false }));
    }, 1000);

    setIsRefreshing(true);
  };

  return (
    <GestureHandlerRootView style={{ flex: 1 }}>
      <PinchGestureHandler
        onGestureEvent={handlePinch}
        onHandlerStateChange={(event) => {
          if (event.nativeEvent.state === State.END) {
            const scale = event.nativeEvent.scale;
            setZoom((currentZoom) => scale * currentZoom);
          }
        }}
      >
        <View style={styles.container}>
          <Camera
            style={styles.camera}
            type={type}
            ref={camera}
            zoom={zoom}
            autoFocus={!isRefreshing ? AutoFocus.on : AutoFocus.off}
            onTouchEnd={handleTouch} // Handle touch to set focus point
          >
          </Camera>
          {focusSquare.visible && (
            <View
              style={[
                styles.focusSquare,
                { top: focusSquare.y - 25, left: focusSquare.x - 25 },
              ]}
            />
          )}
        </View>
      </PinchGestureHandler>
    </GestureHandlerRootView>
  );
}

const styles = StyleSheet.create({
  container: {
    flex: 1,
    justifyContent: 'center',
  },
  camera: {
    flex: 1,
  },
  focusSquare: {
    position: 'absolute',
    width: 50,
    height: 50,
    borderWidth: 2,
    borderColor: 'white',
    backgroundColor: 'transparent',
  },
});
```
Google Drive Service Account gets googleapiclient.errors.HttpError: 401 "Request is missing required authentication credential" when authenticating
|python|google-oauth|google-api-python-client|
I've encountered similar issues while using Jupyter Notebook, especially when running lengthy loops that iterate rapidly. One workaround I've found effective is to introduce a short sleep duration (less than a second) after every X iterations within the loop. This seemingly trivial delay can often make the interrupt function work reliably, allowing you to halt the execution without resorting to closing and relaunching the notebook.
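A minimal sketch of that workaround (the loop body and the every-500 threshold are placeholder choices, not anything prescribed):

```python
import time

results = []
for i in range(10_000):
    results.append(i * i)  # stand-in for the real per-iteration work

    # Yield briefly every 500 iterations so the kernel gets a chance
    # to process the interrupt request (KeyboardInterrupt).
    if i % 500 == 0:
        time.sleep(0.005)
```

The total added delay here is only about 0.1 s over 10,000 iterations, so the cost is negligible relative to the gain in interruptibility.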
I have installed Asciidoctor (ver 2.2.6) with

```
npm i asciidoctor
```

and run

```
asciidoctor -b xhtml5 -a source-highlighter=rouge .\README.adoc
```

I do get the web page, but the source code is rendered flat. This is how the adoc file displays in my IntelliJ IDE with the plugin:

[![enter image description here][1]][1]

and this is how my web page displays it:

[![enter image description here][2]][2]

Does anybody know how to get the web page's source-code CSS to generate correctly?

*Addition*: I ran the following after installing rouge:

```
== Introduction :

:source-highlighter: rouge
:rouge-style: monokai

[source,ruby]
----
puts "Hello, Rouge!"
----

:source-highlighter: rouge

[%linenums,ruby]
----
ORDERED_LIST_KEYWORDS = {
  'loweralpha' => 'a',
  'lowerroman' => 'i',
  'upperalpha' => 'A',
  'upperroman' => 'I'
  # Add more keywords here
}
----

[source,xml]
....
- id: b-security
  uri: lb://b-security
  predicates:
    - Path=/auth/**
- id: c-service-A
  uri: lb://c-service-A
  predicates:
    - Path=/service-a/**
  filters:
    - AuthenticationFilter
....
```

[1]: https://i.stack.imgur.com/cdHJV.png
[2]: https://i.stack.imgur.com/oGOLn.png
Could you please tell me what the purpose of the `GetAll(){}` method is? And why does it give me an error when I change the method's name? Is it some kind of reserved method? I know the method is dedicated to executing code, so why does it not execute when I rename it from GetAll() to anything()?

```php
<?php
class araclar_model extends CI_Model {

    public $tableName = "";

    public function __construct() {
        $this->tableName = "tbl_araclar";
    }

    public function GetAll() {
        // Method that lists all rows in the vehicles table
        // select * from tbl_araclar
        return $this->db->get($this->tableName)->result();
    }

    public function Get($id) {
        // Method that lists a single vehicle
        // SQL query: select * from tbl_araclar where arac_id=5
        return $this->db->where("arac_id", $id)->get($this->tableName)->row();
    }
}
?>
```

I would really appreciate any help.
PHP CodeIgniter 3
|php|codeigniter|continuous-integration|codeigniter-3|
I am currently using Odoo 16, where I have created a custom field and am trying to make that field read-only. The read-only field should appear only if a non-administrator user is logged in. This is my initial code:

```xml
<odoo>
    <data>
        <record id="product_template_only_form_view" model="ir.ui.view">
            <field name="name">product.template.product.form</field>
            <field name="model">product.template</field>
            <field name="inherit_id" ref="product.product_template_only_form_view"/>
            <field name="arch" type="xml">
                <xpath expr="//group[@name='group_general']" position="inside">
                    <field name="recommanded_price" string="Recommended Price" readonly="1" groups="base.group_system"/>
                </xpath>
            </field>
        </record>
    </data>
</odoo>
```

What I tried:

```xml
<odoo>
    <data>
        <record id="product_template_only_form_view" model="ir.ui.view">
            <field name="name">product.template.product.form</field>
            <field name="model">product.template</field>
            <field name="inherit_id" ref="product.product_template_only_form_view"/>
            <field name="arch" type="xml">
                <xpath expr="//group[@name='group_general']" position="inside">
                    <field name="recommanded_price" string="Recommended Price" attrs="{'readonly': [('groups', 'not in', ['base.group_system'])]}" />
                </xpath>
            </field>
        </record>
    </data>
</odoo>
```
When you use something like `ProvidePlugin` in the front-end world you are, for all intents and purposes, from a semantic and usability perspective, providing a global. It suffers from almost all the disadvantages a global does; you gain pretty much nothing from a code-quality point of view over a global. If you need/want a global, use one: if you *really* want this behaviour, a global is the first-class language support that already exists to solve this problem. You could probably use custom node ESM loaders to achieve it, but as above, that's imitating global semantics. You could also start compiling your JS code with something like Babel, which on the back end is like cracking a nut with a sledgehammer.

I would recommend against this way of thinking entirely (sorry if this sounds like a party pooper!). Removing the burden of having the `import` lines might seem attractive, but if I were a new dev in this codebase I would wonder what on earth was happening. It breaks developers' expectations. The consistency, and the expectation of that consistency, massively outweigh any saving from not having that line, which is usually inserted seamlessly & automatically by the IDE anyway. You therefore pretty much never see tricks like this in node code. There's little reason to run a bundler-like tool such as webpack on back-end code, or even transpilers, outside of TypeScript (which isn't general-purpose enough as a standalone build tool to solve this type of problem, though [ts-morph][1] is noteworthy), so certain expectations have arisen from that mindset. Not to mention that if you went down the "inject" route it would play havoc with linting tools, IDE IntelliSense and other static-analysis tooling. It also makes human understanding of scope more ambiguous: if another variable happens to use the same name (say, a function param), it becomes unclear what is being referenced, to humans and most of all to the IDE.
Sure, this may have a hard and direct answer, but this isn't the kind of thing you introduce willingly. Even worse, you'd then have to ensure compatibility with other software like testing tools, or even develop parallel solutions for those tools just to support this Frankenstein setup. Suddenly there's more code & complexity supporting *not* writing the `import` than there would be if the `import` were simply there.

It's also a possible sign of a code-architecture problem. If you need this *everywhere*, then perhaps that is a sign that this thing should be attached to certain prototypes/classes at a deeper level (in core places) such that it is available locally. For example, classes/objects may provide a `this.logger` or something along those lines. This also promotes better decoupling.

[1]: https://ts-morph.com/
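As a language-agnostic illustration of that last point (sketched here in Python; the class and method names are invented for the example), injecting a logger at construction time makes it locally available on the instance without any global:

```python
import logging
from typing import Optional

class OrderService:
    """Example service that receives its logger instead of importing a global one."""

    def __init__(self, logger: Optional[logging.Logger] = None):
        # The logger is attached at construction time, so every method can
        # use self.logger without reaching for module-level state.
        self.logger = logger or logging.getLogger(__name__)

    def place_order(self, order_id: str) -> str:
        self.logger.info("placing order %s", order_id)
        return f"order {order_id} placed"
```

The same shape works in JS: assign `this.logger` in the constructor, and callers can pass in a different logger (e.g. a silent one in tests) for free.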
I have a blue container; when I double-tap it, a red container appears. The red container's position is fixed and does not change when the blue container moves. When the blue container's border reaches the red container's border, the blue container should not be able to move any further in that direction, though it should still be able to move in the other directions, in Flutter.

```dart
import 'package:flutter/material.dart';

void main() {
  runApp(MyApp());
}

class MyApp extends StatelessWidget {
  @override
  Widget build(BuildContext context) {
    return MaterialApp(
      home: MyHomePage(),
    );
  }
}

class MyHomePage extends StatefulWidget {
  @override
  _MyHomePageState createState() => _MyHomePageState();
}

class _MyHomePageState extends State<MyHomePage> {
  double blueContainerX = 50;
  double blueContainerY = 50;
  bool showRedContainer = false;
  double redContainerX = 50;
  double redContainerY = 50;
  double blueContainerWidth = 200;
  double blueContainerHeight = 200;

  @override
  Widget build(BuildContext context) {
    return Scaffold(
      appBar: AppBar(
        title: Text('Container Demo'),
      ),
      body: Center(
        child: Stack(
          children: [
            // Red Container (Fixed or Positioned based on showRedContainer)
            if (showRedContainer)
              Positioned(
                left: redContainerX,
                top: redContainerY,
                child: Container(
                  width: 50,
                  height: 50,
                  decoration: BoxDecoration(
                    border: Border.all(
                      color: Colors.red,
                      width: 2.0, // Border width
                    ),
                  ),
                ),
              ),
            // Blue Container (Draggable)
            Positioned(
              left: blueContainerX,
              top: blueContainerY,
              child: GestureDetector(
                onDoubleTap: () {
                  setState(() {
                    if (!showRedContainer) {
                      // Set red container's position only when it's double-tapped
                      showRedContainer = true;
                      redContainerX = blueContainerX;
                      redContainerY = blueContainerY;
                    }
                  });
                },
                onPanUpdate: (details) {
                  setState(() {
                    // Update blue container position
                    blueContainerX += details.delta.dx;
                    blueContainerY += details.delta.dy;

                    // Limit blue container movement within screen bounds
                    blueContainerX = blueContainerX.clamp(
                        0.0, MediaQuery.of(context).size.width - blueContainerWidth);
                    blueContainerY = blueContainerY.clamp(
                        0.0, MediaQuery.of(context).size.height - blueContainerHeight);

                    // Stop blue container movement when red container borders align
                    if ((blueContainerX + blueContainerWidth == redContainerX + 50 &&
                            details.delta.dx > 0) ||
                        (blueContainerY + blueContainerHeight == redContainerY + 50 &&
                            details.delta.dy > 0)) {
                      blueContainerX = blueContainerX;
                      blueContainerY = blueContainerY;
                    }
                  });
                },
                child: Container(
                  width: blueContainerWidth,
                  height: blueContainerHeight,
                  decoration: BoxDecoration(
                    border: Border.all(
                      color: Colors.blue,
                      width: 2.0, // Border width
                    ),
                  ),
                ),
              ),
            ),
          ],
        ),
      ),
    );
  }
}
```

[![enter image description here](https://i.stack.imgur.com/rNECn.jpg)](https://i.stack.imgur.com/rNECn.jpg)

[![enter image description here](https://i.stack.imgur.com/oo6il.jpg)](https://i.stack.imgur.com/oo6il.jpg)
How to move a container around the small container in Flutter
|flutter|dart|flutter-container|
The `onKeyUp` event fires when the [key is released][1]. The fact that you see a lowercase letter in the event handler indicates that at the time you ***released*** the key, you no longer had the shift key pressed.

The fact that you see a capital letter appear in the input is a separate behavior. It is based on the key (or combination of keys) that were pressed, which can be determined by the `onKeyDown` handler. So even though it feels like you released the shift key before pressing the next character, the shift key's `onkeyup` event did not fire before the character's `onkeydown` event, and that leads to a capital letter.

To confirm this, you can change your event handler to `onKeyDown` and see that a capital letter is logged, always matching what the input displays.

[1]: https://www.w3schools.com/jsref/event_onkeyup.asp
I was looking at the same situation. I wanted to have a data/column header in the first row of my output datafile, even if data is being appended. So only if the file doesn't exist will I print the first line as column headers. I just opened the file in read mode (`"r"`) and checked if it returned NULL. If NULL, I go ahead and print the column header, then continue appending data. Is there any problem with this approach?

```c
fp = fopen(file, "r");
if (!fp) {
    /* File does not exist: create it and write the column header. */
    fp = fopen(file, "a");
    fprintf(fp, "#col-1\tcol-2\tcol-3\n");
} else {
    /* File exists: close the read handle and reopen for appending. */
    fclose(fp);
    fp = fopen(file, "a");
}
fprintf(fp, "%d,%d,%d\n", val1, val2, val3);
```
I have been working on a project where I call a getProductList API which can return hundreds or thousands of records, so I chose to use pagination (`infinite_scroll_pagination: ^4.0.0`). I tried to apply it and came up with the code below. The problem is that my page-request listener `productsPaginationKey.addPageRequestListener((pageKey) {...});` is registered in `onInit()`. What I want is that only 12 products load when the screen opens; then, when the user opens the dropdown and scrolls for more, another 12 load, and so on. Please guide me on what I am doing wrong and how I can fix it. I am looking for a solution as well as an explanation so that I can understand the problem. Thanks.

Controller Class:

```dart
class SalesByCustomerController extends GetxController with StateMixin {
  late final PagingController<int, ProductModel> productsPaginationKey;
  late String searchQuery;
  late String sortQuery;
  late final GlobalKey<ScaffoldState> scaffoldKey;
  List<Map<String, dynamic>> allProducts = [];
  late TextEditingController productFilterController;
  List<String> productFilters = ['All'];
  String? productFilter;

  void onProductFilterChange(String? selected) {
    productFilter = selected;
    productFilterController.text = selected!;
  }

  Future<void> getProductList(int pageKey, String filter, String sort) async {
    try {
      List<ProductModel> newItems = await BaseClient.safeApiCall(
        ApiConstants.GET_PRODUCT_LIST,
        RequestType.get,
        headers: await BaseClient.generateHeaders(),
        queryParameters: {
          '\$count': true,
          '\$skip': pageKey * ApiConstants.ITEM_COUNT,
          '\$top': ApiConstants.ITEM_COUNT,
          if (filter.isNotEmpty) '\$filter': filter,
          if (sort.isNotEmpty) '\$orderby': sort,
        },
        onSuccess: (json) {
          List<ProductModel>? products;
          if (json.data['value'] != null) {
            products = [];
            json.data['value'].forEach((v) {
              products?.add(ProductModel.fromJson(v));
            });
          }
          return products ?? [];
        },
        onError: (e) {
          throw Exception('Failed to fetch products');
        },
      );

      final isLastPage = newItems.length < ApiConstants.ITEM_COUNT;
      if (isLastPage) {
        productsPaginationKey.appendLastPage(newItems);
      } else {
        final nextPageKey = pageKey + 1; // Increment page number
        productsPaginationKey.appendPage(newItems, nextPageKey);
      }

      for (ProductModel product in newItems) {
        if (product.productName != null) {
          allProducts.add({'name': product.productName, 'id': product.productId});
          productFilters.add(product.productName!);
        }
      }
    } catch (error) {
      productsPaginationKey.error = error;
    }
  }

  @override
  void onInit() {
    scaffoldKey = GlobalKey<ScaffoldState>();
    searchQuery = "";
    sortQuery = "productName asc";
    productsPaginationKey = PagingController(firstPageKey: 0);
    productsPaginationKey.addPageRequestListener((pageKey) {
      getProductList(pageKey, searchQuery, sortQuery);
    });
    super.onInit();
  }

  @override
  void dispose() {
    productsPaginationKey.dispose();
    super.dispose();
  }
}
```

View Class:

```dart
class SalesByCustomer extends GetView<SalesByCustomerController> {
  static const String routeName = '/salesByCustomer';

  const SalesByCustomer({super.key});

  @override
  Widget build(BuildContext context) {
    return PageTemplate(
      overscroll: false,
      showThemeButton: false,
      showMenuButton: true,
      scaffoldKey: controller.scaffoldKey,
      padding: Sizes.PADDING_12,
      children: [
        ...
        _buildFilterBar().sliverToBoxAdapter,
        ...
      ],
    ).scaffoldWithDrawer();
  }

  Widget _buildFilterBar() {
    return Column(
      crossAxisAlignment: CrossAxisAlignment.start,
      children: [
        Row(
          children: [
            ...
            Expanded(
              child: CustomDropDownField<String>(
                controller: controller.productFilterController,
                showSearchBox: true,
                title: 'Product',
                prefixIcon: Icons.inventory,
                onChange: controller.onProductFilterChange,
                items: controller.productFilters,
                selectedItem: controller.productFilter,
                verticalPadding: 10,
              ),
            ),
          ],
        ),
      ],
    );
  }
}
```
How to implement pagination on the custom dropdown items in Flutter
|flutter|api|dart|pagination|
I would like to create a method that returns an iframe's URL if the parent page is allowed to read it (because it is under the same domain), and returns a falsy value if the parent page is not allowed to do so. I have the following code hosted at https://cjshayward.com:

```html
<!DOCTYPE html>
<html>
  <head>
  </head>
  <body>
    <iframe name="test_frame" id="test_frame" src="https://cjshayward.com"></iframe>
    <script>
      setTimeout(function() {
        try {
          console.log(window.frames.test_frame.location.href);
        } catch (error) {
          console.log('Not able to load.');
        }
      }, 1000);
    </script>
  </body>
</html>
```

The `setTimeout()` is used because if I specify an iframe and then immediately have JavaScript attempt to read its URL, it reads a URL of `about:blank`. But when I change the iframe's URL in the same script to another domain (one that has no restrictions on e.g. loading its contents in a frame), Firefox logs the string specified in the catch block, and still logs an uncaught DOMException:

```html
<!DOCTYPE html>
<html>
  <head>
  </head>
  <body>
    <iframe name="test_frame" id="test_frame" src="https://orthodoxchurchfathers.com"></iframe>
    <script>
      setTimeout(function() {
        try {
          console.log(window.frames.test_frame.location.href);
        } catch (error) {
          console.log('Not able to load.');
        }
      }, 1000);
    </script>
  </body>
</html>
```

In the first case, it logs https://cjshayward.com. In the second case, it logs "Not able to load," but still shows an uncaught DOMException in the log. How can I make a function that returns the iframe's location.href if it is available, and returns something falsy like an empty string if it is not available due to browser security restrictions? Or, taking a slight step back, is the logged DOMException unimportant? If I replace the console.log() calls with return statements inside a function, will my code be able to use the return value as the iframe's URL when it is available and a falsy value when it is not?
TIA,

--UPDATE--

I have the present revised version of the test program:

```html
<!DOCTYPE html>
<html>
  <head>
  </head>
  <body>
    <iframe name="test_frame" id="test_frame" src="https://orthodoxchurchfathers.com"></iframe>
    <script>
      window.onerror = (e) => {
        e.preventDefault();
        console.log('#' + e + '#');
      };
      setTimeout(function() {
        try {
          console.log(window.frames.binary.location.href);
        } catch (e) {
          console.log('Not able to load.');
        }
      }, 1000);
    </script>
  </body>
</html>
```

It logs "Not able to load," but it displays an uncaught DOMException. It does not log any line between hashes.
|google-apps-script|
According to the network diagram above, the connection from company A's PC0 to the server system (including Gmail and Facebook) is successful, but company B's PC6 and PC7 cannot ping the server system. I tried assigning an IP from DNS, but it didn't work. The system otherwise works normally: all devices are connected successfully and can send emails from the PCs to the server.
**Goal**: I'm trying to set up a Telegram bot which downloads pictures that are sent to it. After doing some manual playing with `requests`, I'd rather _use the python-telegram-bot_ library. Yet unfortunately most examples outside the docs are written for some version <20, the official docs are written purely for versions >20, and they never cover this topic to an extent that allowed me to write working code. _I don't want to downgrade my library version; I want to understand the current version (v21.x.x)._ My relevant **current code** looks like this:

```python
from dotenv import load_dotenv
import os
import asyncio
import logging

import telegram
from telegram import Update
from telegram.ext import Application, CommandHandler, MessageHandler, filters, ContextTypes, ApplicationBuilder
from telegram.ext import InlineQueryHandler, ExtBot

load_dotenv()
TOKEN = os.getenv('TOKEN')

### **Downloader**
async def downloader(update: Update, context: ContextTypes.DEFAULT_TYPE):
    logging.debug("in downloader app!")
    new_file = await update.message.effective_attachment[-1].get_file()  # [-1] for largest size of photo
    file = await new_file.download_to_drive()
    # file2 = await ExtBot.get_file()
    return file

### **Main**
if __name__ == '__main__':
    application = ApplicationBuilder().token(TOKEN).build()
    downloader_handler = MessageHandler(filters.Document.IMAGE, downloader)  # **here are the filters**
    application.add_handler(downloader_handler)
    # file_handler = MessageHandler(filters.Document.JPG, downloader)  # another try, also Document.ALL
    # application.add_handler(file_handler)
    application.run_polling()
```

My observations so far:

- when the filter is set to Document.ALL and I send a pdf file, it seems to enter the downloader, yet I'm not able to retrieve the downloaded pdf every time (it seems it sometimes saves it to a path which is unknown to me)
- when I send an image (jpg), it seems the downloader is never entered (no debug/print statements)
- I was running into heaps of errors when trying to replace code with snippets from other posts, as most seem to rely on the pre-v20 way of writing this

To me it seems this shouldn't be that much hassle...

---

Any help would be highly appreciated. I don't think I will be the only one struggling with this after the major overhaul of the library. Thanks in advance and happy coding to everyone :)

I've tried various stackoverflow posts related to this; from all I've read, [this example snippet, specifically designed with v20 syntax](https://stackoverflow.com/questions/62253718/how-can-i-receive-file-in-python-telegram-bot) should work. It does for pdfs (see the weird behaviour above), but it doesn't for images, which leads me to believe it is some sort of problem related to the filters.

**Filters I've tried**:

- `filters.Document.IMAGE`
- `filters.Document.JPG`
- `filters.Document.ALL`

**Questions**:

- which would be the appropriate filter that should work for jpgs?
- What else am I maybe missing here?
I am learning Spark, so as a task we had to create a wheel locally and later install it in Databricks (I am using Azure Databricks), and test it by running it from a Databricks Notebook. This program involves reading a CSV file (timezones.csv) included inside the wheel. The file *is* inside the wheel (I checked) and the wheel works properly when I install it and run it from a local Jupyter Notebook on my PC. However, when I install it in a Databricks Notebook it gives this error, as you can see below in the snapshot:

```lang-py
[PATH_NOT_FOUND] Path does not exist: dbfs:/local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.10/site-packages/motor_ingesta/resources/timezones.csv. SQLSTATE: 42K03

File <command-3771510969632751>, line 7
      3 from pyspark.sql import SparkSession
      5 spark = SparkSession.builder.getOrCreate()
----> 7 flights_with_utc = aniade_hora_utc(spark, flights_df)

File /local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.10/site-packages/motor_ingesta/agregaciones.py:25, in aniade_hora_utc(spark, df)
     23 path_timezones = str(Path(__file__).parent) + "/resources/timezones.csv"
     24 #path_timezones = str(Path("resources") / "timezones.csv")
---> 25 timezones_df = spark.read.options(header="true", inferSchema="true").csv(path_timezones)
     27 # Concateno los datos de las columnas del timezones_df ("iata_code","iana_tz","windows_tz"), a la derecha de
     28 # las columnas del df original, copiando solo en las filas donde coincida el aeropuerto de origen (Origin) con
     29 # el valor de la columna iata_code de timezones.df. Si algun aeropuerto de Origin no apareciera en timezones_df,
     30 # las 3 columnas quedarán con valor nulo (NULL)
     32 df_with_tz = df.join(timezones_df, df["Origin"] == timezones_df["iata_code"], "left_outer")

File /databricks/spark/python/pyspark/instrumentation_utils.py:47, in _wrap_function.<locals>.wrapper(*args, **kwargs)
     45 start = time.perf_counter()
     46 try:
---> 47     res = func(*args, **kwargs)
     48     logger.log_success(
     49         module_name, class_name, function_name, time.perf_counter() - start, signature
     50     )
     51     return res

File /databricks/spark/python/pyspark/sql/readwriter.py:830, in DataFrameReader.csv(self, path, schema, sep, encoding, quote, escape, comment, header, inferSchema, ignoreLeadingWhiteSpace, ignoreTrailingWhiteSpace, nullValue, nanValue, positiveInf, negativeInf, dateFormat, timestampFormat, maxColumns, maxCharsPerColumn, maxMalformedLogPerPartition, mode, columnNameOfCorruptRecord, multiLine, charToEscapeQuoteEscaping, samplingRatio, enforceSchema, emptyValue, locale, lineSep, pathGlobFilter, recursiveFileLookup, modifiedBefore, modifiedAfter, unescapedQuoteHandling)
    828 if type(path) == list:
    829     assert self._spark._sc._jvm is not None
--> 830     return self._df(self._jreader.csv(self._spark._sc._jvm.PythonUtils.toSeq(path)))
    831 elif isinstance(path, RDD):
    833     def func(iterator):

File /databricks/spark/python/lib/py4j-0.10.9.7-src.zip/py4j/java_gateway.py:1322, in JavaMember.__call__(self, *args)
   1316 command = proto.CALL_COMMAND_NAME +\
   1317     self.command_header +\
   1318     args_command +\
   1319     proto.END_COMMAND_PART
   1321 answer = self.gateway_client.send_command(command)
-> 1322 return_value = get_return_value(
   1323     answer, self.gateway_client, self.target_id, self.name)
   1325 for temp_arg in temp_args:
   1326     if hasattr(temp_arg, "_detach"):

File /databricks/spark/python/pyspark/errors/exceptions/captured.py:230, in capture_sql_exception.<locals>.deco(*a, **kw)
    226 converted = convert_exception(e.java_exception)
    227 if not isinstance(converted, UnknownException):
    228     # Hide where the exception came from that shows a non-Pythonic
    229     # JVM exception message.
--> 230     raise converted from None
    231 else:
    232     raise
```

Databricks Error Snapshot 1:

![Databricks Error Snapshot 1](https://i.stack.imgur.com/e3QPP.png)

Databricks Error Snapshot 2:

![Databricks Error Snapshot 2](https://i.stack.imgur.com/r98sN.png)

Has anyone experienced this problem before? Is there any solution? I tried installing the wheel both with pip and from Library, and I got the same error; I also rebooted the cluster several times. Thanks in advance for your help. I am using Python 3.11, PySpark 3.5 and Java 8, and created the wheel locally from PyCharm. If you need more details to answer, just ask and I'll provide them. I explained all the details above. I was expecting to be able to use the wheel I created locally from a Databricks Notebook. Sorry about my English; it is not my native tongue and I am a bit rusty.

-----

Edited to answer comment:

> Can u navigate to %sh ls /dbfs/local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.10/site-packages/motor_ingesta/resources and share the folder content as image? – Samuel Demir 11 hours ago

I just did what you asked for and I got this result (the file is actually there, even if Databricks says it can't find it...):

[Snapshot of suggestion result][1]

Edited to answer Samuel Demir:

> Maybe the `package_data` is missing in your setup initialization?!
>
> This is what I do to add my configuration files in the res folder to the wheel. The files are then accessible by exactly the same piece of code as yours.
```python
setup(
    name="daprep",
    version=__version__,
    author="",
    author_email="samuel.demir@galliker.com",
    description="A short summary of the project",
    license="proprietary",
    url="",
    packages=find_packages("src"),
    package_dir={"": "src"},
    package_data={"daprep": ["res/**/*"]},
    long_description=read("README.md"),
    install_requires=read_requirements(Path("requirements.txt")),
    tests_require=[
        "pytest",
        "pytest-cov",
        "pre-commit",
    ],
    cmdclass={
        "dist": DistCommand,
        "test": TestCommand,
        "testcov": TestCovCommand,
    },
    platforms="any",
    python_requires=">=3.7",
    entry_points={
        "console_scripts": [
            "main_entrypoint = daprep.main:main_entrypoint",
        ]
    },
)
```

My `setup.py` file has the `package_data` line as well; here you can see a snapshot and the code for it. Do you catch any other detail that could be relevant there? Thanks in advance.

[Snapshot of my setup.py file][2]

```python
from setuptools import setup, find_packages

setup(
    name="motor-ingesta",
    version="0.1.0",
    author="Estrella Adriana Sicardi Segade",
    author_email="esicardi@ucm.es",
    description="Motor de ingesta para el curso de Spark",
    long_description="Motor de ingesta para el curso de Spark",
    long_description_content_type="text/markdown",
    url="https://github.com/esicardi",
    python_requires=">=3.8",
    packages=find_packages(),
    package_data={"motor_ingesta": ["resources/*.csv"]},
)
```

  [1]: https://i.stack.imgur.com/SnJ6M.png
  [2]: https://i.stack.imgur.com/epApF.png

Edited to answer questions:

> Are you able to load the csv file by simply read it in a notebook
> cell? – Samuel Demir 17 hours ago

Yes, I am able to read the file from outside the code. For example, I can print the content of the file:

```
%sh
cat /local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.10/site-packages/motor_ingesta/resources/timezones.csv

"iata_code","iana_tz","windows_tz"
"AAA","Pacific/Tahiti","Hawaiian Standard Time"
"AAB","Australia/Brisbane","E. Australia Standard Time"
"AAC","Africa/Cairo","Egypt Standard Time"
"AAD","Africa/Mogadishu","E. Africa Standard Time"
"AAE","Africa/Algiers","W. Central Africa Standard Time"
"AAF","America/New_York","Eastern Standard Time"
"AAG","America/Sao_Paulo","E. South America Standard Time"
"AAH","Europe/Berlin","W. Europe Standard Time"
"AAI","America/Araguaina","Tocantins Standard Time"
"AAJ","America/Paramaribo","SA Eastern Standard Time"
"AAK","Pacific/Tarawa","UTC+12"
"AAL","Europe/Copenhagen","Romance Standard Time"
"AAM","Africa/Johannesburg","South Africa Standard Time"
"AAN","Asia/Dubai","Arabian Standard Time"
"AAO","America/Caracas","Venezuela Standard Time"
"AAP","Asia/Makassar","Singapore Standard Time"
"AAQ","Europe/Moscow","Russian Standard Time"
"AAR","Europe/Copenhagen","Romance Standard Time"
"AAS","Asia/Jayapura","Tokyo Standard Time"
... [file continues on until the end]
```

> Are u able to share a part of your codebase to reconstruct the
> problem? – Samuel Demir 17 hours ago

Regarding the package (called `motor_ingesta`): it consists of three .py files, `motor_ingesta.py`, `agregaciones.py` and `flujo_diario.py`. The package ingests and processes some JSON files with info about flights in USA airports during January 2023. The function that reads `timezones.csv` is defined inside `agregaciones.py`. It takes a DataFrame (this function is used to process DataFrames ingested previously from the JSON files) and joins it with the data from `timezones.csv`, matching on airport codes. It then uses that info to calculate the UTC time from the departure time (`DepTime`) and the UTC zones. It adds one more column to the right of the DataFrame, `FlightTime`, containing the UTC time, and finally drops the columns it added from `timezones.csv`. Here is the code for the function:

```python
def aniade_hora_utc(spark: SparkSession, df: DF) -> DF:
    """
    Adds the FlightTime column (UTC) to the flights DataFrame.
    :param spark: Spark session.
    :param df: Flights DataFrame.
    :return: Flights DataFrame with the FlightTime column in UTC.
    """
    path_timezones = str(Path(__file__).parent) + "/resources/timezones.csv"
    timezones_df = spark.read.options(header="true", inferSchema="true").csv(path_timezones)
    df_with_tz = df.join(timezones_df, df["Origin"] == timezones_df["iata_code"], "left_outer")
    df_with_flight_time = df_with_tz.withColumn("FlightTime", to_utc_timestamp(concat(
        col("FlightDate"), lit(" "),
        lpad(col("DepTime").cast("string"), 4, "0").substr(1, 2), lit(":"),
        col("DepTime").cast("string").substr(-2, 2)
    ), col("iana_tz")))
    df_with_flight_time = df_with_flight_time.drop("iata_code", "iana_tz", "windows_tz")
    return df_with_flight_time
```
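If the file really is inside the installed package (as the `%sh cat` output above suggests), an alternative to building the path from `__file__` is `importlib.resources`, which also works when the package is installed zipped. A hedged stdlib-only sketch (Python 3.9+; the helper name is mine, not part of the package):

```python
from importlib import resources

def read_packaged_text(package: str, relative_path: str) -> str:
    """Read a data file bundled with an installed package.

    Unlike paths built from __file__, this also works when the
    package is imported from a zip/egg rather than plain files.
    """
    return resources.files(package).joinpath(relative_path).read_text(encoding="utf-8")
```

For the package above this would be `read_packaged_text("motor_ingesta", "resources/timezones.csv")`; if it fails, `pip show -f` on the installed package will confirm whether `package_data` actually shipped the file.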
I'm working on a React Native app that displays PDFs using a custom NativeModule written in Swift. The module successfully renders PDFs from a provided file URL, but I'm struggling with a feature to highlight specific text within the PDF. I'm passing the text to highlight, the startOffset and endOffset, and the page number from **React Native** to the **NativeModule** over the Bridge. The offsets are obtained from `page.string`, so they might not always translate accurately to a rectangle selection due to variations in font styles and layouts. I tried the approach below, gathered from various platforms, but it did not help me. ``` @objc func highlightText(_ pageNumber: Int, startOffset: Int, endOffset: Int, searchText: String, changeType: String) { guard let page = pdfView.document?.page(at: pageNumber - 1) else { print("Failed to get PDF page for page \(pageNumber)") return } // Create a PDFSelection based on the offsets let startIndex = searchText.index(searchText.startIndex, offsetBy: startOffset) let endIndex = searchText.index(searchText.startIndex, offsetBy: endOffset) let range = NSRange(startIndex..<endIndex, in: searchText) let selection = page.selection(for: range)! // Get the bounding box of the selection let boundingBox = selection.bounds(for: page) // Create a highlight annotation and add it to the page let highlight = PDFAnnotation(bounds: boundingBox, forType: .highlight, withProperties: nil) highlight.color = UIColor.yellow.withAlphaComponent(0.5) // Adjust highlight color and opacity page.addAnnotation(highlight) } ``` **I tried the below approaches:** 1. Tried something like **document.findString**, but it takes **time** and **memory**, and sometimes kills the app. 2. Tried getting a **CGRect** from the **offsets**, but couldn't get those either. I want a solution that simply highlights the given text on the given page number.
Search and highlight text on the current page in PDFKit (Swift)
|swift|react-native|pdfkit|ios-pdfkit|apple-pdfkit|
You can [choose which modules to export to the external code](https://stackoverflow.com/a/78251561/8138591). Additionally, you can *clearly* communicate the **intention** that code exported from a file should only be used inside a package and not outside of it (**package scoped**). My solution is based on an idea from another programming language. According to the [convention](https://dart.dev/language) adopted by the Dart programming language created by Google: > Unlike Java, Dart doesn't have the keywords public, protected, and private. If an identifier starts with an underscore (_), it's private to its library. So for a file that *only* exports **package-private code**, its name should be prefixed with an underscore `_`, for example `_packagePrivate.js`. *All* the **named exports** from this file (e.g., constants, functions) should be prefixed with an underscore `_` as well. **JSDoc** has a special block tag [`@package`](https://jsdoc.app/tags-package): > This symbol is meant to be **package-private**. Typically, this tag indicates that a symbol is available only to code in the same directory as the source file for this symbol. The `@package` tag in JSDoc is the same as `@access package` (see [`@access`](https://jsdoc.app/tags-access)). **TSDoc** provides a modifier tag [`@internal`](https://tsdoc.org/pages/tags/internal/): > Designates that an API item is not planned to be used by third-party developers. The tooling may trim the declaration from a public release. In some implementations, certain designated packages may be allowed to consume **internal API items**, e.g. because the packages are components of the same product.
So as a result, the `_packagePrivate.ts` might look like this: ```typescript /** * @package * @internal */ export function _doSomething(): void {} const _apiKey = '...'; /** * @package * @internal */ export default _apiKey; ``` According to the [JSDoc example for ES 2015 Modules](https://jsdoc.app/howto-es2015-modules) > In most cases, you can simply add a JSDoc comment to the export statement that defines the exported value. If you are exporting a value under another name, you can document the exported value within its export block.
I have a small C++ program that writes a PWM period value to the following file on the BeagleBone Black: /sys/class/pwm/pwmchip5/pwm0/period The first time the program runs the file write succeeds. The file is opened with the O_WRONLY and O_NONBLOCK flags. The program closes the file descriptor before exiting. When I invoke the program a 2nd time, the write fails. I tried to write a value into the file afterwards with 'echo' on the command line but that fails too. Is the file somehow being locked after the first run of the program?
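Before assuming a lock, it helps to surface `errno` after the failing call — sysfs PWM writes often fail with `EINVAL` when the new period conflicts with the currently programmed `duty_cycle`, or `EBUSY` from the driver, rather than because the file is locked. A hedged diagnostic sketch (the path and value are examples):

```cpp
#include <cerrno>
#include <cstring>
#include <fcntl.h>
#include <iostream>
#include <string>
#include <unistd.h>

// Write a value to a sysfs attribute and report errno on failure,
// so the kernel-side reason for the 2nd-run failure becomes visible.
bool write_sysfs(const std::string& path, const std::string& value) {
    int fd = open(path.c_str(), O_WRONLY);  // sysfs writes don't need O_NONBLOCK
    if (fd < 0) {
        std::cerr << "open " << path << ": " << std::strerror(errno) << '\n';
        return false;
    }
    ssize_t n = write(fd, value.c_str(), value.size());
    if (n < 0) {
        std::cerr << "write " << path << ": " << std::strerror(errno) << '\n';
    }
    close(fd);
    return n == static_cast<ssize_t>(value.size());
}
```

If the second run prints `EINVAL`, compare the new period against the current `duty_cycle` value; if it prints `EBUSY`, check whether something re-exported or reconfigured `pwm0` between runs.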
Linux file write succeeds the 1st time but errors on the 2nd try
|c++|linux|beagleboneblack|
I am having this error when I am installing yfinance via pip. Any solution on that?

    kevin@KevindeMacBook-Air test % pip install yfinance --upgrade --no-cache-dir
    Collecting yfinance
      Downloading yfinance-0.2.37-py2.py3-none-any.whl.metadata (11 kB)
    Requirement already satisfied: pandas>=1.3.0 in ./venv/lib/python3.9/site-packages (from yfinance) (1.5.3)
    Requirement already satisfied: numpy>=1.16.5 in ./venv/lib/python3.9/site-packages (from yfinance) (1.24.2)
    Requirement already satisfied: requests>=2.31 in ./venv/lib/python3.9/site-packages (from yfinance) (2.31.0)
    Collecting multitasking>=0.0.7 (from yfinance)
      Downloading multitasking-0.0.11-py3-none-any.whl.metadata (5.5 kB)
    Collecting lxml>=4.9.1 (from yfinance)
      Downloading lxml-5.1.1-cp39-cp39-macosx_10_9_x86_64.whl.metadata (3.5 kB)
    Collecting appdirs>=1.4.4 (from yfinance)
      Downloading appdirs-1.4.4-py2.py3-none-any.whl.metadata (9.0 kB)
    Requirement already satisfied: pytz>=2022.5 in ./venv/lib/python3.9/site-packages (from yfinance) (2022.7.1)
    Collecting frozendict>=2.3.4 (from yfinance)
      Downloading frozendict-2.4.0-cp39-cp39-macosx_10_9_x86_64.whl.metadata (23 kB)
    Collecting peewee>=3.16.2 (from yfinance)
      Downloading peewee-3.17.1.tar.gz (3.0 MB)
         3.0/3.0 MB 25.5 MB/s eta 0:00:00
      Installing build dependencies ... done
      Getting requirements to build wheel ... error

      error: subprocess-exited-with-error

      Getting requirements to build wheel did not run successfully.
      exit code: 1
      [32 lines of output]
      Traceback (most recent call last):
        File "/Users/kevin/PycharmProjects/test/venv/lib/python3.9/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 353, in <module>
          main()
        File "/Users/kevin/PycharmProjects/test/venv/lib/python3.9/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 335, in main
          json_out['return_val'] = hook(**hook_input['kwargs'])
        File "/Users/kevin/PycharmProjects/test/venv/lib/python3.9/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 118, in get_requires_for_build_wheel
          return hook(config_settings)
        File "/private/var/folders/6f/dx1v1ksd39732901t1pjp3dw0000gn/T/pip-build-env-yb2xybts/overlay/lib/python3.9/site-packages/setuptools/build_meta.py", line 325, in get_requires_for_build_wheel
          return self._get_build_requires(config_settings, requirements=['wheel'])
        File "/private/var/folders/6f/dx1v1ksd39732901t1pjp3dw0000gn/T/pip-build-env-yb2xybts/overlay/lib/python3.9/site-packages/setuptools/build_meta.py", line 295, in _get_build_requires
          self.run_setup()
        File "/private/var/folders/6f/dx1v1ksd39732901t1pjp3dw0000gn/T/pip-build-env-yb2xybts/overlay/lib/python3.9/site-packages/setuptools/build_meta.py", line 487, in run_setup
          super().run_setup(setup_script=setup_script)
        File "/private/var/folders/6f/dx1v1ksd39732901t1pjp3dw0000gn/T/pip-build-env-yb2xybts/overlay/lib/python3.9/site-packages/setuptools/build_meta.py", line 311, in run_setup
          exec(code, locals())
        File "<string>", line 109, in <module>
        File "<string>", line 86, in _have_sqlite_extension_support
        File "/private/var/folders/6f/dx1v1ksd39732901t1pjp3dw0000gn/T/pip-build-env-yb2xybts/overlay/lib/python3.9/site-packages/setuptools/_distutils/ccompiler.py", line 600, in compile
          self._compile(obj, src, ext, cc_args, extra_postargs, pp_opts)
        File "/private/var/folders/6f/dx1v1ksd39732901t1pjp3dw0000gn/T/pip-build-env-yb2xybts/overlay/lib/python3.9/site-packages/setuptools/_distutils/unixccompiler.py", line 185, in _compile
          self.spawn(compiler_so + cc_args + [src, '-o', obj] + extra_postargs)
        File "/private/var/folders/6f/dx1v1ksd39732901t1pjp3dw0000gn/T/pip-build-env-yb2xybts/overlay/lib/python3.9/site-packages/setuptools/_distutils/ccompiler.py", line 1041, in spawn
          spawn(cmd, dry_run=self.dry_run, **kwargs)
        File "/private/var/folders/6f/dx1v1ksd39732901t1pjp3dw0000gn/T/pip-build-env-yb2xybts/overlay/lib/python3.9/site-packages/setuptools/_distutils/spawn.py", line 57, in spawn
          proc = subprocess.Popen(cmd, env=env)
        File "/usr/local/Cellar/python@3.9/3.9.1_8/Frameworks/Python.framework/Versions/3.9/lib/python3.9/subprocess.py", line 947, in __init__
          self._execute_child(args, executable, preexec_fn, close_fds,
        File "/usr/local/Cellar/python@3.9/3.9.1_8/Frameworks/Python.framework/Versions/3.9/lib/python3.9/subprocess.py", line 1739, in _execute_child
          env_list.append(k + b'=' + os.fsencode(v))
        File "/usr/local/Cellar/python@3.9/3.9.1_8/Frameworks/Python.framework/Versions/3.9/lib/python3.9/os.py", line 810, in fsencode
          filename = fspath(filename)  # Does type-checking of filename.
      TypeError: expected str, bytes or os.PathLike object, not int
      [end of output]

      note: This error originates from a subprocess, and is likely not a problem with pip.
    error: subprocess-exited-with-error

    Getting requirements to build wheel did not run successfully.
    exit code: 1
    See above for output.

yfinance version: unable to install any yfinance version.
Python version: Python 3.9.1 on Mac.
I can install other libraries, but not yfinance.
I need some brainstorming on this issue. I have a list of URLs, all news articles, and I need to scrape the published date of each article. The problem is that only a few of the articles have the date in a dedicated date tag in the HTML; most of them have the date written in other tags. I am unable to find a generic approach to get the date from all the URLs because they all have dates written in different tags. A few examples of published dates in the articles:

    <div style="vertical-align: bottom; float:left; width:45%;"><b>As on: June 19, 2023 </b><br><br><br></div>

    <ul>
      <li>Updated Mar 14, 2024, 7:25 AM IST</li>
    </ul>

How can I identify these dates? One solution would be to use a regex to get dates, but it would fetch dates written inside the article as well — it cannot distinguish between the published date and some random date mentioned in the body.
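Rather than regexing the visible text, many news sites also embed the publish date in machine-readable metadata (Open Graph / schema.org meta tags or a `<time datetime="...">` element), which sidesteps the ambiguity with dates mentioned in the body. A hedged stdlib-only sketch — the key list is illustrative and would need extending per site:

```python
from html.parser import HTMLParser

# Metadata keys that news CMSes commonly use for the publish date.
# Illustrative, not exhaustive -- extend as you meet new sites.
META_KEYS = {"article:published_time", "og:published_time",
             "datePublished", "publish-date", "date", "dc.date"}

class _DateMetaFinder(HTMLParser):
    def __init__(self) -> None:
        super().__init__()
        self.found = None

    def handle_starttag(self, tag, attrs):
        if self.found:
            return
        a = dict(attrs)
        if tag == "meta" and (a.get("property") in META_KEYS
                              or a.get("name") in META_KEYS):
            self.found = a.get("content")
        elif tag == "time" and a.get("datetime"):
            self.found = a["datetime"]

def extract_published_date(html: str):
    """Return the first machine-readable publish date found, else None."""
    finder = _DateMetaFinder()
    finder.feed(html)
    return finder.found
```

For pages that carry no metadata at all, a fallback regex restricted to the header region of the page (before the article body) is usually less ambiguous than scanning the whole document.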
Scrape date from news articles
|python|web-scraping|
I am new to Linux and I intended to change my OS environment entirely to Linux distros. I chose Ubuntu and downloaded 22.04 (Jammy). Viber is the main communication channel at my work and I seriously need it. I have been trying for a few days to get it to work on my Ubuntu. Still, no hope. So, could you please help me with it? I am willing to try any way possible except reinstalling Windows on my PC. I just need to make it work. Thank you so much. **Things I have tried** I tried installing the "viber-unofficial" one from snap. The app opens up, but when I try to connect, it just says my phone number will be used as the ID; I get no authentication code and cannot continue at all. I reached out to Viber support and they responded that they had received no authentication code request from my server. However, they said they had reset it for me and to try again. I did, and nothing. I also tried the image installation (sorry, I don't really know about it in depth). I also tried installing Viber for Ubuntu from the official Viber website (the .deb). I could install it, but afterwards it doesn't even launch the app. I tried multiple times. As the next workaround, I tried using Wine with Viber.exe (yes, Viber for Windows) using Bottles. Nothing happened; I couldn't even install it. My Wine version was "wine-6.0.3". I have also been searching many sources and reading through many discussions, but most of them are at least a few years old. Some of the links they provided showed "error 404" and the problem remains unsolved.
I have to click on the `Accepter et continuer` button in this `div`. I copied the XPath of the element containing the button I want to click, but it does not work: <div class="disclaimer-buttons-place-holder"> <a class="btn btn-reverse" href="WEBSITE" data-event-zone="click" data-event="C" data-event-label="action::disclaimer::exit" data-event-type="A"> Refuser</a> <a class="btn" id="validateDisclaimer" href="/fr_part" data-event-zone="click" data-event="C" data-event-label="action::disclaimer::validate" data-event-type="A"> Accepter et continuer</a> </div> Do you know how to click exactly on the `Accepter et continuer` button using XPath? I tried this: ```python driver.find_element(By.XPATH, "/html/body/div[1]/div/aside/div[2]/div/div/div[3]/div[2]/a[2]").click() ``` Thank you so much!
I am trying to download a file on click of a notification button, but when the application is in the background the API call does not work. Notification code: func userNotificationCenter(_ center: UNUserNotificationCenter, didReceive response: UNNotificationResponse, withCompletionHandler completionHandler: @escaping () -> Void) { let userInfo = response.notification.request.content.userInfo as! [String: AnyObject] switch response.actionIdentifier { case "DownloadDutyPlanAction": print("Download tap") downloadDutyPlanFromNotification(fetchCompletionHandler: nil) completionHandler() case "CancelAction": print("Cancel tap") completionHandler() // note: every branch should call the completion handler default: completionHandler() } } Network class: private var task: URLSessionTask? private var session = URLSession.init(configuration: URLSessionConfiguration.default, delegate: NSURLSessionPinningDelegate(), delegateQueue: nil) func request(forDownloadUrl route: EndPoint, completion: @escaping NetworkDownloadRouterCompletion) { do { let request = try self.buildRequest(from: route) task = session.downloadTask(with: request) { localURL, response, error in if let tempLocation = localURL { completion(response, error, tempLocation) } else { completion(response, error, nil) } } } catch { completion(nil, error, nil) } self.task?.resume() }
Actionable notification API call not working in the background
|ios|swift|push-notification|actionable-notification|
You need to check if the list of values includes `this.value` : $(document).ready(function() { $("#range-slider").mousemove(function() { if (["1", "2", "3", "4", "5"].includes(this.value)) { $('.other-element').addClass('is--active'); } else { $('.other-element').removeClass('is--active'); } }); });
I recently started to use GetX and my architectural opinions changed. GetX called its state managers - `controllers`. Therefore, we have an obvious architectural pattern choice - **MVC**. Yes `GetControllers` are not exactly **Controller** in MVC, but who cares? Ergo, next folder structure: controller \ user_controller.dart todo_controller.dart model \ user.dart user_service.dart //firebase connection todo.dart todo_service.dart view \ user_screen.dart todo_screen.dart widgets \ custom_button.dart If we have more than 3 features in our app then controller \ feature1 \ feature1_controller.dart ... featureN \ model \ feature1 \ feature1.dart feature1_service.dart //firebase connection ... featureN \ view \ feature1 \ feature1_screen.dart ... featureN \ widgets \ custom_button.dart This approach is called layer-first. An alternative feature-first approach should never be used. If the app is too big for the layer-first approach, it should be split into several packages. All the above is my humble opinion, obviously. In software development almost everything is trade-off.
Odoo 16 Make Fields Readonly Using XPath
|python|xml|odoo-16|
I have observed an issue while using the **Hyperband** algorithm in **Optuna**. According to the Hyperband algorithm, when **min_resources** = 5, **max_resources** = 20, and **reduction_factor** = 2, the search should start with an **initial space of 4** models for bracket **1**, with each model receiving **5** epochs in the first round. Subsequently, the number of models is reduced by a factor of **2** in each round, and the initial search space should also be reduced by a factor of **2** for each following bracket, i.e., bracket **2** should have an initial search space of **2** models, and the number of epochs for the remaining models is doubled in each subsequent round. So a total of about **11** models is expected, but it is training far more models than that.

Link to the paper: https://arxiv.org/pdf/1603.06560.pdf

```
import optuna
import numpy as np
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Dropout

# Toy dataset generation
def generate_toy_dataset():
    np.random.seed(0)
    X_train = np.random.rand(100, 10)
    y_train = np.random.randint(0, 2, size=(100,))
    X_val = np.random.rand(20, 10)
    y_val = np.random.randint(0, 2, size=(20,))
    return X_train, y_train, X_val, y_val

X_train, y_train, X_val, y_val = generate_toy_dataset()

# Model building function
def build_model(trial):
    model = Sequential()
    model.add(Dense(units=trial.suggest_int('unit_input', 20, 30), activation='selu', input_shape=(X_train.shape[1],)))
    num_layers = trial.suggest_int('num_layers', 2, 3)
    for i in range(num_layers):
        units = trial.suggest_int(f'num_layer_{i}', 20, 30)
        activation = trial.suggest_categorical(f'activation_layer_{i}', ['relu', 'selu', 'tanh'])
        model.add(Dense(units=units, activation=activation))
        if trial.suggest_categorical(f'dropout_layer_{i}', [True, False]):
            model.add(Dropout(rate=0.5))
    model.add(Dense(1, activation='sigmoid'))
    optimizer_name = trial.suggest_categorical('optimizer', ['adam', 'rmsprop'])
    if optimizer_name == 'adam':
        optimizer = tf.keras.optimizers.Adam()
    else:
        optimizer = tf.keras.optimizers.RMSprop()
    model.compile(optimizer=optimizer, loss='binary_crossentropy',
                  metrics=['accuracy', tf.keras.metrics.AUC(name='val_auc')])
    return model

def objective(trial):
    model = build_model(trial)
    # Assuming you have your data prepared
    # Modify the fit method to include AUC metric
    history = model.fit(X_train, y_train, validation_data=(X_val, y_val), verbose=1)
    # Check if 'val_auc' is recorded
    auc_key = None
    for key in history.history.keys():
        if key.startswith('val_auc'):
            auc_key = key
            break
    if auc_key is None:
        raise ValueError("AUC metric not found in history. Make sure it's being recorded during training.")
    # Report validation AUC for each model
    for i, auc_value in enumerate(history.history[auc_key]):
        trial.report(auc_value, step=i)
        # Check for pruning based on the intermediate AUC value
        if trial.should_prune():
            raise optuna.TrialPruned()
    return max(history.history[auc_key])

# Optuna study creation
study = optuna.create_study(
    direction='maximize',
    pruner=optuna.pruners.HyperbandPruner(
        min_resource=5,
        max_resource=20,
        reduction_factor=2
    )
)

# Start optimization
study.optimize(objective)
```
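For comparison with the paper, the bracket sizes Hyperband prescribes can be computed directly; with R = max_resource/min_resource = 4 and eta = 2 this yields 4 + 3 + 3 = 10 starting configurations, close to the 11 expected above. A separate point worth checking: `study.optimize(objective)` with no `n_trials` or `timeout` keeps sampling new trials indefinitely — the pruner only stops trials early, it never caps how many are started — which would explain "lots of models". A sketch of the paper's formulas (my own helper, not an Optuna API):

```python
import math

def hyperband_plan(R, eta):
    """Bracket sizes from Hyperband (Li et al., 2016), with R expressed in
    units of min_resource (here R = 20 / 5 = 4 and eta = 2).

    Returns [(s, n_configs, r_start), ...] where r_start is the starting
    budget per configuration, again in units of min_resource.
    """
    s_max = 0
    while eta ** (s_max + 1) <= R:   # s_max = floor(log_eta(R)), integer-safe
        s_max += 1
    plan = []
    for s in range(s_max, -1, -1):
        # n = ceil((s_max + 1) * eta^s / (s + 1)) configurations start bracket s
        n = math.ceil((s_max + 1) * eta ** s / (s + 1))
        r = R / eta ** s             # each gets r * min_resource epochs at first
        plan.append((s, n, r))
    return plan
```

With these numbers, capping the study with `study.optimize(objective, n_trials=10)` (or 11) would match the paper's budget; without a cap, Optuna's sampler simply keeps proposing fresh configurations.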
When I fetch from a query on a single table, I get an associative array whose keys are the column names. But when I `JOIN` another table's columns in a query, I get an array which only contains the key `row` and a string value which looks like `(value1,value2,value3,...,valueN)`, without column names. How do I get associative arrays when I join data from 2+ tables? Example of my query: ``` 'SELECT (item.id, item.category_id, item_category.letter, item_category.name_eng) FROM item LEFT JOIN item_category ON item_category.letter = item.category_id WHERE item.id = :id' ``` I get this from that query: ``` Array ( [row] => (2314,"B","B","Name") ) ``` I would like to get: ``` Array ( [id] => 2314, [category_id] => "B", [letter] => "B", [name_eng] => "Name" ) ``` I've tried to explode the output string with a ',' separator, but that's a dumb idea because if the data contains ',' it gets split into different array elements.
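A likely culprit is the parentheses around the column list: in standard SQL (and PostgreSQL in particular) `(a, b, c)` in a `SELECT` list is a single row constructor, which is why everything collapses into one `row` key. Listing the columns without the parentheses should give one array key per column again — a sketch using the question's table and column names:

```sql
SELECT item.id,
       item.category_id,
       item_category.letter,
       item_category.name_eng
FROM item
LEFT JOIN item_category
       ON item_category.letter = item.category_id
WHERE item.id = :id
```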
PHP fetchAll on JOIN
|php|sql|join|fetchall|
You've used `struct Square` where you should have used `struct square` in the `Box` definition — C struct tags are case-sensitive, and no `struct Square` tag was ever declared. Note that `struct Box* next;` has the same problem: the tag is `box`, so `struct Box` silently introduces a new, unrelated incomplete type. Corrected: ```c typedef struct box { struct square** squares; /* array of Squares */ int numbers; int possible[9]; int solvable; struct box* next; /* lowercase tag here as well */ } Box; ``` Another option is to `typedef` `Square` before `Box` and use `Square` in `Box`: ```c typedef struct square Square; typedef struct box { Square** squares; /* array of Squares */ int numbers; int possible[9]; int solvable; struct box* next; } Box; ```
It shows "Server is started at port 5500" but does not pop up the web browser. I have tried every possible way to fix the problem, but nothing has helped; I have gone through many videos and tried their suggestions, but none worked. Can anyone help me? Any hints may help resolve the problem.
Live Server extension in VS Code runs in the background but does not open the web browser
|html|visual-studio-code|web|path|settings|
It would seem that a simple relationship exists between GPS and TG pages, given that some regions share numbering systems and some do not (Los Angeles + Orange Counties share, Riverside + San Bernardino Counties share but not with LA/OC, and so on). Since no one has an answer, I decided that I wanted one, so I made one.

Here's the trick: locate the very top-left of grid A1 of the first page in your particular county. Get the GPS for that point (I used Google Maps). Then calculate any two points on the same latitude line (it's helpful to find two streets on the same inner grid line), then two on the same longitude line. GPS both pairs, and calculate the distance (only the latitude distance for one pair, and only the longitude distance for the other pair). You will find that they are not the same (not a perfect square). I got:

    0.03° latitude  = 2.5 miles, or 0.01 miles per grid ↕
    0.03° longitude = 2.0 miles, or 0.01 miles per grid ↔

(Correct, they are not the same, and both values are heavily rounded.)

Now, from there, knowing each page is 9 x 7 grids {^[A-HJ][1-7]$}, do:

    page = (top-left/starting page) + ( 30 * int( y` / 7 ) ) + int( x` / 9 )

(because pages stack on top of each other vertically in LA/OC by jumps of 30)

    grid = 'A' + int( x` % 9 ), 1 + ( int( y` ) % 7 )

Validation: top of LA/OC = page OC 708-A1 (GPS point found).

Test location: Brea Mall (corner of Imperial Hwy & State College Blvd)
Map book look-up: OC 739-C1
Program says:

    [DEBUG] [MAIN] distance_x(0.1):distance_y(-0.05)
    [DEBUG] [MAIN] distance_y(-0.05):grid_y(0):y(33.9):y_(-7)
    [DEBUG] [MAIN] distance_x(0.09):grid_x(2):x(-117.8):x_(11)
    [DEBUG] [MAIN] base[](grid_x 0 grid_y 0 lat 33.9 lon -117.9 page 708 scale_lat 0.01 scale_lon 0.01)
    [DEBUG] [MAIN] grid_x(2):grid_y(0):page(739)
    GPS location "brea mall (imp@scb nw)" 33.9 -117.8 calculates to Thomas Guide grid: 739-C1

I'm rounding my numbers because ...
I don't want to give away EVERYTHING I worked hard to calculate by hand over the last 48 hours, but if you do the look-ups on your favorite map website, the numbers will be OBVIOUS! Just know: you need to do the math with at least 5 digits of precision for accuracy. The formula itself is obvious based on the grid layout of each page. Sadly, Rand McNally bought out and CLOSED Thomas Brothers Maps (replacing them with their own product), but some states require first responders to still carry the Guides by law (including California), so you can still find a few editions ... for DOUBLE the price from the old days. If you can find an old Digital Edition online (circa 2004~2007), AND the serials to install them, AND a virtual machine to run them (they malfunction on Vista, and were discontinued shortly thereafter), you can see the numbers yourself.
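The arithmetic above fits in a few lines of code. Every constant here (the base corner, the 0.01° grid, the 9x7 page, +30 per page row, page 708 as the A1 origin) is taken from the post's own rounded estimates, so treat this as an illustrative sketch rather than survey-grade mapping:

```python
def thomas_guide_ref(lat, lon, base_lat, base_lon, base_page,
                     grid_deg=0.01, cols=9, rows=7, row_jump=30):
    """Map a GPS point to a Thomas Guide 'page-grid' reference.

    base_lat/base_lon is the top-left corner of grid A1 of base_page.
    """
    x = round((lon - base_lon) / grid_deg)   # grids east of the base corner
    y = round((base_lat - lat) / grid_deg)   # grids south of the base corner
    page = base_page + row_jump * (y // rows) + (x // cols)
    letter = "ABCDEFGHJ"[x % cols]           # column letters skip "I"
    return f"{page}-{letter}{1 + y % rows}"
```

With the rounded coordinates in the post this reproduces the structure of the look-up (base corner maps to 708-A1, nine grids east advances one page, seven grids south jumps 30 pages), though the exact Brea Mall result depends on the unrounded constants the author withheld.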
For example, when the module includes a library that only works in the browser, a dynamic import can sometimes solve this issue. Take a look at the following example: import dynamic from 'next/dynamic' const DynamicComponentWithNoSSR = dynamic( () => import('../components/hello3'), { ssr: false } ) function Home() { return ( <div> <Header /> <DynamicComponentWithNoSSR /> <p>HOME PAGE is here!</p> </div> ) } export default Home Other useful references: - [Stack Overflow - Getting errors when rendering a component with dynamic in Next.js](https://stackoverflow.com/a/75370953/4756173) - [Next.js Docs - With no SSR](https://nextjs.org/docs/pages/building-your-application/optimizing/lazy-loading#with-no-ssr)
The training and validation accuracies are as follows:

    Epoch 1/5, Training Accuracy: 0.9442, Validation Accuracy: 0.7626, Time: 27.09 seconds
    Epoch 2/5, Training Accuracy: 0.9631, Validation Accuracy: 0.7518, Time: 28.14 seconds
    Epoch 3/5, Training Accuracy: 0.9757, Validation Accuracy: 0.7914, Time: 27.54 seconds
    Epoch 4/5, Training Accuracy: 0.9730, Validation Accuracy: 0.7698, Time: 27.30 seconds
    Epoch 5/5, Training Accuracy: 0.9865, Validation Accuracy: 0.7482, Time: 27.74 seconds

It is a text-based dataset of news headlines in English and Hindi, with 1680+ records. The model is multilingual BERT with the Adam optimizer.

[Snapshot of the graph](https://i.stack.imgur.com/ze8GU.jpg)

Is the model overfitting? If so, how can we improve it? We were expecting a steep rise in the training accuracy with the validation accuracy curve staying close to it; we are not sure whether this graph is optimal.
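A training accuracy near 0.99 against a flat ~0.75 validation curve is the classic overfitting signature, especially with only ~1680 records. One cheap guard is early stopping on the validation metric; the framework-free sketch below shows the logic (the patience value is illustrative — with Keras or Transformers you would use their built-in early-stopping callbacks instead):

```python
def early_stop_epoch(val_scores, patience=2):
    """Return the epoch at which training would stop: the first epoch where
    the validation score has not improved for `patience` epochs in a row."""
    best, since_improve = float("-inf"), 0
    for epoch, score in enumerate(val_scores, start=1):
        if score > best:
            best, since_improve = score, 0
        else:
            since_improve += 1
            if since_improve >= patience:
                return epoch          # stop here; best weights were earlier
    return len(val_scores)            # never triggered
```

Beyond early stopping, the usual levers for this gap are stronger regularization (dropout, weight decay), a smaller learning rate with fewer epochs, and above all more (or augmented) training data.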
The training accuracy and the validation accuracy curves are almost parallel to each other. Is the model overfitting?
|machine-learning|nlp|bert-language-model|
I have a list of classes/objects and I need to get a random item from the list. In a particular case, I need to take a random item from a specific range OR a specific index. Meaning it's not just Item item = List[Random.Next(2, 10)]; // take one random item between index 2 and index 9 (for example) but I'd need something like "the random value must be between 2 and 9 (as in the example) OR be 15". I just need to add the index 15 to the random range. Is there a simple way to do this without creating a new List that contains both the range items and the extra one? On the same note, is there a way to exclude some values from the base range (2, 9)?
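The question is C#-flavoured, but the idea is language-agnostic: materialise the candidate indices once (range plus extras, minus exclusions) and draw uniformly from that pool, rather than fighting `Random.Next`. A Python sketch of the logic:

```python
import random

def pick_index(start, stop, extras=(), exclude=()):
    """Uniformly pick an index from range(start, stop) plus `extras`,
    skipping anything in `exclude`. Builds the pool once instead of
    rejection-sampling."""
    banned = set(exclude)
    pool = [i for i in range(start, stop) if i not in banned]
    pool += [i for i in extras if i not in banned]
    return random.choice(pool)
```

In C# the same shape would be `Enumerable.Range(2, 8).Concat(new[] { 15 })` collected into a list, then `list[rng.Next(list.Count)]`; note this makes each index equally likely, whereas drawing "range or 15" in two steps would weight 15 differently.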
Random getting value from a range or a specific value
I wait until the page is loaded and use the "DOMContentLoaded" method to add eventhandlers on some p tags. This works until I open and close the modal, then the page/js freezes. Why does it freeze? ``` <html lang="en"> <head> <meta charset="UTF-8"> <meta name="viewport" content="width=device-width, initial-scale=1.0"> <title>Document</title> </head> <body> <div id="div_1"> <p>One</p> <p>Two</p> <p>Three</p> </div> <p id="open_modal">open the modal</p> <script> document.addEventListener("DOMContentLoaded", () => { let test = document.querySelector("#div_1") for (let i = 0; i < test.children.length; i++) { test.children[i].addEventListener("click", () => { console.log(test.children[i].innerHTML) }); }; }); document.querySelector("#open_modal").addEventListener("click", () => { if (!document.querySelector("#modal")) { document.body.innerHTML += ` <div id="modal" style="display: block; position: fixed; padding-top: 200px; left: 0; top: 0; width: 100%; height: 100%; background-color: rgba(0,0,0,0.4)"> <div id="modal_content"> <span id="modal_close">&times;</span> <p style="color: green;">Some text in the Modal..</p> </div> </div> `; document.querySelector("#modal_close").addEventListener("click", () => { document.querySelector("#modal").style.display = "none"; }); } else { document.querySelector("#modal").style.display = "block"; }; }); </script> </body> </html> ```
This is the pubspec.yaml. I defined the fonts that I am going to use, and I copied the font file to the fonts directory of my Android project.

```yaml
name: your_flutter_app
description: A new Flutter project

environment:
  sdk: ">=2.12.0 <3.0.0"

dependencies:
  flutter:
    sdk: flutter
  math_expressions: ^2.0.0 # Add this line to include the math_expressions package

flutter:
  fonts:
    - family: My-font
      fonts:
        - asset: fonts/Yarndings20Charted-regular.ttf
```

And this is my main.dart:

```
import 'package:flutter/material.dart';

void main() => runApp(MaterialApp(
      home: Scaffold(
        appBar: AppBar(
          title: Text('my First App'),
          centerTitle: true,
          backgroundColor: Colors.amber.shade100,
        ),
        body: Center(
          child: Text(
            'I am Haroon',
            style: TextStyle(
              fontSize: 20,
              color: Colors.green,
              fontFamily: 'My-font',
            ),
          ),
        ),
        floatingActionButton: FloatingActionButton(
          child: Text('click'),
          onPressed: () {
            // Something here;
          },
          backgroundColor: Colors.green,
        ),
      ),
    ));
```

I ran `pub get` and updated the dependencies, but neither worked!
I have a MySQL table similar to the one below:

| id | Source | Destination |
| -- | ------ | ----------- |
| 1  | BLR    | ATH         |
| 2  | ATH    | BLR         |
| 3  | BLR    | HKG         |
| 4  | HKG    | BLR         |
| 5  | DEL    | ATH         |
| 6  | ATH    | DEL         |
| 7  | DEL    | HKG         |
| 8  | HKG    | DEL         |
| 9  | BLR    | DEL         |
| 10 | DEL    | BLR         |
| 11 | BLR    | ATH         |
| 12 | ATH    | BLR         |
| 13 | DEL    | ATH         |
| 14 | ATH    | DEL         |

I want to generate the combination of source and destination based on unique city, along with the count, like:

| CITY | ATH | BLR | DEL | HKG |
| ---- | --- | --- | --- | --- |
| ATH  | 0   | 2   | 2   | 0   |
| BLR  | 2   | 0   | 1   | 1   |
| DEL  | 2   | 1   | 0   | 1   |
| HKG  | 0   | 1   | 1   | 0   |

Is it possible to generate the complete table using a SQL query? Thanks.

I'm not sure what this kind of combination is called in SQL. All my searches returned ways to generate unique combinations of source/destination, but not a table like this, where the row titles and the column titles are the same.

CSV for quick reference:

    id,source,destination
    1,BLR,ATH
    2,ATH,BLR
    3,BLR,HKG
    4,HKG,BLR
    5,DEL,ATH
    6,ATH,DEL
    7,DEL,HKG
    8,HKG,DEL
    9,BLR,DEL
    10,DEL,BLR
    11,BLR,ATH
    12,ATH,BLR
    13,DEL,ATH
    14,ATH,DEL
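This shape is usually called a pivot (or cross-tabulation). With a known set of cities it can be done with conditional aggregation — in MySQL, `SUM(condition)` counts the rows where the condition holds, since a boolean evaluates to 0 or 1 (the table name `flights` is assumed):

```sql
SELECT source AS city,
       SUM(destination = 'ATH') AS ATH,
       SUM(destination = 'BLR') AS BLR,
       SUM(destination = 'DEL') AS DEL,
       SUM(destination = 'HKG') AS HKG
FROM flights
GROUP BY source
ORDER BY source;
```

For an unknown or large city list, you would build this statement dynamically (e.g., assembling the `SUM(...)` columns from `SELECT DISTINCT destination` with `GROUP_CONCAT`) or pivot in the application layer instead.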
I came across a workaround that was suggested in an unofficial capacity by someone at AppDynamics during their local lab explorations. While this solution isn't officially supported by AppDynamics, it has proven effective for adjusting the log levels for both the Proxy and the Watchdog components within my AppDynamics setup. I'd like to share the steps involved, but please proceed with caution and understand that this is not a sanctioned solution. I recommend changing only the log4j2.xml file, because the proxy messages appear to be responsible for almost 99% of the log volume. Here's a summary of the steps: - **Proxy log level:** The `log4j2.xml` file controls this. You can find it within the appdynamics_bindeps module. For example, in my WSL setup, it's located at `/home/wsl/.pyenv/versions/3.11.6/lib/python3.11/site-packages/appdynamics_bindeps/proxy/conf/logging/log4j2.xml`. In the Docker image python:3.9, the path is `/usr/local/lib/python3.9/site-packages/appdynamics_bindeps/proxy/conf/logging/log4j2.xml`. Modify the level of the seven `<AsyncLogger>` items within the `<Loggers>` section to one of the following: debug, info, warn, error, or fatal. - **Watchdog log level:** This can be adjusted in the `proxy.py` file found within the appdynamics Python module. For example, in my WSL setup, it's located at `/home/wsl/.pyenv/versions/3.11.6/lib/python3.11/site-packages/appdynamics/scripts/pyagent/commands/proxy.py`. In the Docker image python:3.9, the path is `/usr/local/lib/python3.9/site-packages/appdynamics/scripts/pyagent/commands/proxy.py`. You will need to hardcode the log level in the `configure_proxy_logger` and `configure_watchdog_logger` functions by changing the `level` variable.
## My versions

```bash
$ pip freeze | grep appdynamics
appdynamics==24.2.0.6567
appdynamics-bindeps-linux-x64==24.2.0
appdynamics-proxysupport-linux-x64==11.68.3
```

## Original files

### log4j2.xml

```xml
<Loggers>
    <!-- Modify each <AsyncLogger> level as needed -->
    <AsyncLogger name="com.singularity" level="info" additivity="false">
        <AppenderRef ref="Default"/>
        <AppenderRef ref="RESTAppender"/>
        <AppenderRef ref="Console"/>
    </AsyncLogger>
</Loggers>
```

### proxy.py

```python
def configure_proxy_logger(debug):
    logger = logging.getLogger('appdynamics.proxy')
    level = logging.DEBUG if debug else logging.INFO
    pass

def configure_watchdog_logger(debug):
    logger = logging.getLogger('appdynamics.proxy')
    level = logging.DEBUG if debug else logging.INFO
    pass
```

## My script to wire environment variables into log4j2.xml and proxy.py

### update_appdynamics_log_level.sh

```bash
#!/bin/sh

# Check if PYENV_ROOT is not set
if [ -z "$PYENV_ROOT" ]; then
    # If PYENV_ROOT is not set, then set it to the default value
    export PYENV_ROOT="/usr/local/lib"
    echo "PYENV_ROOT was not set. Setting it to default: $PYENV_ROOT"
else
    echo "PYENV_ROOT is already set to: $PYENV_ROOT"
fi

echo "=========================== log4j2 - appdynamics_bindeps module ========================="

# Find the appdynamics_bindeps directory
APP_APPD_BINDEPS_DIR=$(find "$PYENV_ROOT" -type d -name "appdynamics_bindeps" -print -quit)
if [ -z "$APP_APPD_BINDEPS_DIR" ]; then
    echo "Error: appdynamics_bindeps directory not found."
    exit 1
fi
echo "Found appdynamics_bindeps directory at $APP_APPD_BINDEPS_DIR"

# Find the log4j2.xml file within the appdynamics_bindeps directory
APP_LOG4J2_FILE=$(find "$APP_APPD_BINDEPS_DIR" -type f -name "log4j2.xml" -print -quit)
if [ -z "$APP_LOG4J2_FILE" ]; then
    echo "Error: log4j2.xml file not found within the appdynamics_bindeps directory."
    exit 1
fi
echo "Found log4j2.xml file at $APP_LOG4J2_FILE"

# Modify the log level in the log4j2.xml file
echo "Modifying log level in log4j2.xml file"
sed -i 's/level="info"/level="${env:APP_APPD_LOG4J2_LOG_LEVEL:-info}"/g' "$APP_LOG4J2_FILE"
echo "log4j2.xml file modified successfully."

echo "=========================== watchdog - appdynamics module ==============================="

# Find the appdynamics directory
APP_APPD_DIR=$(find "$PYENV_ROOT" -type d -name "appdynamics" -print -quit)
if [ -z "$APP_APPD_DIR" ]; then
    echo "Error: appdynamics directory not found."
    exit 1
fi
echo "Found appdynamics directory at $APP_APPD_DIR"

# Find the proxy.py file within the appdynamics directory
APP_PROXY_PY_FILE=$(find "$APP_APPD_DIR" -type f -name "proxy.py" -print -quit)
if [ -z "$APP_PROXY_PY_FILE" ]; then
    echo "Error: proxy.py file not found within the appdynamics directory."
    exit 1
fi
echo "Found proxy.py file at $APP_PROXY_PY_FILE"

# Modify the log level in the proxy.py file
echo "Modifying log level in proxy.py file"
sed -i 's/logging.DEBUG if debug else logging.INFO/os.getenv("APP_APPD_WATCHDOG_LOG_LEVEL", "info").upper()/g' "$APP_PROXY_PY_FILE"
echo "proxy.py file modified successfully."
```

## Dockerfile

Dockerfile to run pyagent with FastAPI and run this script:

```dockerfile
# Use a specific version of the python image
FROM python:3.9

# Set the working directory in the container
WORKDIR /app

# First, copy only the requirements file and install dependencies to leverage Docker cache
COPY requirements.txt ./
RUN python3 -m pip install --no-cache-dir -r requirements.txt

# Now copy the rest of the application to the container
COPY . .

# Make the update_appdynamics_log_level.sh executable and run it
RUN chmod +x update_appdynamics_log_level.sh && \
    ./update_appdynamics_log_level.sh

# Set environment variables
ENV APP_APPD_LOG4J2_LOG_LEVEL="warn" \
    APP_APPD_WATCHDOG_LOG_LEVEL="warn"

EXPOSE 8000

# Command to run the FastAPI application with pyagent
CMD ["pyagent", "run", "uvicorn", "main:app", "--proxy-headers", "--host", "0.0.0.0", "--port", "8000"]
```

## Files changed by the script

### log4j2.xml

```xml
<Loggers>
    <!-- Modify each <AsyncLogger> level as needed -->
    <AsyncLogger name="com.singularity" level="${env:APP_APPD_LOG4J2_LOG_LEVEL:-info}" additivity="false">
        <AppenderRef ref="Default"/>
        <AppenderRef ref="RESTAppender"/>
        <AppenderRef ref="Console"/>
    </AsyncLogger>
</Loggers>
```

### proxy.py

```python
def configure_proxy_logger(debug):
    logger = logging.getLogger('appdynamics.proxy')
    level = os.getenv("APP_APPD_WATCHDOG_LOG_LEVEL", "info").upper()
    pass

def configure_watchdog_logger(debug):
    logger = logging.getLogger('appdynamics.proxy')
    level = os.getenv("APP_APPD_WATCHDOG_LOG_LEVEL", "info").upper()
    pass
```

## Warning

Please note, these paths and methods may vary based on your AppDynamics version and environment setup. Always back up files before making changes, and be aware that updates to AppDynamics may overwrite your customizations.

I hope this helps!
I'm trying to use an Android native module on Flutter web (via WASM). I created an Android module and imported it in the Flutter app. It works in the Android app but not in the browser.

`android/app/src/main/kotlin/com/example/flutter_application_1/MainActivity.kt`

```kotlin
package com.example.flutter_application_1

import kotlin.random.Random
import androidx.annotation.NonNull
import io.flutter.embedding.android.FlutterActivity
import io.flutter.embedding.engine.FlutterEngine
import io.flutter.plugin.common.MethodChannel

class MainActivity: FlutterActivity() {
    override fun configureFlutterEngine(@NonNull flutterEngine: FlutterEngine) {
        super.configureFlutterEngine(flutterEngine)
        MethodChannel(flutterEngine.dartExecutor.binaryMessenger, "example.com/channel").setMethodCallHandler { call, result ->
            if (call.method == "getRandomNumber") {
                val rand = Random.nextInt(100)
                result.success(rand)
            } else {
                result.notImplemented()
            }
        }
    }
}
```

`lib/main.dart`

```dart
import 'package:flutter/services.dart';
import 'package:flutter/material.dart';

void main() {
  runApp(const MyApp());
}

class MyApp extends StatelessWidget {
  const MyApp({super.key});

  @override
  Widget build(BuildContext context) {
    return MaterialApp(
      title: 'Flutter Demo',
      theme: ThemeData(
        primarySwatch: Colors.blue,
      ),
      home: const MyHomePage(title: 'Flutter Demo Home Page'),
    );
  }
}

class MyHomePage extends StatefulWidget {
  const MyHomePage({super.key, required this.title});

  final String title;

  @override
  State<MyHomePage> createState() => _MyHomePageState();
}

class _MyHomePageState extends State<MyHomePage> {
  int _counter = 0;
  static const platform = MethodChannel('example.com/channel');

  Future<void> _generateRandomNumber() async {
    int random;
    try {
      random = await platform.invokeMethod('getRandomNumber');
    } on PlatformException catch (e) {
      random = 0;
    }
    setState(() {
      _counter = random;
    });
  }

  @override
  Widget build(BuildContext context) {
    return Scaffold(
      appBar: AppBar(
        title: Text(widget.title),
      ),
      body: Center(
        child: Column(
          mainAxisAlignment: MainAxisAlignment.center,
          children: <Widget>[
            const Text(
              'Kotlin generates the following number:',
            ),
            Text(
              '$_counter',
              style: Theme.of(context).textTheme.headlineLarge,
            ),
          ],
        ),
      ),
      floatingActionButton: FloatingActionButton(
        onPressed: _generateRandomNumber,
        tooltip: 'Generate',
        child: const Icon(Icons.refresh),
      ),
    );
  }
}
```

In the browser, this error occurred continuously, but not on mobile (tested on Android, not on iOS):

[![enter image description here][1]][1]

[1]: https://i.stack.imgur.com/oKwXp.png

I wonder whether Flutter WASM supports native module importing yet.
Viber is not working on Ubuntu 22.04 Jammy
|linux|ubuntu|ubuntu-22.04|wine|viber|
I am making an AR app using the Vuforia engine, and my project needs a greater FOV to track images. I figured that if I could access my Android phone's wide-angle camera instead of the low-resolution one that Vuforia accesses by default, my goal would be accomplished. So is there any way I can access the wide-angle camera? Currently, Vuforia uses the lowest-resolution camera by default for image tracking, and I need to access the wide-angle camera.
Looking to access wide angle camera of smartphone while making an AR application using Vuforia
|android|augmented-reality|virtual-reality|vuforia|
When I enter long text into my TextFormField A and jump to another widget (like another TextFormField), my TextFormField A keeps the cursor position at the end of the long text. I want the text to appear from the starting position when I unfocus the widget.

**Situation:** email field unfocused

[![enter image description here][1]][1]

**What I want:**

[![enter image description here][2]][2]

I've tried

    textEditingController.selection = TextSelection.collapsed(offset: 0);

but it takes effect only when I focus the widget again, not at the moment I unfocus it (which is what I want).

**UPDATE**: this is my code:

```dart
class MyHomePage extends StatefulWidget {
  const MyHomePage({super.key});

  @override
  State<MyHomePage> createState() => _MyHomePageState();
}

class _MyHomePageState extends State<MyHomePage> {
  TextEditingController textEditingController1 = TextEditingController();
  TextEditingController textEditingController2 = TextEditingController();
  FocusNode focusNode1 = FocusNode();
  FocusNode focusNode2 = FocusNode();

  @override
  void dispose() {
    textEditingController1.dispose();
    textEditingController2.dispose();
    focusNode1.dispose();
    focusNode2.dispose();
    super.dispose();
  }

  @override
  Widget build(BuildContext context) {
    return Material(
      child: Column(
        children: [
          SizedBox(
            width: 100,
            child: TextField(
              focusNode: focusNode1,
              onEditingComplete: () {
                textEditingController1.selection =
                    TextSelection.collapsed(offset: 0);
                focusNode1.unfocus();
                FocusScope.of(context).requestFocus(focusNode2);
              },
              controller: textEditingController1,
              textInputAction: TextInputAction.next,
            ),
          ),
          SizedBox(
            width: 100,
            child: TextField(
              focusNode: focusNode2,
              onEditingComplete: () {
                textEditingController2.selection =
                    TextSelection.collapsed(offset: 0);
                focusNode2.unfocus();
                FocusScope.of(context).requestFocus(focusNode1);
              },
              controller: textEditingController2,
              textInputAction: TextInputAction.next,
            ),
          ),
        ],
      ),
    );
  }
}
```

[1]: https://i.stack.imgur.com/iAUS4.png
[2]: https://i.stack.imgur.com/5FnH4.png
I am trying to download a file on click of a notification button, but when the application is in the background, the API call is not working.

Notification code:

```swift
func userNotificationCenter(_ center: UNUserNotificationCenter,
                            didReceive response: UNNotificationResponse,
                            withCompletionHandler completionHandler: @escaping () -> Void) {
    let userInfo = response.notification.request.content.userInfo as! [String: AnyObject]
    switch response.actionIdentifier {
    case "DownloadDutyPlanAction":
        print("Download tap")
        downloadDutyPlanFromNotification(fetchCompletionHandler: nil)
        completionHandler()
    case "CancelAction":
        print("Cancel tap")
    default:
        completionHandler()
    }
}
```

Network class:

```swift
private var task: URLSessionTask?
private var session = URLSession.init(configuration: URLSessionConfiguration.default,
                                      delegate: NSURLSessionPinningDelegate(),
                                      delegateQueue: nil)

func request(forDownloadUrl route: EndPoint, completion: @escaping NetworkDownloadRouterCompletion) {
    do {
        let request = try self.buildRequest(from: route)
        task = session.downloadTask(with: request) { localURL, response, error in
            if let tempLocation = localURL {
                completion(response, error, tempLocation)
            } else {
                completion(response, error, nil)
            }
        }
    } catch {
        completion(nil, error, nil)
    }
    self.task?.resume()
}
```
Check the version of your llama_index. If you are using the current version, use:

    from llama_index.core.node_parser import SimpleNodeParser
`publishDir` works by emitting items in the process `output` declaration to the path provided. You haven't provided an output declaration for either process, so Nextflow doesn't think there is anything to publish.

Also, unless you're using it for checkpointing, you don't need the output from `FastQC` for `Trimmomatic`; you can have the two processes run in parallel.

Don't use `joinPath` or any absolute path in your processes. That's not what Nextflow is designed for, and it will often lead to errors. Plus, by putting an absolute path in the output declaration, you're telling the process to look in the output directory for the file generated in the process. Use `publishDir` to emit files.

The `file` qualifier is deprecated. Use `path` instead.

The [documentation](https://www.nextflow.io/docs/latest/getstarted.html) for Nextflow is amazing. It's a steep learning curve, but it's very good at describing how things work.

So here is an updated script (Trimmomatic is run in `SE` mode here because the channel emits single files; for paired-end data you would build the channel with `Channel.fromFilePairs` and use `PE` mode with two inputs and four outputs):

```
process FastQC {
    tag "Running FastQC on ${fastq}"
    publishDir "${params.fastqc_dir}/${fastq.baseName}", mode: 'move'

    input:
    path fastq

    output:
    path("*.html")

    script:
    """
    fastqc ${fastq}
    """
}

process Trimmomatic {
    tag "Trimming ${fastq.baseName}"
    publishDir "${params.trimmed_dir}", mode: 'copy'

    input:
    path fastq

    output:
    path("*_trimmed.fastq.gz")

    script:
    """
    java -jar ${params.trimmomatic_jar} SE -threads 4 \
        ${fastq} ${fastq.baseName}_trimmed.fastq.gz
    """
}
```

In the workflow, you shouldn't need to tell the processes to iterate over each element; this is the default behaviour of the tool. I've added some commands to the channel generation to highlight some redundancy you can add:

```
Channel
    .fromPath("${params.fastq_dir}/*{.fastq.gz,.fq.gz,.fastq,.fq}")
    .ifEmpty { error "Cannot find any fastq files in ${params.fastq_dir}" }
    .set { fastq_files }

workflow {
    FastQC(fastq_files)
    Trimmomatic(fastq_files)
}
```
|macos|yarnpkg|yarn-v2|
This code compiles and runs and works fine as a hash for an array of ints:

    def hashValue(a: List[int]) -> int:
        return tuple(a)

But I don't understand what's going on under the hood. Does Python maintain an internal hash of the immutable tuple?
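A quick demonstration of the behaviour in question (my own sketch, not from the original post): `hash()` computes a tuple's hash from its elements on demand, so two equal tuples, built from equal lists, always hash identically and collide onto the same dict key, while the lists themselves are unhashable:

```python
def hash_value(a):
    # Like the question's function: returns the tuple itself (usable as a
    # dict key), not an int; hash(tuple(a)) would give the actual int hash.
    return tuple(a)

key1 = hash_value([1, 2, 3])
key2 = hash_value([1, 2, 3])

assert key1 == key2
assert hash(key1) == hash(key2)   # equal objects hash equal

d = {key1: "first"}
d[key2] = "second"                # same key: overwrites, no new entry
assert len(d) == 1

# Lists are mutable and therefore unhashable:
try:
    hash([1, 2, 3])
except TypeError:
    print("lists are not hashable")
```

Note the return type annotation `-> int` in the question is misleading: the function returns a `tuple`, which merely happens to be hashable, so it works wherever a dict/set key is needed.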
I find it to be similar to Java, but with more C-style functionality, which I guess makes it better, since you can have [unions](https://stackoverflow.com/a/126807/16217248) and pointers (though pointers can only be used in `unsafe` code) — both are a staple in most of my programming.
I am trying to set up a set of computers to connect over an ad-hoc network, or a mesh (on top of ad-hoc) using batman-adv. The ad-hoc network is able to establish a station, but I can't ping one computer from another. When using batman-adv to create a mesh on top of the ad-hoc network, the bat0 device never seems to have any traffic.

Here is the setup for two computers:

# Computer 1:

```
# /etc/config/wireless
config wifi-device 'radio0'
        option type 'mac80211'
        option path 'platform/ahb/18100000.wmac'
        option channel '5'
        option band '2g'
        option htmode 'HT20'
        option cell_density '0'
        option hwmode '11n'

config wifi-iface 'wifinet0'
        option device 'radio0'
        option network 'bat0'
        option mode 'adhoc'
        option ssid 'spiri-mesh'
        option bssid '02:CA:FF:EE:BA:BE'
        option mcast_rate '12000'

# /etc/config/network
config interface 'loopback'
        option device 'lo'
        option proto 'static'
        option ipaddr '127.0.0.1'
        option netmask '255.0.0.0'

config globals 'globals'
        option ula_prefix 'fd6b:8d01:7c47::/48'

config interface 'lan'
        option proto 'static'
        option ipaddr '192.168.1.2'
        option netmask '255.255.255.0'
        option gateway '192.168.1.1'
        option device 'br-lan'

config device
        option name 'br-lan'
        option type 'bridge'
        list ports 'eth0'

config interface 'bat0'
        option proto 'batadv'
        option routing_algo 'BATMAN_IV'
        option aggregated_ogms '1'
        option ap_isolation '0'
        option bonding '0'
        option fragmentation '1'
        option gw_mode 'off'
        option log_level '0'
        option orig_interval '1000'
        option bridge_loop_avoidance '1'
        option distributed_arp_table '1'
        option multicast_mode '1'

config interface 'mesh'
        option proto 'static'
        option ipaddr '192.168.2.1'
        option netmask '255.255.255.0'
        option device 'bat0'

config device
        option name 'bat0'
        option ipv6 '0'
```

# Computer 2:

```
# /etc/config/wireless
config wifi-device 'radio0'
        option type 'mac80211'
        option path 'platform/ahb/18100000.wmac'
        option channel '5'
        option band '2g'
        option htmode 'HT20'
        option cell_density '0'
        option hwmode '11n'

config wifi-iface 'wifinet0'
        option device 'radio0'
        option network 'bat0'
        option mode 'adhoc'
        option ssid 'spiri-mesh'
        option bssid '02:CA:FF:EE:BA:BE'
        option mcast_rate '12000'

# /etc/config/network
config interface 'loopback'
        option device 'lo'
        option proto 'static'
        option ipaddr '127.0.0.1'
        option netmask '255.0.0.0'

config globals 'globals'
        option ula_prefix 'fd6b:8d01:7c47::/48'

config interface 'lan'
        option proto 'static'
        option ipaddr '192.168.1.3'
        option netmask '255.255.255.0'
        option gateway '192.168.1.1'
        option device 'br-lan'

config device
        option name 'br-lan'
        option type 'bridge'
        list ports 'eth0'

config interface 'bat0'
        option proto 'batadv'
        option routing_algo 'BATMAN_IV'
        option aggregated_ogms '1'
        option ap_isolation '0'
        option bonding '0'
        option fragmentation '1'
        option gw_mode 'off'
        option log_level '0'
        option orig_interval '1000'
        option bridge_loop_avoidance '1'
        option distributed_arp_table '1'
        option multicast_mode '1'

config interface 'mesh'
        option proto 'static'
        option ipaddr '192.168.2.2'
        option netmask '255.255.255.0'
        option device 'bat0'

config device
        option name 'bat0'
        option ipv6 '0'
```

After setting up the config, I am using the following to start services (note: my wireless interface name shows up as phy0-ibss0):

```
# Load batman-adv module
modprobe batman-adv

# Replace phy0-ibss0 with your wireless interface name (use 'iw dev' to find it)
batctl if add phy0-ibss0

# Activate the interfaces
ifconfig phy0-ibss0 up
ifconfig bat0 up

# Assign IP address to bat0 interface (use different IPs for each device)
ifconfig bat0 192.168.2.1 netmask 255.255.255.0   # Use 192.168.2.2 for the other device
```

What is going wrong here? I can't ping one computer from the other.
Code:

```java
@NoRepositoryBean
public interface AbstractRepository<M extends AbstractModel> extends MongoRepository<M, String> {
    Page<M> findAll(Pageable pageable);
    Page<M> findAllByIsActive(boolean isActive, Pageable pageable);
}
```

There is also another warning on the `findAll` parameter:

> Missing non-null annotation: inherited method from PagingAndSortingRepository<M,String> specifies this parameter as @NonNull

What's wrong?
The return type is incompatible with '@NonNull Page<M>' returned from PagingAndSortingRepository<M,String>.findAll(Pageable)
|java|spring|spring-data-jpa|non-nullable|pageable|
# 2024 March 30th > **Maximum layer size** = 52,000 mebibytes (54,525 megabytes) > > https://docs.aws.amazon.com/AmazonECR/latest/userguide/service-quotas.html
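As a quick sanity check on the unit conversion quoted above (plain arithmetic, not from the docs: 1 MiB = 2^20 bytes, 1 MB = 10^6 bytes):

```python
# 52,000 MiB expressed in megabytes: MiB -> bytes -> MB
layer_limit_mib = 52_000
layer_limit_bytes = layer_limit_mib * 2**20   # 54,525,952,000 bytes
layer_limit_mb = layer_limit_bytes / 10**6
print(round(layer_limit_mb))                  # prints 54526
```

So 52,000 MiB is 54,525.952 MB, which the quota page rounds down to 54,525 MB.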
macOS: Command not found: corepack when installing Yarn on Node v17.0.1