Document
stringlengths
395
24.5k
Source
stringclasses
6 values
I am honestly out of ideas.. Games keep crashing, Have tried everything. I have ; >Reinstalled windows on 4 different hard drives ( SSD , M2 , WD1tb, WD4tb ) , problem is still there. >Updated every single drivers >Tried rolling back my gpu drivers to a older version >Bought a new motherboard (Msi z270 M7) >Bought new RAM (corsair dominator 2x16gb ddr4 3000) >Tried using my friend's graphic card but still crash >Ran memtest + windows memory tool >passed >Ran multiple GPU test > passed >Ran Intel cpu processor tool > Passed >Uninstalling all programs that could be causing problems(discord,antivirus,ect) >All temps are normal As you can see, i have tried pretty much everything that i could think of. Csgo,overwatch,world of warcraft, keeps crashing at random intervals (1min-10mins) Sometimes i get some random blue screen All error i seem to get are related to RAM problems Blue screen error : ( stopcode PAGE_FAULT_IN_NONPAGE_AREA) World of warcraft error : (The reference at :0x00000f_____" referenced memory at "0x000000f____" The memory couldnt be executed" Overwatch error : Overwatch has crashed in the graphics driver Csgo error : no error ,the game just randomly crash to desktop with no codes or anyting. If i need to buy a new cpu or powersupply it is not a problem ,but considering i spent 1000$ on parts this week id like to be sure that the next part will fix my problem Cpu : i7 6700k Cpu cooler: corsair h110i gtx motherboard: msi z270 m7 (brand new) ram: 32gb dominator ddr4 3000 (brand new) gpu: EVGA gtx 1080 FTW psu: evga 850 gold+ drives: m2 250gb / 512gb samsung evo ssd / 4tb+1tb black WD os: windows 10 home thank you very much for your help, i am sincerly lost. edit: heres a picture of the WoW error i get, maybe it will help. https://gyazo.com/216ac2a2238bc27655199c7fbf7f07c4 ps: (Keep in mind that i just purchased new RAM and motherboard! (both installed today). Same problem Edited by jagoqc, 17 February 2017 - 03:32 PM.
OPCFW_CODE
Helping companies to adopt Google, AWS and Azure Cloud! Senior Cloud Architect, US (Remote) Our Core Values They are few, but we feel strongly about them. From creating processes to decision-making and recruiting, we build our five core values into nearly every single thing we do. - Independence. Others can trust that you’ll deliver on time and your teammates don’t need to worry about you keeping your word. - Mastery. You love what you do and care deeply about the quality of your work, down to the smallest details. - Communication. Your communication is clear, concise, and engaging whether you're explaining a complex idea or providing feedback. - Ambition. You set high standards for yourself and those around you. The time you spend on work isn’t measured by quantity, but by quality. - Impact. You're able to see the "big picture" and then solve issues that have a high impact on our customers, our team, and our company. Customer Reliability Engineering is what you get when you treat operations and data as if it’s a software problem. Our mission is to architect, build, secure and advise for our customer's software and systems, with a focus on availability, latency, performance, and capacity. This is an unusual job, unlike others in the industry. Like traditional operations groups, we design, build and help our digital-native customers to keep important, revenue-critical systems up and running despite downtimes, traffic outages, and configuration problems. Unlike traditional operations groups, we often have an ability and authority to fix, extend, and scale the systems to keep it working and harden it against all the vagaries of the internet. We hire people from both systems and software backgrounds. Strong candidates will have experience with both. This is a hands-on technical expert role with a high potential for learning new things and creating new experiences. If you are a positive-thinking, versatile technical leader who has that kind of i-want-to-know-everything drive, and you thrive in a fast-paced, startup-like environment, we want you on-board with our all-star winning team. CRE's culture of diversity, intellectual curiosity, problem solving and openness is key to its success. Our organization brings together people with a wide variety of backgrounds, experiences and perspectives. We encourage them to collaborate, think big and take risks in a blame-free environment. We promote self-direction to work on meaningful projects, while we also strive to create an environment that provides the support and mentorship needed to learn and grow. This is a remote job. You’re free to work where you work best: home office, co-working space, coffeeshops. However, given our existing team, we're specifically looking for someone from the Eastern, Central or Western US. We also need a reasonable overlap during a normal workday with the PT timezone, so we are not considering candidates from, say, India or Australia for this role. Target Locations: Remote anywhere in the US - Act as a trusted technical advisor to customers and solve complex infrastructure challenges - Create and deliver best practices recommendations, tutorials, blog articles, sample code, and technical presentations - Communicate effectively via video conferencing for meetings and technical reviews - Experience with Linux internals and administration (e.g., filesystems, inodes, syscalls) or networking (e.g., routing, topologies) - Expertise in designing, analyzing and troubleshooting large-scale distributed systems - 2+ years of production Kubernetes experience - Systematic problem-solving approach, coupled with strong communication skills and a sense of ownership and drive - Ability to debug and optimize code and automate routine tasks Benefits and Perks: - Health Insurance including Medical, Dental, and Vision plans from top carriers - 401k matching - Employee Stock Option Plan - Maternity and Paternity Leave - Uncapped PTO - Flexible working options - Work life balance - Professional Development, including certifications Medical, Dental and vision (Depending on location) Dog friendly offices DoiT International at a glance DoiT International focuses on Cloud Computing, Consulting, Cloud Management, Cloud Data Services, and Cloud Infrastructure. Their company has offices in New York City, Austin, London, Santa Monica, and Melbourne. They have a mid-size team that's between 51-200 employees. To date, DoiT International has raised $100M of funding; their latest round was closed on October 2019.
OPCFW_CODE
Generating shapes based on child items in a view or a service In a Visio drawing linked to a view or a service, you can specify that a shape should be created automatically for every child item of that view or service. To implement this feature, do the following: Create the shapes representing the child items, and add the necessary child-level shape data fields to them. See Child-level shape data. Either create one general shape for all types of children (element, services, redundancy groups and views) or four different shapes (one for each type of child: element, service, redundancy group or view). Group those shapes, and add the necessary group-level shape data fields to the group. See Group-level shape data. For an example, see Ziine > Visual Overview Design Examples view > [children > ELEMENTS AND VIEWS] page. Child-level shape data The following shape data fields can be added to a shape that has to represent a particular type of child item. ChildMargin: In this optional shape data field, you can specify the space between the different child items within the container shape. - A space relative to the width of the shape representing a child item, or - A fixed space (in pixels). If the child shapes have a width of 100 px, the space between the child shapes will be 5 px (a twentieth of 100 px). The space between the child shapes will always be 5 px. ChildType: In this mandatory shape data field, specify the type of child item the shape has to represent: view, element, service or redundancy group. Value Description Element The shape will be used to represent an element. Service The shape will be used to represent a service. View The shape will be used to represent a view. RedundancyGroup The shape will be used to represent a redundancy group. Example: All child items of type "Element" will be represented by the shape of which ChildType is set to "Element". If you have set the shape data field of type ChildType to "Element", "Service" or "View", you can add an additional shape data field of type ChildrenFilter to indicate that you want that shape only to be used to represent elements, services or views that match a specific filter: Elements using a particular protocol or protocol version. Elements, services or views with a property set to a value matching a regular expression. Elements, services or views with a specific alarm severity. If a shape should only be used to represent elements using a particular protocol, add a shape data field to it of type ChildrenFilter, and set its value to "Protocol:xxx" or "Protocol:xxx/yyy". - xxx = the protocol name - yyy = the protocol version If a shape should only be used to represent elements if these have a property with a value matching a particular regular expression, add a shape data field to it of type ChildrenFilter, and set its value to "Property:PropertyName=ValueRegex". A property containing a space should be placed between double quotation marks. If there is a shape without protocol filter, that shape will be used to represent elements that do not match any of the other protocol filters you may have specified in other shapes. You can combine several filters using pipe characters ("|"), in which case all filters will need to match for a shape to be shown: From DataMiner 9.0.5 onwards, it is possible to only have shapes generated for child objects with a specific alarm severity. To do so, specify a comma-separated list of alarm severities as the value of the ChildrenFilter shape data field. For example, if you specify "AlarmSeverity:Critical,Major", shapes will only be generated for child object of which the alarm severity is either "Critical" or "Major". Shape data field Value ChildrenFilter AlarmSeverity:Critical,Major The Timeout alarm severity is currently not supported in the ChildrenFilter field. Using placeholders such as "[var:]" and "[param:]" within ChildrenFilter shape data is supported from DataMiner 9.6.4 onwards. This can for instance be used to filter the child shapes using a session variable in the filter value. From DataMiner 10.2.0/10.1.2 onwards, you can filter service, view and element children by name, by specifying a regular expression in the following format in the shape data: Name=Regex. For example, "Name=[var:userSpecifiedName]". Only objects of which the name matches the regular expression will be shown. From DataMiner 10.2.0/10.1.10 onwards, you can filter service children based on whether they are mapped resources, unmapped resources, or resources inherited from a resource pool. To do so, add a data field of type ChildrenFilter and set its value to "ResourceMapping=", followed by one or more roles (separated by commas): "mapped", "unmapped" or "inheritance". If you specify multiple roles, all shapes of which the roles match one of the specified roles will be shown. For example: Shape data field Value ChildrenFilter ResourceMapping=mapped,unmapped,inheritance Group-level shape data The following shape data fields can be added to the group containing the shapes that have to represent the different child items. Children: In this mandatory shape data field, specify the type of child items that have to be generated: "View", "Element", "Service" and/or "RedundancyGroup". In case of multiple types, separate them by pipes. Generate a shape for every view, element and service in the view or service to which the drawing is linked. Generate a shape for every element in the view or service to which the drawing is linked. - By default, a Children shape always shows the child items of the view or service to which the Visio drawing is linked. If you want a Children shape to show the child items of a specific view or service, then add a shape data field of type View or Element to that same shape. In that field, you can then explicitly specify the view or the service of which the shape has to show all child items. - A Children shape can contain another Children shape. That way, you can dynamically generate e.g. shapes that represent all subviews in a view, as well as shapes that represent all items in those subviews. - With the DataMiner Cube user setting Maximum number of child shapes in a 'Children' container shape, you can control the maximum number of Visio shapes allowed in a Children container shape. Default: 100. See Visual Overview settings. ChildrenOptions: In this optional shape data field, you can specify the following options: Value Description LazyLoading If the child shapes will be generated in a scrollable container shape (stack panel, wrap panel, etc.), use this option to configure lazy loading. Though the child shapes will then be generated immediately, they will only be initialized the moment they come into view. Recursive Also generate a shape for every view, element and service in all subviews and subservices of the view to which the drawing is linked. ShowHiddenElements From DataMiner 9.0.0 CU16/DataMiner 9.5.3 onwards, by default no shapes are displayed for hidden elements. To override this behavior, specify this option. When using the "Recursive" option, keep in mind that elements in services are always skipped. This is to prevent generating shapes for elements of which only certain parameters are included in the service. If shapes were generated for such elements, we would risk showing alarm states of parameters that are not included in a service. ChildrenSort: In this optional shape data field, you can specify how the different child item shapes should be sorted: Value Description Name Order by child item name. Property|PropertyName Order by the specified child item property. Severity Order by alarm severity. Also, you can specify a sort order by adding one of the following suffixes: Suffix Description ,asc Ascending (default sort order) ,desc Descending - Sort by severity, descending: - Sort by property "MyProperty", ascending: Alternatively, from DataMiner 9.0.5 onwards, for shapes that are automatically generated to represent alarms, you can specify the name of any Alarm Console column as the value to sort the shapes. Like in the Alarm Console, the shapes will then first be sorted by the specified column, and then by time. For example, if you add a shape data field of type ChildrenSort, and configure the following value, the shapes will be sorted by element name in descending order (i.e. Z to A): Element Name, desc From DataMiner 10.2.0/10.1.1 onwards, placeholders such as [var:VariableName] can be used in ChildrenSort shape data. See Placeholders for variables in shape data values. - Sort by severity, descending: ChildrenPanel: In this optional shape data field, you can specify how the child items have to be organized within the container shape. ElementOptions: In this optional shape data field, you can specify that child shapes should only be generated for child items of a service that are in use, not in use or excluded:
OPCFW_CODE
Building Apps & Integrations Thinkific Apps allows you to build custom solutions and connections that extend and expand the functionality Thinkific provides out-of-the-box. Take control of the student experience and create powerful, customized experiences for educators and their students using apps. Thinkific Apps can be used to build solutions that exist entirely in the Thinkific platform or integrations that connect Thinkific with outside tools. Thinkific has 40k+ course creators who are helping 20m students globally so we are excited to provide an opportunity for app developers to earn revenue and attract new customers for their apps! In this guide we will provide more information on: - What's the difference between an app and an integration? - What kind of apps can I build? - What can apps do? - Does Thinkific have an Apps Store? - What apps do Thinkific customers want? - How do apps work? In Thinkific, apps and integrations are treated the same when it comes to functionality and process. Whether you are building a solution that you Identify as an Integration between Thinkific and another service, or an app that is built to extend the native functionality of Thinkific, you will need to register an app with Thinkific, and you will need to follow the same submission process to be listed in the Thinkific App Store. The difference between these pieces of functionality may arise in the future in categorization in our future App Store and in the Co-marketing channels available. An app provides the opportunity for you to build a solution on top of Thinkific’s platform using the OAuth mechanism which our customers can install onto their sites. If this use case does not work for your situation, you can also continue to use our APIs to build an integration between your platform and ours. See here for more details. This guide assumes that you are looking to build a public app that can be made available to Thinkific customers. If you are looking to build solutions for your own personal use, an app is not necessary and you can access the API using your site’s API Key. Learn more on how to authenticate using API Key Public apps are installable on paid Thinkific sites and can request official approval from Thinkific to be made available in the future Thinkific App Store. For more information on the process for submitting and getting a public app to our customers please see here. - Use Public API to read, write or delete data from your site. Examples include apps which help course creators automate processes, communicate with their students off platform etc. - Use Webhooks to listen for actions taken inside Thinkific sites, like when students complete a lesson or a course or when they make purchases. We are currently working on building out our API capabilities to provide our app developers with even more opportunities to extend and customize the Thinkific experience for our course creators. Please sign-up to our Partner Program and submit any feedback via our partner portal! Currently, approved apps & integrations are listed on our public facing Integrations Page. We are currently in search of new apps that our customers will love and developers who want to work with us to build the future of Thinkific and be featured in a future Thinkific App Store! There is so much opportunity to build innovative solutions for our customers. We are particularly focused on encouraging app developers to build solutions which support our course creators across three main areas: - Apps which helps our course creators save time and money - Apps which help our course creators build more revenue and sell more courses - Apps which help our course creators to improve the learning experience for their students If you are looking for more ideas we can recommend looking at our Facebook group which has 20k active course creators discussing their needs! In order to build apps in Thinkific, you will need a Thinkific Partner account. You can register for a partner account at www.thinkific.com/partners and learn more about Getting Started as a Thinkific Partner here. All apps are managed using your Partner account by going to partner.thinkific.com and navigating to apps.
OPCFW_CODE
Prometheus: The angel-eyes watching your Kubernetes Cluster Once the Kubernetes cluster is set up and your application is up and running, we move on to the part where we keep a watch on the performance metrics. To ensure that everything is working as it is supposed to be working without any anomalies. Application hosting is a complicated activity. A lot of things can go wrong when hosting an application. There are hundreds of minute things which can bring down your application and only the right monitoring solution can keep you ahead of them. More often than not the usual culprits are the basic ones: 1. Storage non-availability 2. CPU Usage Exhaustion 3. Memory non-availability The monitoring that we set-up has to look-out for the above most fundamental performance issues on the Kubernetes Cluster at a regular frequency. An ideal monitoring solution would be that which can give us performance metrics in real-time, has an efficient alerting system and is detail oriented in a time-series based system. Prometheus is an open-source monitoring and alerting solution for your Kubernetes Cluster. It is so popular that it is almost synonymous with Kubernetes. If Kubernetes is your hosting solution, it is axiomatic that you use Prometheus for monitoring. While AWS CloudWatch does offer a default monitoring solution at the AWS infrastructure level, Kubernetes components and application-level monitoring is better handled by Prometheus. Using Time Series Database (TSDB), Prometheus records and stores pure numeric time series. Metrics, unique identifiers, and timestamps are collected, organized, and stored. It scrapes the HTTP endpoints of configured targets, like bare metal servers, databases, Kubernetes clusters, and applications that expose important metrics. It comes with its companion Alert Manager. However, on this project we have not used alerts from Prometheus and instead used the Alerting service on Grafana. Which we will see in detail a few paragraphs later. Installation on EKS: The most easy and effective way of installing Prometheus on an EKS Cluster is by using a Helm Chart. After installing Helm on your Kubernetes Cluster, you just have to run the following commands in the sequence and voila your Cluster now has the top-notch monitoring solution installed. helm repo add prometheus-community htt ps://prometheus-community.github.io/helm-charts helm repo update helm install prometheus prometheus-community/prometheus Once the installation is complete, we have to expose the Prometheus service using node port on port 9090. This can be done using kubectl kubectl expose service prometheus-server - type=NodePort - target-port=9090 - name=prometheus-server-ext The main functionality of the Prometheus server is to collect metrics from the cluster at a regular interval and store them in a time series DB. These metrics when viewed in a GUI gives a precise idea of how the cluster is functioning in a time-series format. This then can be used for creating a presentation layer for analysis. Visualization is not one of the strengths of Prometheus and it becomes apparent the moment you first set sight on its GUI. To put it mildly, it is not as user-friendly as you would want it, nor does it allow you much customization of the presentation part. This is the reason why a lot of people prefer to use Grafana instead which is a fantastic tool that is better at Visualization and has amazing graphics making it an obvious choice. Grafana is a very extensively used visualization and analytics tool. Data from your time-series database (TSDB) can be turned into graphs and visualizations. With the help of the tools we study, analyze and monitor data over a particular period of time technically known as time series analytics. It offers a lot of flexibility in creating the dashboards and the tile panels with individual metrics inside the dashboards. It is immensely user-friendly and has a plethora of visualization graphs etc. One of the greatest strengths of Grafana is its ability to bring metrics from different data sources together on a single platform. You have data from Prometheus, Loki, AWS CloudWatch agent, InfluxDB, Graphite and a myriad of other data sources which can be used in conjunction with each other. The result can be a remarkably clear picture of the performance metrics of your application deployment with all the data sources working in a symphony of sorts. There are two ways of using Grafana, we either create a server for Grafana on the Cluster itself by installing it using Helm and configuring it to be used from the Cluster and accessing it exactly how we did with Prometheus. The other way is to simply use the Grafana cloud. Which is exactly what we did for our project as it allows for saving some of our resources in the development environment. Following are the steps to use Grafana Cloud. - Register on https://grafana.com/ - Scroll to ‘manage your Grafana Cloud Stack’ and click on ‘Launch’ on the Grafana tab. - Start building your dashboard. After creating the Grafana cloud account we now need to add the data sources which we are using. Below is how we do it: - Go to settings - Click on ‘Add data source’. - Select Prometheus and provide the service endpoint. Save the changes. - Create a new Dashboard. - Select your data source. (Prometheus in our case) - Create dashboards selecting the metrics for which you would like the visualization. Alerts in Grafana can be set up using alert rules. We create rules which act as queries like CPU usage at 40% for example would be a query that could be the trigger. So whenever the CPU utilization on the cluster touches 40%, the alert is triggered and the contact points are sent messages as per configuration. These queries can be configured as per client requirement. We had set up three alerts for the development environment. - CPU utilization alert at 50% usage - CPU usage at 70% usage - Memory utilization at 40% - Memory utilization at 60% After the alerts are set-up, we need to create contact points to send the alerts. Grafana supports alerts to be sent to contact points via Slack, Email, Amazon SNS, Elastalert, Zabbix, Datadog, Pagerduty at this time.
OPCFW_CODE
The modules are all generated from the GTK+ C source code and the documentation refers specifically to operations in C. Most of it is converted on the fly into Raku types or Raku native types. Sometimes, however, there is a mention of an operation like for instance, referencing or un-referencing objects. Those parts must be investigated still to see what the impact exactly is in Raku. - Toplevel classes are classes who inherit directly from Gnome::N::TopLevelClassSupport. Examples of such classes are Gnome::GObject::Object and Gnome::Glib::Error. - Object classes are classes which inherit directly or indirectly from Gnome::GObject::Object. - Widget classes are also indirectly inheriting from Gnome::GObject::Object but made a special category here to easily find a user interface class. - Interface classes. Most types in an application will be classes — in the normal object-oriented sense of the word — derived directly or indirectly from the root class, Gnome::GObject::Object. There are also interfaces, which can contain implemented methods. These are mixed in in the appropriate class. E.g. the interface Gnome::Gtk3::Buildable is mixed into the Gnome::Gtk3::Widget class. - Boxed classes. Some data structures that are too simple to be made full-fledged class types. An example is Gnome::Gdk3::RGBA which holds only a few numbers representing the RGB colors and Alpha channel. It would be too much to let it inherit from Gnome::GObject::Object. - Standalone classes are classes which do not inherit from other classes. Most of the time they even do not have a native object to work with. An example is Gnome::Glib::Quark. Deprecated classes in GTK+ Version 3 The following modules will not be implemented in this Raku package because they are deprecated in the GTK libraries. There is no reason to have people use old stuff which is going to disappear in version 4. - GtkSymbolicColor — Symbolic colors - GtkGradient — Gradients - Resource Files mentioned here — Deprecated routines for handling resource files. In GTK+ 3.0, resource files have been deprecated and replaced by CSS-like style sheets, which are understood by Gnome::Gtk3::CssProvider. However, there are methods like gtk_builder_add_from_resource() in Gnome::Gtk3::Builder which load files from directories from the so called resources path. This is an entirely different matter. Definitions and modules for that kind of resources are found in Gnome::Gio::Resource. - GtkStyle — Deprecated object that holds style information for widgets - GtkHScale — A horizontal slider widget for selecting a value from a range - GtkVScale — A vertical slider widget for selecting a value from a range - GtkTearoffMenuItem — A menu item used to tear off and re-attach its menu - GtkColorSelection — Deprecated widget used to select a color - GtkColorSelectionDialog — Deprecated dialog box for selecting a color - GtkHSV — A “color wheel” widget - GtkFontSelection — Deprecated widget for selecting fonts - GtkFontSelectionDialog — Deprecated dialog box for selecting fonts - GtkHBox — A horizontal container box - GtkVBox — A vertical container box - GtkHButtonBox — A container for arranging buttons horizontally - GtkVButtonBox — A container for arranging buttons vertically - GtkHPaned — A container with two panes arranged horizontally - GtkVPaned — A container with two panes arranged vertically - GtkTable — Pack widgets in regular patterns - GtkHSeparator — A horizontal separator - GtkVSeparator — A vertical separator - GtkHScrollbar — A horizontal scrollbar - GtkVScrollbar — A vertical scrollbar - GtkUIManager — Constructing menus and toolbars from an XML description - GtkActionGroup — A group of actions - GtkAction — A deprecated action which can be triggered by a menu or toolbar item - GtkToggleAction — An action which can be toggled between two states - GtkRadioAction — An action of which only one in a group can be active - GtkRecentAction — An action of which represents a list of recently used files - GtkActivatable — An interface for activatable widgets - GtkImageMenuItem — A deprecated widget for a menu item with an icon - GtkMisc — Base class for widgets with alignments and padding - Stock Items — Prebuilt common menu/toolbar items and corresponding icons - Themeable Stock Images — Manipulating stock icons. Since GTK+ 3.10, stock items are deprecated. You should instead set up whatever labels and/or icons you need using normal widget API, rather than relying on GTK+ providing ready-made combinations of these. - GtkNumerableIcon — A GIcon that allows numbered emblems - GtkArrow — Displays an arrow - GtkStatusIcon — Display an icon in the system tray - GtkThemingEngine — Theming renderers - GtkAlignment — A widget which controls the alignment and size of its child
OPCFW_CODE
Incorrect AST text for Change Modifier pattern? The content of sourceBeforeFix and sourceAfterFix for CHANGE_MODIFIER pattern seems to be incorrect: { "bugType": "CHANGE_MODIFIER", "fixCommitSHA1": "de55ca810b70f8a2dd144d409cb2491dceb16286", "fixCommitParentSHA1": "5b7edd00f3941e1fa5ce5bc4dae788d2fab1042d", "bugFilePath": "core/server/worker/src/main/java/alluxio/SessionInfo.java", "fixPatch": "diff --git a/core/server/worker/src/main/java/alluxio/SessionInfo.java b/core/server/worker/src/main/java/alluxio/SessionInfo.java\nindex 51a2fa5..11432aa 100644\n--- a/core/server/worker/src/main/java/alluxio/SessionInfo.java\n+++ b/core/server/worker/src/main/java/alluxio/SessionInfo.java\n@@ -24,7 +24,7 @@\n private final long mSessionId;\n \n private long mLastHeartbeatMs;\n- private int mSessionTimeoutMs;\n+ private final int mSessionTimeoutMs;\n \n /**\n * Creates a new instance of {@link SessionInfo}.\n", "projectName": "Alluxio.alluxio", "bugLineNum": 27, "bugNodeStartChar": 829, "bugNodeLength": 30, "fixLineNum": 27, "fixNodeStartChar": 829, "fixNodeLength": 36, "sourceBeforeFix": "2", "sourceAfterFix": "18" }, { "bugType": "CHANGE_MODIFIER", "fixCommitSHA1": "9d46108b66684d82750a5ad01460ab5bc1c16720", "fixCommitParentSHA1": "096ec2aa953065fca5141d415d74eaf00e5aa637", "bugFilePath": "components/camel-mock/src/main/java/org/apache/camel/component/mock/AssertionTask.java", "fixPatch": "diff --git a/components/camel-mock/src/main/java/org/apache/camel/component/mock/AssertionTask.java b/components/camel-mock/src/main/java/org/apache/camel/component/mock/AssertionTask.java\nindex 347ba0d..e5022b6 100644\n--- a/components/camel-mock/src/main/java/org/apache/camel/component/mock/AssertionTask.java\n+++ b/components/camel-mock/src/main/java/org/apache/camel/component/mock/AssertionTask.java\n@@ -26,6 +26,6 @@\n *\n * @param index the n\u0027th received message\n */\n- public void assertOnIndex(int index);\n+ void assertOnIndex(int index);\n \n }\n", "projectName": "apache.camel", "bugLineNum": 24, "bugNodeStartChar": 1015, "bugNodeLength": 150, "fixLineNum": 24, "fixNodeStartChar": 1015, "fixNodeLength": 143, "sourceBeforeFix": "1", "sourceAfterFix": "0" } Ok, I found out that it is the same bug as the one you've mentioned in the readme for Missing and Delete Throws Exception.
GITHUB_ARCHIVE
"""Base for probabilistic models.""" class ProbabilisticModel(object): @classmethod def parse_query_string(cls, query_string): """Parse a query string into a tuple of query_dist, query_values, evidence_dist, evidence_values. The query P(I,G=g1|D,L=l0) would imply: query_dist = ('I',) query_values = {'G': 'g1'} evidence_dist = ('D',) evidence_values = {'L': 'l0'} """ def split(s): dist, values = [], {} params = [] if s: params = s.split(',') for p in params: if '=' in p: key, value = p.split('=') values[key] = value else: dist.append(p) return dist, values query_str, given_str = query_string, '' if '|' in query_str: query_str, given_str = query_string.split('|') return split(query_str) + split(given_str) @classmethod def create_query_string(cls, qd=None, qv=None, ed=None, ev=None): """Generate a query string.""" qd_str = ','.join(qd) if qd else '' qv_str = ','.join([f'{k}={v}' for k, v in qv.items()]) if qv else '' ed_str = ','.join(ed) if ed else '' ev_str = ','.join([f'{k}={v}' for k, v in ev.items()]) if ev else '' Q = ','.join([q for q in [qd_str, qv_str] if q]) E = ','.join([e for e in [ed_str, ev_str] if e]) return '|'.join([p for p in [Q, E] if p]) def compute_posterior(self, qd, qv, ed, ev): """Compute the (posterior) probability of query given evidence. The query P(I,G=g1|D,L=l0) would imply: qd = ['I'] qv = {'G': 'g1'} ed = ['D'] ev = {'L': 'l0'} Args: qd (list): query distributions: RVs to query qv (dict): query values: RV-values to extract ed (list): evidence distributions: coniditioning RVs to include ev (dict): evidence values: values to set as evidence. Returns: CPT """ raise NotImplementedError def P(self, query_string): """Return the probability as queried by query_string. P('I,G=g1|D,L=l0') is equivalent to calling compute_posterior with: query_dist = ('I',) query_values = {'G': 'g1'} evidence_dist = ('D',) evidence_values = {'L': 'l0'} """ qd, qv, gd, gv = self.parse_query_string(query_string) return self.compute_posterior(qd, qv, gd, gv) # def MAP(self, query_dist, evidence_values, include_probability=True): # """Perform a Maximum a Posteriori query.""" # d = self.compute_posterior(query_dist, {}, [], evidence_values) # evidence_vars = [e for e in evidence_values.keys() if e in d.scope] # # d = d.droplevel(evidence_vars) # # if include_probability: # return d.idxmax(), d.max() # # return d.idxmax()
STACK_EDU
Thank you for your reply. (Though I don’t understand the terseness. Is my question answered too many times? I searched and nothing came up. Additionally, as you may see, I have done a bit of digging and got stuck.) - and 3. Is this what you’re referring to on github, GitHub - libre-computer-project/libretech-u-boot? Do you mind providing links? I cannot find any repo that involves building .xz’s for distros. Nor can I find which repo the boot CI artifacts are built from (and using which build scripts+github actions, etc.). I’m trying to build my own custom Debian-based, “preinstalled” distro, hence the above questions. The distributions released by libre.computer if made by a proprietary dist builder, are proprietary software. The users CANNOT study what has gone into those PARTICULAR distributions (the .xz’s you distribute) to be sure the proprietary dist builder libre.computer use has not added malicious code during build (and nor can libre.computer for that matter); just like RELEASES built by a proprietary compiler are proprietary software. Reflections on trusting trust by Ken Thompson (Reflections on Trusting Trust : Ken Thompson : Free Download, Borrow, and Streaming : Internet Archive) comes to mind. This is bad practice at best and malicious at worst. Why is Libre.Computer using a proprietary distro builder? The distributions released by libre.computer if made by a proprietary dist builder, are proprietary software. This is not true. We’re releasing images, not binaries. You can rip open them and diff against a bootstrap to get a detailed list of the changes. How we make those changes are proprietary, but those changes are not proprietary. You can strap your own Linux, buildroot, or yocto if there’s no implicit trust. All the components we use in the final image are open source outside of the non-open-source things that the distro provides like firmware. This is bad practice at best and malicious at worst. Whyis Libre.Computer using a proprietary distro builder? You are entitled to your opinion. We already said why. Bother to read and use some logic instead of making inflammatory remarks. Thanks again for your quick reply. To clarify, I appreciate the work Libre.Computer is doing to provide the community with SBCs that work with Free Software (apart from the non-free software that has been forced into the ‘default’ kernel Linux upstream, which is a different discussion). I made no inflamatory remark; I provided a spectrum “at best and at worst”. That’s why I asked, why put the work in jeopardy by using a proprietary builder. I did read and re-read your reasoning, “customer IP”. I don’t understand. If all software used in the images are Free Software, what customer IP are you referring to? In any case, if users could use free software to build bit-identical images to yours, then why don’t libre.computer use a similar free-software distro builder? Our builders are a propretiary service offering and offer integrations with proprietary components to generate proprietary images for our commercial customers. We just happen to also use the same infra to build the FOSS distros. Since these proprietary components are within the builder, we cannot release the builder. The FOSS distros just include config changes since it’s our policy not to modify the upstream distros. The package list not from the upstream distro is just our kernel, our userspace hardware utilities, and our bootloader. All of these are open source on GitHub. We make config changes to make the out of box experience slightly better/faster. Due to the segmented (proper) way we do things on the FOSS side (bootloader/kernel/userspace), there’s not a lot of magic to the builder that warrants a release since it can be done via standard toolchain for bootstrapping images. The config changes can be derived from a diff. - dpkg preseed - grub configuration - disk expansion - swap setup - network setup - sound setup - locale setup - GUI/browser configuration These are < 100 lines of configuration. Setting up the builder is fairly involved and not designed for end-users anyway since there’s other processes that goes with it. The only difference for the images for each board is the bootloader. The entire disk is the same. why don’t libre.computer use a similar free-software distro builder? Our builder does more than images and is tailored for our products, services, and processes like testing and release. The core of the building is debootstrap.
OPCFW_CODE
Making space between tight line features on different scales in QGIS In QGIS 3.16 I have a shapefile layer where linear features will often be digitised close to each other. These lines are all features in the same layer. Example when zoomed in: Problem when zooming out to export wider map: When I zoom out, the 3 line symbols overlap each other. Making the lines thinner doesn't help as they 1) still overlap, and 2) are simply too thin for this scale map. I want the lines to retain a decent thickness but not battle each other for space. I want them to automatically detect they are close to each other and line up side by side, or spaced out a bit for clarity. @Babel offseting/generalizing the geometry would surely offset each line by the same amount. ie. it'd just shift the problem I'm having to a new location. In the example above I'd ideally want the orange line symbol to stay in the same place, the pink line to shift north and the blue line to shift south. Of course, my thoughts are not a ready-made solution, but it could be a starting-point to think about how to offset just the lines at the outside, but not the one in the middle. But there might be indeed easier options. These are for overlapping line but could give you some ideas : https://gis.stackexchange.com/questions/261654/automatic-bus-route-map-without-overlaps?r=SearchResults and https://gis.stackexchange.com/questions/277958/how-to-offset-lines-in-qgis-that-share-the-same-origin-and-destination?r=SearchResults What you want to do is a cartographic generalization operation called displace. You might have a look at GRASS v.generalize, it has an option for displacement. Principal idea of the solution What I propose, however, is using geometry generator with the function offset_curve( ) and for the distance use the variable @map_scale, divided by a constant. How to implement it The line in the middle should not be offset at all and the lines at the right and left should be offset in the respective direction. For this, create a new attribute offset and add for each line a value of 0 (no offset), 1 (offset to the left) or -1 (offset to the right). You could also set other numbers to get different offset distances for different roads: the higher the number, the higher the offset. Combine these elements with geometry generator and this expression (change 500 to a value that fits your data): offset_curve ($geometry, "offset" * @map_scale/500) When you now zoom out, the lines to the right and left move away from the line in the center: Screenshot 1: black solid line: mid-line (no offset), black dotted line: original line that must be offset, red-lines: offsetted lines: Screenshot 2: result when zooming out:
STACK_EXCHANGE
Division of Television - Digital Media Group Institute of Radioelectronics Warsaw University of Technology Hardware implementation of H.264/AVC and H.265/HEVC video codecs Research descriptionThis research aims at designing and prototyping a multi-view low-delay video coding system. Its practical effect will be an application running on a hardware platform demonstrating real-time multi-view video coding. The digital part of the system is integrated in a single FPGA programmable circuit. Hardware acceleration and new coder architectures will improve video compression compared to existing solutions. Hardware implementations designed during this research finally should possess the following features: - Low delay. - High throughput: real-time processing of high resolution and multi-view video demands massively parallel architectures. - A compression system implemented on a single FPGA chip according to SoC methodology. - Advanced choice of coding mode based on human visual system quality measures. - Multi-view bitrate control: designed architectures should adapt to variable video content and channel capacity using statistical multiplexing. - Adaptive motion and disparity estimation allowing for random generation of search points. The H.264/AVC standard allows for a high compression efficiency at the cost of computational complexity. To achieve the efficiency as high as possible, the designed architecture supports the mode selection based on the rate-distortion optimization. In particular, the dataflow assumes throughput of 32 samples/coefficient per clock cycle (in the main/reconstruction loop), on average, allowing a lot of compression options to be checked. Moreover, the architecture supports all transform sizes specified for High Profile using the same hardware resources. The binary coder conforms to H.264/AVC High Profile and supports two binary coding modes: Context Adaptive Binary Arithmetic Coding (CABAC) and Context Adaptive Variable Length Coding (CAVLC). Frame/Field input video can be compressed using 4:2:2 or 4:2:0 formats. The architecture saves a considerable amount of hardware resources since two coding modes share the same logic and storage elements. Five versions of the arithmetic coding path are developed to study the area/performance trade-off related to parallel symbol encoding. The implementation results show that the parallel symbol encoding allows higher efficiency. Synthesis results show that the whole design can work at 100 MHz for FPGA Stratix II and Aria II devices. This frequency allows 1080p at 30 fps. Additionally, hardware AAC codec and TS (de)multiplexer are developed to support the audiovisual communication for low delay. The video/audio (de)coding and streaming is developed in a set of PCBs stacked on one another as a "sandwich". They communicate via EPI, I2C and I2S buses. The coder/decoder modules for Audio MPEG4-AAC and video H.264/AVC are implemented in FPGA chip together with interfaces for the above-mentioned external buses and BT656 (alternatively other standards for HD) . Fig. 8 shows the schematics of the development kit. It consists of three boards: P2 - FPGA, P1 - ARM uC and converters, P0 - I/O connectors. The audio codec usesADAU1361 chip, the video coder uses ADV7403 and video decoder - ADV7393. All these chips are controlled via I2C from LM3S9B95 microcontroller. The FPGA board is connected to the others with 100-pin connectors. It contains also external memory: 8 DDR2 chips, 2 SRAMs and 1 FLASH.
OPCFW_CODE
Cat Ear Tipping Uk Cat Ear Tipping Uk - Cat Meme Stock Pictures and Photos It prevents a cat from being trapped a second time, and put under anesthesia unnecessarily. Cat ear tipping uk. Ear infections often cause a cat’s ears to become more red and swollen than do mite infestations, and the discharge from an infected ear tends to have a distinctly foul odor. It is the universally accepted way to signify that a community cat has been spayed or neutered, which means no new kittens will be born, and that’s a good thing. Cat's ear | identify that plant. The ear heals quickly after it has. This will help ensure that none of the solution gets into the ear canal. “trap, test, spay or neuter, vaccinate and release.”. If nothing else is done, it would be difficult to know if a cat is sterilized or not. The procedure means that a neutered cat can be spotted from a distance. This is called ear tipping which is not equivalent to ear cropping for cosmetic reason for dogs. Various sources and believed to be in the public domain. Ear tipping is not cruel as it is performed by a veterinarian while the cat is under anesthesia during spay/neutering, so the cat won’t feel it. Great cat ear tipping 31 on interior design for home remodeling with cat ear tipping top cat ear tipping 50 about remodel home design furniture decorating with cat ear tipping great cat ear tipping 66 in small home decoration ideas with cat ear tipping see also cool cat shirts. If you are a cat owner or want to own a cat, you should definitely be able to distinguish the differences between breeds. These cats can then be returned to the site. Some people don’t like this. Check out our ear tip cat selection for the very best in unique or custom, handmade pieces from our shops. It is when the end section of ear is removed by a veterinary surgeon whilst the cat is under general anaesthetic, to protect the cat from being trapped and/or operated on again. When neutered, a newer vet did ear tipping instead of the notching. - Cat Ninja Games Unblocked - Cat Palm Care Watering - Cat Mini Excavator For Sale Craigslist - Cat Paw Swollen No Pain - Cat Palm Care Yellow Leaves - Cat Palm Okay For Cats - Cat Person Cat Food Near Me - Cat Paw X Ray Real - Cat Petting Guide Meme - Cat Palm Care Tips - Cat Petting Guide Shirt - Cat Ninja Game Fort - Cat Mini Excavators Canada - Cat Mouth Foaming In Car - Cat Nasal Cancer Euthanasia - Cat Nose Cancer Pictures - Cat Mini Excavators For Sale Near Me - Cat Mini Excavators For Sale In Ohio - Cat Paw Prints Walking - Cat On Roomba Hits Dog
OPCFW_CODE
The Synology DSM gives administrators the ability to utilize Dynamic DNS (DDNS) services to access your devices with vanity/custom domains. However, the list of compatible Service Providers is fairly limited. Luckily, we can do a little DNS trickery (CNAME record) to get around the issue for any domain. Start by opening Control Panel in DSM, External Access, then choose Add on the DDNS tab. Walk through the wizard to create a hostname using the Synology service provider. Check out the Synology documentation for more details on this process. For this example, let’s use: caroledidit.synology.me. When complete, you will have the new DDNS hostname listed. This domain name will always point to the dynamic IP address given to your network by your ISP. Purchase a domain from any registrar that allows you to control basic DNS records. For this example, we’ll use: longlivedonlewis.com. Using the registrars platform, create a new CNAME DNS record for your domain. The CNAME record values for longlivedonlewis.com would be: Host = @ Value = caroledidit.synology.me TTL = 5min As a CNAME record is an alias, longlivedonlewis.com would always refer to the IP address caroledidit.synology.me is pointed to. The short Time To Live (TTL) ensures that caches expire quickly enough to stay in sync with the dynamic record. The custom domain will now point to your network. If you haven’t done so already, you’ll need to configure your network allow your device to communicate with the Internet. Visit Wikipedia to learn more about CNAME records: https://en.wikipedia.org/wiki/CNAME_record 5 replies on “Synology DDNS with custom domains” Nice little tutorial Corey! I have been running a synology NAS for a few year. We use it for so many things. Just a recommendation, there are a ton of people scanning ports from Russia and China. You will want to make sure the firewall is enabled and you can set access for geolocation. Thanks, man! Any nation on a naughty list is getting stopped at the front door. So I should definitely do the firewall thing… except I have no idea what I’m doing… I get excited when the damn thing actually connects with my SSL engaged instead of giving me a certificate error! LOL Do you happen to be willing to post instructions how to do that? 🙂 Great article, Corey! Simple, straight to the point. Very handy. One question– how does a CNAME redirect(?) like this impact the use of SSL certificates? TTL=5 minutes, I hadn’t thought of that; great tip, thank you!
OPCFW_CODE
As any other Digital Biological Cell, Digital Biological Neuron laws have to conform with the following rules: - Every cell has an internal state. This notion has not been observed; it was introduced. - The internal state of the cell is determined by receptors (ρ) and ligands (λ). - The production of the cell is determined by the internal state and genetic information Before we get to the laws, we should get familiar with basic notions as shown on the picture. The definitions and the laws might seem vague, however it improves overall readability and presents the theory from a higher level. For those interested in exact mathematic formulations, please do not hesitate to contact us. Neuron (DBN) is inspired by biological observations of an eukaryotic cell - neuron. We do not simulate neuron's behavior - we define it by four laws. Mediator is the only mean of communication between neurons. Each neuron produces various types of mediators and when neuron fires, it releases them. Mediators could be caught later by other connected neurons. Receptor is tightly connected to a neuron and allows the neuron to catch a mediator. For each type of transmitted mediator there is a different receptor. Synapse. Neurons are connected to other neurons by receptors, and the place of connection is called synapse. The synapses are unidirectional; therefore if a neuron N is connected to another neuron M, every receptor in that synapse belongs to neuron N. Internal state. Each neuron must be in exactly one state at a time. Genetic information tells a neuron, which mediators are currently in production. Genetic information is the same for all the neurons within the simulation and does not change. And here are the four laws of DBN: Fire law: When a neuron catches mediators, later it fires mediators too. The type (and quantity) of fired mediators depends on internal state of the neuron and genetic information. Mediator law: Neuron can catch mediators only if it has correspondent receptors. The amount of caught mediators depends on number of receptors and is the only information to set the internal state of the neuron. Receptor law: If neuron's receptors caught some mediators lately, then the neuron will extend the number of these receptors. The receptors will be added to the same synapse, which caught the mediators. Number of receptors in a synapse is limited. Synaptic law: If neuron has not been firing for some time, it can connect (create a synapse) to some neuron, which has been firing lately. The fire law describes conditions when neuron "fires" and continues to spread signal over the network. This law alone allows to see how signals can spread across networks, how signals can quantitatively influence each other, how to simulate intensity of a signal and some thoughts over signal running in cycles. The mediator law shows us that some types of mediators can be caught by connected neurons, while some other cannot. This behavior allows developing a theory of qualitative signal interaction. Some signals can be reinforced, why other can be suppressed. The receptor law tells us the conditions when the receptors are added. This law is mostly used when the network is learning new signals. The synaptic law explains us how the connections between neurons (synapses) are created. The law allows creating a whole network from scratch. The network is built depending just on incoming signals and genetic information.
OPCFW_CODE
How to replace an expression in several TeX files? Can I replace all instances of one string expressions in the TeX files in one directory? For example, let's say I want to replace all \frac with \myfrac in more than one file simultaneously. Alternatively, you could let the files keep using the original \frac{}{} and use \let\OldFrac\frac, and \let\frac\myfrac. And if \myfrac was originally using frac, replace the occurrence of frac with OldFrac within the definition of \myfrac. This looks very borderline for on-topic, as it's about editors/text manipulation rather than TeX. @PeterGrill's approach is TeX-based, but it's not clear that the question is rally about that method at all. Which operating system are you using? If you're using a *nix based operating system (Mac included), it's probably easiest to do this out of TeX, in the command line / terminal: find /home/my/directory -type f -exec sed -i 's/OldString/NewString/g' {} \; (using a combination of find and sed.) So you'd need: find /home/my/directory -type f -exec sed -i 's/frac/myfrac/g' {} \; Hope this helps. Thanks. As remark for others, at first it didn't work and I had to add the two symbols "" after -i as suggested here. Assume that only applies to Mac users? Mac Version 10.5.8 here, I don't know how far this reaches. Why not just sed 's/something/else/g' *.tex? Shell-agnosticism? Most advanced text editors have a replace feature that can apply to a whole set of files at once. For example, Notepad++ on Windows provides this feature. As mentionned in another response to this question, if you are using a *nix-based system, you could quite easily use the command line. If you use Vim, you can try the following: First open one of your .tex files in vim. Load the other .tex files from the same directory by opening them in hidden buffers: :args *.tex Now all the *.tex files in the directory are loaded into (hidden) buffers. You can apply a standard substitution command to each of these files by issuing the following command: :argdo %s/string1/string2/g | update The % is the range (entire file), s denotes the substitution command, string1 is found and replaced by string2 on each line, and g indicates that this is done globally (so not just the first instance on each line). The last part | update will automatically save all the files after the substitution is complete. Because the backslash is a "special" character, you need to escape it with another backslash. In your case, you would issue the command: :argdo %s/\\frac/\\myfrac/g | update
STACK_EXCHANGE
Confused - LAGG interface I have setup pfSense with 5 x gigabit nix's. Right now I am using 1 for incoming 1 for office and 1 for home , leaving 2 spare cards. Yesterday I found that when I was doing large transfers within the office it would slow down others and was hoping I would be able to use the other 2 cards to maximize performance. I ideally would like to have 1 incoming , 1 for home and 3 for the office . When I setup a LAGG interface the machines got a DHCP but when I watched the packets sent and received it came to a halt immediately after getting network config from pfsense. I could not connect to the internet and even trying to ping devices on the network failed. I will presume I need to have either computers that recognize a LAGG interface or a switch that will. I had one card going directly to a super micro server , another one going directly to my main PC and 1 going to a gigabit d-link switch which supports 2 gigabit . Any information about this would help , I can't seem to find much helpful documentation regarding LAGG interfaces for pfsense ..maybe I am not looking in the right places There are a couple of issues that you have. First you are correct in order to do a LAGG interface you will need to have a switch that understands LAGG otherwise you are going to cause a switching loop on your network which will bring it to a halt if your switch doesn't support some form of spanning-tree. Secondly if you are transferring data within a network then your packets will never touch your router (PfSense) so I'm not sure how much that LAGG group will help you. Now if you have different subnets or you are transferring data from your office Lan to the Internet then yes LAGG could make a difference but I would point to your WAN interface needing more bandwidth than your LAN not being fast enough. What a LAGG group does is allow you to have a redundant links to a device but instead of having them sitting there doing nothing until a main link goes down like what spanning-tree does it will actually load balance across those links and if one goes down you still have one remaining or how ever many you have in the group. But remember that when traffic is on the same broadcast domain then your device will use the MAC address to communicate with it so the switch will facilitate the communication. When looking for a switch, switch port speed is very important but probably even more important is the backbone of the switch. For example if you buy a 24 port Gigabit switch you want to make sure that it has a max throughput of 48 Gbps which means that every port can be communicating a full bandwidth. Sometime for instance with the example above you might see a throughput of 33 Gbps which would mean in a worst case senario your switch can't actually do 1 Gbps in/out on every port. Packets per Second is a good metric to look at too. Hope this helps. Thanks Mr.Fly ! Here is a diagram of my current internet situation. I am unsure how I will setup a LAGG interface with this setup . I have a fibre-op ISP who uses 3 VLANS , 1 is management (33) , 1 is IPTV (34) and internet comes over 35. They will not allow me to directly attach my own router without spoofing the mac address also , which is easily possible , but still I need IPTV which is connected via a coax cable. So my question here is , how can I optimize my setup given I have 2 spare network cards. creating LAGG groups to your switches in both Office and home might give you better throughput when routing from Office to Home networks but other than that I'm not sure there is much more that you can do. Routing accross interfaces is probably CPU limited. What kind of computer are using for your PfSense firewall? Do you get a public or private IP from your PPPoE modem?
OPCFW_CODE
quantized BERT model save fail Hello, Firstly, thank you for making this resource available! Learned about it via Twitter https://twitter.com/peter_izsak/status/1190896532515696640 I tried the quantized BERT training example from your docs: http://nlp_architect.nervanasys.com/quantized_bert.html The training halts without an error message. I do get an FP32 model in the output directory (pytorch_model.bin), an eval_results file, and checkpoint sub-directories, but not a quantized model (expecting quant_pytorch_model.bin) Is there something further I should do / some way to diagnose what's going wrong? thank you! Andrew Hi @cainesap Are you using the last code? git cloned and installed from there? The quantized export is not available yet if you installed using pip install nlp-architect Ah! No I'm not, thank you for the quick response, I'll try that :) I installed from github but now see an error on 1st epoch: 2019-11-06 15:59:01,601 INFO ***** Running training ***** 2019-11-06 15:59:01,625 INFO Num examples = 3668 2019-11-06 15:59:01,626 INFO Num Epochs = 3 2019-11-06 15:59:01,626 INFO Instantaneous batch size per GPU/CPU = 8 2019-11-06 15:59:01,626 INFO Total train batch size (w. parallel, distributed & accumulation) = 8 2019-11-06 15:59:01,627 INFO Gradient Accumulation steps = 1 2019-11-06 15:59:01,627 INFO Total optimization steps = 1377 Epoch: 0%| | 0/3 [00:00<?, ?it/s]Killediteration: 0%|▎ | 1/459 [01:51<14:14:24, 111.93s/it] Can I do anything else to diagnose? thank you, Andrew I installed and ran the quantized BERT training example. I encounter the following error at the outset. Any pointers would be much appreciated! Or let me know if I can provide further info.. 2019-11-06 15:59:01,601 INFO ***** Running training ***** 2019-11-06 15:59:01,625 INFO Num examples = 3668 2019-11-06 15:59:01,626 INFO Num Epochs = 3 2019-11-06 15:59:01,626 INFO Instantaneous batch size per GPU/CPU = 8 2019-11-06 15:59:01,626 INFO Total train batch size (w. parallel, distributed & accumulation) = 8 2019-11-06 15:59:01,627 INFO Gradient Accumulation steps = 1 2019-11-06 15:59:01,627 INFO Total optimization steps = 1377 Epoch: 0%| | 0/3 [00:00<?, ?it/s]Killediteration: 0%|▎ | 1/459 [01:51<14:14:24, 111.93s/it] Hi @cainesap , I guess its a backend configuration. Are you running on GPU? Can you provide any other logs to diagnose the error? Thanks @peteriz for the reply -- it prompted me to check again that I had all the right dependencies. I updated pytorch (CPU) and all is fine! Model trains and saves a quantized version :)) thanks again
GITHUB_ARCHIVE
Speed Calculator for Raid sets First my apologies if this questions has been asked. I've googled… and googled some more and can't seem to find what I'm looking for. Anyone know of a software or web based calculator that will let you plug in a RAID configuration (example below) and output expected R/W speeds, hopefully in MBs Number of disk, size, spin, type , Raid type EX. (8, 73Gb, 15k, SAS, Raid 1/0) Or EX. (6, 146Gb, 10k, FC, Raid 5) I found severely that calculate available space. Some that give some speed info, but they can't be realistic because they don’t take spin or type in the consideration. I don't know of a calcualtor that can tell you that, in part because there are so many other factors besides just the disk and connection type factors. The RAID controllers make a huge difference, as can the firmware on those controllers, the type of data, as does the ability of the motherboard to push data. Your best bet is benchmarking on your own. I can't even think of a way to write a calculator do do that sort of thing. Also I believe that probably for most operations the network will bottleneck before the RAID. There are quite a few variables that can affect speed, but here's some basic ideas to get a feel for what a given raid set should be capable of. Raw disk throughput Assuming that a random seek completes an average of 1/2 of a rotation (180 degrees) away from the sector you want, the average random access time is one average seek plus the time the disk takes to rotate 180 degrees. On a 10K RPM disk 1/2 of a rotation takes approximately 3ms. On a 15K RPM disk 1/2 of a rotation takes approximately 2ms. Average seek time for a Seagate Cheetah 15K6 is quoted at 3.5ms for reads and 3.9ms for writes (I presume the writes include a period to align the head on the servo tracks). A 10K disk is slightly longer. So, a raw estimate is an average of 5.5ms per random seek for a 15K drive and 7ms for a 10K drive. Tagged command queuing will optimise this slightly. Thus, for a 15k drive we have a theoretical random throughput of about 180 IOPS and 140 IOPS for a 10K drive. RAID-1 On a non-striped RAID-1, reads can be split between the two disks, but writes must go to both drives. Random operations will give you twice the throughput of a single disk for reads and approximately the throughput of a single disk for writes. Sequential I/O tends to peak at the maximum throughput of a single disk. Interface cables may or may not present a bottleneck. Striped RAID sets RAID-5, RAID-10 or RAID-50 disks have the data split up into chunks spread in a round-robin fashion amongst the members of the RAID set. Assuming no read-ahead optimisation a disk can read at most one stripe per revolution of the disk. A 10K disk revolves about 170 times per second and a 15K disk revolves about 250 times per second. For a 64K stripe this comes to approximately 10MB/sec per 10K disk or 15MB/sec per 15K disk. Larger stripe sizes give you better sequential throughput on the disks - for example a 256K stripe size on an array of 15K disks would give you 60MB/sec per disk. A heavily random access workload will reduce this by introducing more latency between seeks. Read-ahead on a controller might increase it. Thus, an array with 14 15K disks using 64K stripes would have a theoretical streaming throughput of around 210MB/sec assuming no other constraints. If the controller is not fast enough the practical rate may be lower (for example, I could never get a dell PV660 (Mylex DAC-FFX) to get more than one read per two revolutions of the disks). A heavily random access workload would also be somewhat slower because the disk accesses will average less than one per revolution of the disk. Some reads will also be used on parity data so the actual application data throughput would be a bit slower. Write bottlenecks The fastest possible write on a RAID-5 involves two reads and two writes. The controller has to read the old block and corresponding parity block, XOR the old and new data with the parity block to recalculate the parity and write out the new block and parity. Caching can reduce the amount of disk activity if the old block and parity block are in cache. The same applies to a RAID-50. A RAID-10 needs two disk accesses per write - one to the main and the other to the mirror. Read performance is roughly equivalent to a RAID-5. Controller bottlenecks In some cases (fibre channel is prone to this) the connections to the physical disk subsystem are of somewhat lower bandwidth than the disks are theoretically capable of delivering. Also, disk controllers can perform poorly. In many cases this is a more significant limitation than the disks themselves. High-end SAN hardware often has large multiprocessor machines as controllers - they may also have custom hardware for fast parity calculations. The controller for an EMC DMX takes up half a rack by itself - before you put any disks on it. Tuning the disk itself Caching and read-ahead parameters on the disks themselves may also affect performace for certain workloads. For example, disks using Seagate's 'V' firmware might be set up for fewer larger cache segments and agressive read-ahead to optimise for streaming throughput of media data. The same physical disk configured for use in a Clariion would be configured with more, smaller cache segments in order to support a larger number of smaller writes from many clients on a SAN. This is one damn fine piece of an answer. As always, @COTW, this is an excellent answer with great details that are hard to find elsewhere! Thanks! @MaxVernon - Thanks. I spent an ungodly amount on fibre channel kit to find that out, right down to poking about with disk and controller firmware. Now it's all in landfill - the last of it went just a few weeks ago. R.I.P. The sort of speed that you will see with vary a lot depending on the drives, the controller, and your workload, so you are not going to fine a nice easy calculator that will give good accurate+precise results. You may already realise this, but.. Besides all of the drive characteristics, the speed is going to be largely governed by the performance of any given RAID card. Which will depend not only on obvious things like it's interface (eg PCI-X). But more dramatically the quality and performance of it's chipset routines. As others have said, I don't think this can be done in the terms you've stated. I think the best you could do is work out the relative performance of different raid options i.e. treat the hardware as a constant. It would still be inaccurate, but may give some guidance. But I think you also need to consider why there are different raid configurations. One usually chooses by judging the trade-offs between performance, capacity, data protection and cost. If you're not familiar with the trade-offs, take a look at a comparison chart to see the relative merits. It sounds like performance is your main criteria here, so you probably know what raid level you want; you just need to find the best performing hardware. Here is an example : I made benchmarks with the same drives (7x750GB seagate barracuda ES2), same RAID configuration (stripe size, etc), same motherboard (Supermicro H8DMe), same CPU (dual Opteron 2214), same RAM (8GB ECC) and same operating system (Linux), same filesystem (XFS, nobarrier option) and different RAID controllers. Appreciate the results : Areca 1280 : 250MB/s write, 350MB/s read, 21000 file created/s Adaptec ASR52445 : 240MB/s write, 350 MB/s read, 18000 file created/s 3Ware 9550 : 310MB/s write, 410MB/s read, 6500 file created/s 3Ware 9650 : 440MB/s write, 410MB/s read, 4500 file created/s Of course these are the optimal results after fine-setting all software parameters for each controller (read-ahead, caching options, request queue length, request size...) by doing long and repeated benchmarks while adjusting the various knobs. One of the funny thing I discovered by careful benchmarking is that the settings are entirely different if you use Barracuda ES2 (32MB cache) and Barracuda ES (16MB cache) drives, though the top performance is about the same. Unfortunately, storage and RAID is hard. That's why you won't find an easy-to-go performance calculator. I found a calculator that will give you multipliers of speed. It boils down to JBOD: Read: 1X Write: 1X Raid 0 (Striped Set) Read: [NumberOfVolumes]X Write: [NumberOfVolumes]X Raid 1 (Mirror Set) Read: [NumberOfVolumes]X Write: 1X Raid 5 Read: [NumberOfVolumes-1]X Write: N/A Dependent on the controller Raid 10: (Mirror of striped sets 4 drives) Read: [4]X Write: [2]X If those calculators exist, they will be on the vendor web-sites. So many things can affect throughput speeds that a simple calculator would be pretty worthless. Especially for any RAID that includes parity, as those tend to bottleneck more on the RAID Controller's CPU than anything else. The best you'll find is, "rule of thumb, your mileage may vary," type estimators. There's a lot more involved to the speed than the underlying raid layout so I doubt you'll find such a calculator. Things that make can make a difference: Raid Type hardware software fakeraid (raid controller that offloads the xor checksum calcs to the cpu) What type of bus is the controller in...what is it sharing this bus with. Most desktop class motherboards share PCI buses with multiple slots. File system type, block size, and it's alignment with the chunksize on the underlaying raid also come into play. Drive type, rotational speed, cache size Finally the workload will also interact with all of these things. So the more important question is actually what disk and raid layout are a good match for your workload and data availability goals.
STACK_EXCHANGE
This module performs common GIT tasks by calling git as a remote process through process_create/3. It requires that the git executable is in the This module started life in ClioPatria and has been used by the Prolog web-server to provide information on git repositories. It is now moved into the core Prolog library to support the Prolog package manager. - git(+Argv, +Options) is det - Run a GIT command. Defined options: - Execute in the given directory - Unify Out with a list of codes representing stdout of the command. Otherwise the output is handed to print_message/2 output(Out), but messages are printed at level - Export GIT_ASKPASS=Program - git_process_output(+Argv, :OnOutput, +Options) is det - Run a git-command and process the output with OnOutput, which is - git_open_file(+GitRepoDir, +File, +Branch, -Stream) is det - Open the file File in the given bare GIT repository on the given - - We cannot tell whether opening failed for some reason. - is_git_directory(+Directory) is semidet - True if Directory is a git directory (Either checked out or - git_describe(-Version, +Options) is semidet - Describe the running version based on GIT tags and hashes. - Only use tags that match Pattern (a Unix glob-pattern; e.g. - Provide the version-info for a directory that is part of - Describe Commit rather than - See also - - git describe - git_hash(-Hash, +Options) is det - Return the hash of the indicated object. - git_ls_tree(-Entries, +Options) is det - True when Entries is a list of entries in the the GIT repository, Each entry is a term: object(Mode, Type, Hash, Size, Name) - git_remote_url(+Remote, -URL, +Options) is det - URL is the remote (fetch) URL for the given Remote. - git_ls_remote(+GitURL, -Refs, +Options) is det git ls-remote against the remote repository to fetch references from the remote. Options processed: For example, to find the hash of the remote HEAD, one can use Refs = ['5d596c52aa969d88e7959f86327f5c7ff23695f3'-'HEAD']. |Refs||- is a list of pairs hash-name.| - git_remote_branches(+GitURL, -Branches) is det - Exploit git_ls_remote/3 to fetch the branches from a remote repository without downloading it. - git_default_branch(-BranchName, +Options) is det - True when BranchName is the default branch of a repository. - git_branches(-Branches, +Options) is det - True when Branches is the list of branches in the repository. In addition to the usual options, this processes: - Return only branches that contain Commit. - git_tags_on_branch(+Dir, +Branch, -Tags) is det - Tags is a list of tags in Branch on the GIT repository Dir, most recent tag first. - See also - - Git tricks at http://mislav.uniqpath.com/2010/07/git-tips/ - git_shortlog(+Dir, -ShortLog, +Options) is det - Fetch information like the GitWeb change overview. Processed - Maximum number of commits to show (default is 10) - Git revision specification - Only show commits that affect Path. Path is the path of a checked out file. - Similar to path, but Path is relative to the repository. |ShortLog||- is a list of | - git_show(+Dir, +Hash, -Commit, +Options) is det - Fetch info from a GIT commit. Options processed: - GIT option on how to format diffs. E.g. - Truncate the body at Count lines. |Commit||- is a term | git_commit(...)-Body. Body is currently a list of lines, each line represented as a list of The following predicates are exported, but not or incorrectly documented. - git_commit_data(Arg1, Arg2, Arg3) - git_log_data(Arg1, Arg2, Arg3)
OPCFW_CODE
Main / Productivity / Orwell dev c mac Orwell dev c mac Name: Orwell dev c mac File size: 828mb 29 Nov A new and improved fork of Bloodshed Dev-C++ I am using the Orwell version of Dev-C++ and it works very well and NOT only for 30 lines of. Find the best programs like Dev-C++ for Mac. More than 4 alternatives to choose: Eclipse, NetBeans, Top Alternatives to Dev-C++ for Mac. Dev-C++. The official site of the Bloodshed Dev-C++ update, which is fully portable, and optionally ships with a 64bit compiler. Posted by Orwell at PM. 19 Apr Explore 14 Mac apps like Orwell Dev-C++, all suggested and ranked CodeLite is an open-source, cross platform IDE for the C/C++/PHP and. I need to download Dev C++ but it is only available for windows pc. is These tools are the Mac equivalent of the "Dev C++" tools you were. From orwelldevcpp: Orwell Dev-C++ is a full-featured Integrated Development Environment (IDE) for the C/C++ programming language. It uses Mingw port of. 29 Aug 0. In Windows I use Dev-C++. You can download it from the official blog: http:// fi-test.com 29th August , PM. 19 Aug Orwell Dev C++ free download. Get the latest version now. Orwell Dev-C++ is a Integrated Development Environment (IDE) for the C/C++. You can view the whole list here at Top 9 Best C/C++ IDEs For Windows/Mac OS X/Linux/Unix. k .. Even if you are considering it, install Orwell Dev-C++. Dev-C++ 5 (currently beta). [Screenshot] Bloodshed Dev-C++ is a full-featured Integrated Development Environment (IDE) for the C/C++ programming language . I searched for Mac (OS X) alternatives for Bloodshed Dev C++, and got this page. Of the top results, Xcode (Free by Apple), may be the one. DEV files and view a list of programs that open them. Programming project created with Dev-C++, an open-source Integrated Orwell Dev-C++ Mac. Dev-C++ for Mac has not been released by Orwell so far, so you can't use it if you switch to Mac. However, there are many C/C++ compilers that can easily. 21 Jul An integrated environment of development (IDE) into C/C ++, Dev-C ++ has compiler based on Mingw of GCC, but it can also be used with. (mainly UNIX/Linux, mac version in macports -below). gnuplot: (free Orwell Dev -C++: another GUI front end for gcc under windows (I have not tested this one.
OPCFW_CODE
There are so many phrasal verbs in the English language and each one of them has a different meaning. Some of these phrasal verbs have several meanings. - To take care of and teach ( a child who is growing up ) Example : His grandparents brought him up because his parents were always busy. - To mention ( something ) when talking : to start to talk about ( something ). Example : Don’t bring up the bad news again, please ! - To continue to do what you have been doing. Example : Sorry I interrupted you, carry on talking ! - To behave or speak in an excited or foolish way. Example : The little boy was carrying on : shouting and kicking all day long. - To meet or find ( something or someone) by chance. Example : While Adam was walking in the street, he cam across Tom, what a coincidence ! Come up with - To get or think of ( something that is needed or wanted) Example : We finally came up with a solution to the problem ! Fall apart : - To break into parts in usually a sudden and unexpected way. Example : My cake fell apart when I tried to cut it. - To become unable to live in a normal way because you are experiencing a lot of confusion or emotional pain. Example : After the divorce, she felt apart. Get along : - To be or remain friendly Example : We’re not together anymore, but we get along great. - To make progress while doing something. Example : How are you getting along at playing the guitar ? - To leave a place Example : It was lovely to see you, but my friend has to get along, she has class. - To become old. Example : Her grandma is getting along, she’s almost 89. Get away : - to go away from a place. Example : I can’t wait to get away from Casablanca, it became toxic. - To avoid being caught : to escape. Example : The thieves managed to get away in a stolen car. Get over : - To stop being controlled or bothered by ( something, such as a problem or feeling ) Example : I got over my fear of flying - To stop feeling unhappy about something. Example : Finally, Kylie got over her ex husband. - To become healthy again after an illness. Example : Have you heard ? Reda has got over the cold. Give up : - To stop an activity or effort : to admit that you cannot do something and stop trying. Example : We all gave up smoking last week. Go on : - To continue. Example : They landed in Marrakech and then went on to Rabat. - To happen Example : What’s going on ? What’s happening ? - Used in speech to urge someone to do something Example : Go on ! Try it ! it’s delicious. Hold on : - To have or keep your hand, arms, etc., tightly around something. Example : Hold on to the tree, that way you won’t fall. - To succeed in keeping a position, condition, etc. Example : I will hold on to my job until May Look after : - To take care of ( someone or something). Example : the nurse looked after the patient for months, until he was better. Look forward to : - To expect ( something ) with pleasure Example : William is really looking forward to going on holiday. Make out : - To hear and understand (something) Example : I can’t make out what you’re saying, can you speak louder ? Pass out : - To fall asleep or become unconscious Example : I was so tired, I got home and passed out on the bed. Put down : - To place ( someone or something that you have been holding or carrying) on a table, on the floor, etc. Example : You can put the suitcases down in the bedroom. - To write ( something ) : to record ( something ) in writing. Example : He put down his memories to write a book when he was older. - To give ( an amount of money) as a first payment when you are buying something that costs a lot of money. Example : My wife and I are going to put down some money to buy this car. Put off : - To decide that ( something ) will happen will happen at a later time : postpone. Example : The boss was so tired that he put off the meeting until Tuesday. Turn up : - To be found usually unexpectedly Example : Oh ! my phone turned up in my pocket ! - To arrive at a place Example : As always, julian turned up late. - To increase the volume, temperature, etc., of something by pressing a button….. Example : Please turn the music up 😉
OPCFW_CODE
One thing to see is userIds, type comes from req.body while userId that’s are aliased as chatInitiatorId is coming from req by way of our decode middleware. If you keep in mind, we affixed app.use(“/room”, decode, chatRoomRouter); within server/index.js document. Meaning this path /room/initiate try authenticated. Thus const < userId:>= req; is the id regarding the current user signed in. We simply phone the initiateChat technique from ChatRoomModel and pass it allUserIds, kind, chatInitiator . Whatever result comes we simply pass they into user. However before we build a note we need to develop a model in regards to our chatmessages . Thus why don’t we accomplish that very first. In your systems folder make a unique file called ChatMessage.js and include this amazing contents to they: - We a MESSAGE_TYPES item that has only 1 sort labeled as text - Our company is determining all of our outline for chatmessage and readByRecipient - Next we have been writing our static means for createPostInChatRoom I understand this is exactly plenty of content material, but simply carry with me. Let’s simply write the controller when it comes to route that brings this information. When it comes to path explained in our routes/chatRoom.js API (‘/:roomId/message’, chatRoom.postMessage) let’s visit the controller in controllers/chatRoom.js and determine it: Read dialogue for a chat room by dating4disabled it’s id [Get request] For the path .get(‘/:roomId’, chatRoom.getConversationByRoomId) in routes/chatRoom.js open its operator in file controllers/chatRoom.js and include the following material towards cam space: And then at last, go to your ChatMessage model in models/ChatMessage.js and create a fixed strategy also known as getConversationByRoomId : Mark an entire talk as read (ability similar to WhatsApp) After the other individual are signed in and additionally they look at a conversation for a-room id, we should instead mark that discussion as review off their part. All we have been performing here is very first examining if the area is available or otherwise not. Whether it do, we continue furthermore. We consume the req.user.id as currentLoggedUser and pass it on the preceding features: A potential use situation is that the consumer might not have browse the latest 15 messages whenever they open up a particular space dialogue. They ought to all be marked as read. Therefore we’re making use of the this.updateMany work by mongoose. This states i do want to find the information articles in chatmessages range in which chatRoomId suits and readByRecipients selection will not. The userId that I am driving to the features was currentUserOnlineId . Next we should tell mongoose to not simply modify one record it locates, but additionally to revise all of the documents in which the disease fits. So doing this: Get in touch with myself on twitter together with your feedback a€“ i might like to listen when you yourself have any recommendations for modifications: twitter/adeelibr Any time you preferred to the article, please carry out provide the github repository a superstar and subscribe to my youtube channel. Figure out how to code at no cost. freeCodeCamp’s available origin program possess assisted significantly more than 40,000 group bring opportunities as designers. Get going The mission: to help individuals learn to code 100% free. We attempt by creating hundreds of clips, reports, and entertaining coding training – all free on the community. We also have tens of thousands of freeCodeCamp study organizations internationally. Within root folder establish an innovative new folder called config . Inside that folder establish a file also known as index.js and add these information: Here we’re telling they that firstName was of kind sequence. If the individual forgets to include this benefits while showing up in API, or if perhaps the kind is not sequence, it’ll toss an error. We become the userId from your req.params . Should you recall through the video early in the day, req.params will be the /: identified in our paths point. We’re going to need that user id additionally the client id (an individual’s own unique outlet id that brings about whenever they create an association with the help of our make). We are with the make-validation library right here to confirm an individual’s request. The start API, we count on an individual to transmit several users as well as define the type of the chat-room this is certainly becoming created.
OPCFW_CODE
PHP session values not set I'm having some strange issues with PHP session variables claiming to not be set. I'm only encountering this in one particular situation: My site has a 3-step wizard and I use sessions to store the user's selections on each step. To start the wizard, I use an init script that ensures any old wizard session data is wiped out - this init script then redirects the user to step 1. For example: // Initialize wizard session and send user to step 1 $_SESSION['wizard'] = array(); $_SESSION['wizard']['step1'] = TRUE; session_write_close(); header('Location: http://mysite.com/wizard/step1.php'); Then at the top of step1.php, I do a check like: if (!isset($_SESSION['wizard']['step1'])) throw new Exception('Step1 not initialized'); When the user submits the step1 form, it is posted back to itself for validation. If it passes, another redirect is done to step 2. Most of the time, this works fine. In fact, the init script always works and the step1 form always loads without a problem. But sometimes, after submitting the step 1 form, the 'Step1 not initialized' exception gets thrown. I don't see how the initial load could pass the check but the form post fail it moments later. Especially considering this problem happens infrequently and most of the time there are no problems at all. I am using a database to store my session data and I don't think this is due to session timeouts or garbage collection - some related php.ini values: session.use_cookies = 1 session.cookie_lifetime = 0 session.gc_probability = 1 session.gc_divisor = 1000 session.gc_maxlifetime = 86400 Does anyone know what could be causing such a problem? Any insight would be greatly appreciated. Thanks, Brian I notice you posted your init, with the exception of session_start(); Is it safe to assume this has been included in your code? At the top of every script I call session_start() if I find a session cookie, ie. if (isset($_COOKIE['PHPSESSID'])). Perhaps that's incorrect? The vast majority of the time things work without a problem.. I'm stuck as to why I can't replicate it intentionally. Can you post your session_start() code in your question? It might be significant to diagnosing and solving the problem. Ok, I've just removed the Cookie check I mentioned above after reading this post. We'll see if I continue to have the problem. The code was simply if (isset($_COOKIE['PHPSESSID'])) session_start(); If not the entire session is empty, but just that variable/key, you can use this to track the reason: class foo extends ArrayObject{ function __destruct(){ echo 'dying:'; debug_print_backtrace(); } } session_start(); $_SESSION['wizard'] = new foo(); //array access is still possible $_SESSION['wizard']['foz'] = 1234; //reading it like an array also echo $_SESSION['wizard']['foz']; //on normal completion, it also gets called, the backtrace would be: //dying:#0 foo->__destruct() //^ ignore those //on overwriting / deleting values, like for instance this by accident: $_SESSION['wizard'] = array(); //the backtrace is something like: //dying:#0 foo->__destruct() called at [filename:linenumber] ... and you'll have a filename+linenumber Possibly write it to a temporary file rather then echo'ing it to make sure you don't miss stuff on redirects etc. Have verified that this is not a session_start() problem. Still unsure what's going on but this is clever advice for continued troubleshooting. Make sure that every script that uses your sessions begins with session_start() Did you remember to call session_start() before interacting with $_SESSION and also before any output has be sent to the browser (including any white space or blank lines before <?php? Yes, it's called at the top. I'll triple-check the whitespace issues - but if these were the cause, wouldn't it break every time and not so infrequently? And wouldn't I get a PHP warning? @Brian yes you'd get a warning - I missed the "infrequently" part
STACK_EXCHANGE
Python how to get function formula given it's inputs and results Assume we have a function with unknown formula, given few inputs and results of this function, how can we get the function's formula. For example we have inputs x and y and result r in format (x,y,r) [ (2,4,8) , (3,6,18) ] And the desired function can be f(x,y) = x * y try using some genetic algorithms that evolves a grammar searching for the desired formula ;) What if there is infinite number of satisfying formulas? If you solve this i think 100 million people will be interested. Isn't this just a system of equations? Perhaps you should take a linear alegbra or differential equations course. In any case, this does not deserve a python tag I would say it's a duplicate of this one: http://stackoverflow.com/questions/36466283/an-algorithm-to-generate-the-next-element-from-a-sequence-by-finding-a-patter/36466936/ @DeepSpace one of the satisfying formulas is enough. @sinabakh still impossible. Another possible answer: f(x, y) = (y ** 2) / 2. As you post the question, the problem is too generic. If you want to find any formula mapping the given inputs to the given result, there are simply too many possible formulas. In order to make sense of this, you need to somehow restrict the set of functions to consider. For example you could say that you're only interested in polynomial solutions, i.e. where r = sum a_ij * x^i * y^j for i from 0 to n and j from 0 to n - i then you have a system of equations, with the a_ij as parameters to solve for. The higher the degree n the more such parameters you'd have to find, so the more input-output combinations you'd need to know. Variations of this use rational functions (so you divide by another polynomial), or allow some trigonometric functions, or something like that. If your setup were particularly easy, you'd have just linear equations, i.e. r = a*x + b*y + c. As you can see, even that has three parameters a,b,c so you can't uniquely find all three of them just given the two inputs you provided in your question. And even then the result would not be the r = x*y you were aiming for, since that's technically of degree 2. If you want to point out that r = x*y is a particularly simple formula, and you would like to look for simple formulas, then one approach would be enumerating formulas in order of increasing complexity. But if you do this without parameters (since ugly parameters will make a simple formula like a*x + b*y + c appear complex), then it's hard to guilde this enumeration towards the one you want, so you'd really have to enumerate all possible formulas, which will become infeasible very quickly.
STACK_EXCHANGE
Leaked Objects: ALasset and ALAssetPrivate I am using Profile to find any memory leaks. I found 2 interesting leaks, which i can't understand: Leaked Object | Responsible Library | Responsible Frame ALAsset AssetsLibrary [ALAssetsGroup _enumerateAssetsAtIndexes:options:usingBlock:]_block_invoke_0125 ALAssetPrivate AssetsLibrary -[ALAsset initWithManagedAsset:library:] Is is my problem or AssetsLibrary? Are there any ideas how to fix this? The problem lies in the Asset library itself. It contains a memory leak. Evidence is that the following code already shows a leak in the profiler (notice that I commented out the line where I added the asset to a mutable array): [assetGroup enumerateAssetsUsingBlock:^(ALAsset *result, NSUInteger index, BOOL *stop) { if(result == nil) { *stop = YES; } else { //[theAssets addObject:result]; } }]; A possible fix would be to check the retain count of the ALAsset pointer and release it yourself an extra time if the retain count is > 1 (it should be 1 at the end of the block if you have not retained it yourself). EDIT: I noticed the leak is actually an ALAssetPrivate object which is over-retained by ALAsset, the retain count of the ALAsset instance is correct. EDIT: Stupid me, the memory leak was actually caused by a category I implemented on ALAsset which included a dealloc method of itself. This was the cause of the leak. Is is my problem or AssetsLibrary? Are there any ideas how to fix this? Highly likely leaks are caused by your own code. The fact that Responsible Frame shows ALAsset only means that the memory was allocated in that library. But if you are the owner of that memory, you are responsible of the leak. As to how to fix it, first of all, give a try to the static analyser in Xcode. That helps sometimes. If it doesn't, the review how you use the AssetsLibrary or any intermediate framework that you are using to access it. Check all you properties, and each call to alloc/init or convenience constructors. If you have no clue about where the leak might be produced, a useful technique is commenting out blocks of code selectively (of course, you should this in a sensible way, so that the app can run and not crash) and check again against Instruments until the leak disappears (in which case you know what was causing it). So i can't know where exactly is the leak, i see. Thank you So commenting all blocks i shall find the proper block. Also i do not understand what is the assetprivate. Is it retained with asset or what it is? I never use such object. Yes, you could go by "try and check", one block at a time. Also: top down, by commenting out a large chunk, then going deeper. AssertPrivate is some object that some ALAsset instance is owner of; if you leak the latter, the former is also leaked. glad to have helped. hope you can find it quickly. If you are using ARC, look specifically for cyclic dependencies (e.g., using self or a property within a block, etc.)
STACK_EXCHANGE
Evaluating functors inside a functor in Prolog I follow the book Problem solving with Prolog by John Stobo. I've studied the Chapter 1 (Programming with Facts) and Chapter 2 (Programming with Rules) Now I am at Chapter 3: Recursion in Rules and I'm practicing the program given in the Section 3.1. I've elobarated the program a bit (without changing the main structure) and added my own functor (or function or rule ?) named is_rank_lower/2 but it doesn't work as expected. When I enter (or ask Prolog) is_rank_lower(ryan, jondoe). the output is false. the expected output: true. Because ryan is a private and jondoe is a corporal and private is lower in rank than corporal. The explanations are in the code. Question #1: How to make my own functor is_lower_rank work as expected? Question #2: This question might be related to the book because when I write down the program exactly as it is, it works slightly wrongly and that might be causing my own functor to function wrongly, too. Just a guess. When I enter: lower_rank(private, corporal). Prolog returns with true and waits at it, I have to put a dot after the true and click enter only then does it return to the ?- prompt. The expected output is: return with true. then return to the ?- prompt The author seems to talk about this problem. In page 57 he writes "lower_rank would not terminae if the goal ought to fail" I've applied all the instructions but the functor still doesn't work. How to make it work? My prolog version swi-prolog 7.2.0 % John Stobo, problem solving with Prolog, March.1989 % FACTS: next_degree(private, corporal). next_degree(corporal, sergeant). next_degree(sergeant, lieutenant). next_degree(lieutenant, captain). next_degree(captain, major). next_degree(major, "lieutenant colonel"). next_degree("lieutenant colonel", colonel). next_degree(colonel, "brigadier general"). next_degree("brigadier general", "major general"). next_degree("major general","lieutenant general"). next_degree("lieutenant general", general). soldier(ryan, private). soldier(jondoe, corporal). sooldier(smartson, captain). % RULES: lower_rank(R1, R2) :- next_degree(R1, R2). lower_rank(R1, R2) :- % this works but if next_degree(R1, R3), % the result is "true" lower_rank(R3, R2). % it doesn't end properly % only if the user types a dot, it ends properly is_rank_lower(A1,A2) :- lower_rank(soldier(A1,X), soldier(A2,X)). % doesn't work because the functors are inserted as % 'soldier(ryan, _G1471), soldier(jondoe, _G1471)) % not as private, corporal, i.e. they are not evaluated Just want to mention: if your issue is with unwanted choicepoints, rather than actual logical errors, then my advice is: simply don't worry about them. As you become more familiar with Prolog, you'll learn how to handle them (using e.g. once/1) . @brebs thank you for the comment. Actually I don't plan to use Prolog for a long time. I 'm interested in it mainly out of curiosity. @GuyCoder yep I was mixing strings with atoms because the SWI didn't complain when I entered those strings with the intention to be used as variable or better say, atoms. @GuyCoder I've just tried compare(Oder, 'a b', "a b"). and it returned with Oder = (>). I think it give the priority rank according to some criteria. That next_degree/2 seems like a bizarre method - is the book suggesting it as sensible, or as an example of what not to do? There are decent books at https://swi-prolog.discourse.group/t/useful-prolog-references/1089 This works: % First argument is an atom, hence single quotes in swi-prolog rank_order(private, 1). rank_order(corporal, 2). rank_order(sergeant, 3). rank_order(lieutenant, 4). rank_order(captain, 5). rank_order(major, 6). rank_order('lieutenant colonel', 7). rank_order(colonel, 8). rank_order('brigadier general', 9). rank_order('major general', 10). rank_order('lieutenant general', 11). soldier(ryan, private). soldier(jondoe, corporal). % Not mis-spelled as "sooldier" soldier(smartson, captain). rank_lower(RankLower, RankUpper) :- rank_order(RankLower, RankLowerOrder), rank_order(RankUpper, RankUpperOrder), RankLowerOrder < RankUpperOrder. soldier_rank_lower(SoldierLower, SoldierUpper) :- soldier(SoldierLower, RankLower), soldier(SoldierUpper, RankUpper), rank_lower(RankLower, RankUpper). Results in swi-prolog: ?- rank_lower(private, corporal). true. ?- soldier_rank_lower(ryan, jondoe). true. ?- soldier_rank_lower(L, U). L = ryan, U = jondoe ; L = ryan, U = smartson ; L = jondoe, U = smartson ; false. 2nd attempt: rank_next(private, corporal). rank_next(corporal, sergeant). rank_next(sergeant, lieutenant). rank_next(lieutenant, captain). rank_next(captain, major). rank_next(major, 'lieutenant colonel'). rank_next('lieutenant colonel', colonel). rank_next(colonel, 'brigadier general'). rank_next('brigadier general', 'major general'). rank_next('major general', 'lieutenant general'). rank_next('lieutenant general', general). soldier(ryan, private). soldier(jondoe, corporal). soldier(smartson, captain). rank_lower(RankLower, RankUpper) :- rank_next(RankLower, RankLower1), % Increase lower to eventually meet with upper ( RankLower1 = RankUpper ; rank_lower(RankLower1, RankUpper) ). soldier_rank_lower(SoldierLower, SoldierUpper) :- soldier(SoldierLower, RankLower), soldier(SoldierUpper, RankUpper), % Won't have multiple answers once(rank_lower(RankLower, RankUpper)). This makes the following deterministic: ?- soldier_rank_lower(ryan, jondoe). true. ... whilst keeping the generality of rank_lower/2, i.e.: ?- findall(L-U, rank_lower(L, U), Pairs), length(Pairs, Len). Pairs = [private-corporal,private-sergeant,private-lieutenant, ... Len = 66. Thank you for the answer. However, it only answers / corrects the second question: The double quotes might really be the reason SWI prompt not returning properly. When I replaced all double quotes with single quotes, SWI prompt started to return properly. I've corrected the typo at the soldier too. Now the Question #2 is asnwered but the main question is not. I'd like to make the functor is_lower_rank work without rewriting the whole code i.e. the main body of code should stay as it is in the book. Soon I'll add the question a bounty of 250 to get this message through. Extended answer, should be acceptable now.
STACK_EXCHANGE
Data variety has been the Achilles’ heel of enterprise BI and analytics projects since long before the three V model of Big Data brought it into the spotlight (along with its siblings ‘volume’ and ‘velocity’). Over the past 20 years, my career in technology has spanned data mining, ETL, Agile BI, and other data-intensive application areas. Throughout those two decades, the challenge of data variety has remained stubbornly entrenched. Corporate visions of data-driven digital transformation still get knocked down to earth by the reality of data integration challenges, and the only way out seems to be through the interminable IT backlog–which, for practical purposes, often means no way out. Before the dot-com crash of 2000, I was with Torrent Systems, working with an internet search pioneer customer to apply market basket analysis to improve internet search suggestions. The premise was straightforward: associate a user profile and a narrow window of search history with click-through, and cross-sell the high click-through items for related profiles and searches. There were the normal challenges of re-constructing sessions from web server logs, and identifying click-through (“conversion”). But the client had several web properties, and needed to connect users and sessions across those properties. Of course, the web logs were not synchronized, nor were user accounts (when we had them); and the data was dirty, since sessionization and conversion detection are imperfect. As the project gained momentum, there was one data mining expert building the model, and three of us working full-time to get all of the data to line up. After an enormous investment of time and energy, we did get a system working, but it wasn’t enough to weather the dot-com crash. Data variety should have been listed as a contributing cause on the death certificate. Ten years later, I was with Endeca, working on an Agile BI proof-of-concept with a large automobile manufacturer. The idea was to show ad-hoc analytics across manufacturing, marketing, sales, and service, without IT having to go and build OLAP cubes and hand-tune a BI system for a carefully crafted set of queries. The technology worked really well, and we had very talented pre-sales engineers, so we were able to get from data to visualizations within a week or so. For the first time, they were able to visualize their data without it first going through the hands of IT. What they saw was that, across organizations, the dimensions didn’t line up, identities weren’t reconciled, values weren’t conformed, and the resulting charts served primarily to illustrate the diversity of their data, rather than direction for their business. So their first reaction was, ‘wow, my data is a mess! I need to kick off a year-long data cleaning project so I can do Agile BI!’ Which kind of flies in the face of the whole notion of ‘Agile’. In early 2013, when I started talking to Mike Stonebraker and Andy Palmer about their vision for machine-assisted, bulk data unification, my immediate response was, “if this technology can actually do what you say it does, I want in.” As soon as I saw it in action, I became a believer. Whether the challenge is to bridge operational systems like ERP or CRM, or rationalize data dimensions from multiple departments or divisions, this approach of bringing the data, the subject matter experts, and machine learning into direct collaboration enables rapid insight in a manner that is both value-driven and data-driven. I wish I could bring this approach back to tame the data variety challenges that made those old projects limp along, so that rather than focusing so much on the mechanics of data processing, we could instead see rapid progress towards the insights we so urgently needed.
OPCFW_CODE
Are career questions OK if both the question and answers contain generally applicable information? I asked a question about joining a startup as a grad, specifically about the scenario where there wouldn't be any more senior programmers working above the grads: https://softwareengineering.stackexchange.com/questions/214306/as-a-soon-to-be-grad-will-my-skills-and-or-career-suffer-if-i-join-a-startup-wh It was closed as off-topic (career advice), but I think the question of whether grads should join startups where there might not be technical leadership is one that could apply to lots of people, and the information in the accepted answer are definitely general enough to be helpful to future readers. Thus the closure of the question seems to go against the 'rule of thumb' from this officially cited answer: Are Career Advice questions useful to anyone except the poster? Would it help if I edited the question remove most of the information that is specific to my situation, making it even more generally applicable? Two final things: I flagged the question for mod attention and was told to come here. Also I'm not new to Stack Exchange, I'm an established user on Stack Overflow, but I wanted to ask this programmers question with a throwaway, seeing as I'm expressing concern about a potential future employer. I think the closure of this question is correct. Not that the question itself is bad, but I think it does not pass the test: "Would the answer to the question be materially different if a non-programmer answered it?" Rephrasing your question to a non-programmer one: "As a soon-to-be-grad, will my skills and/or career suffer if I join a startup where I'll be the (equal) senior-most developer/administrator/engineer/accountant/lawyer?" I work with HR and I guess the answer to this (generic) question should be the same as the answer to your question. Fair enough. I'm still not sure I agree, because I think the answer to the question varies greatly depending on the field. For example, it might be OK as a programmer, but not as an administrator, etc. But clearly the majority disagrees with me, so I'm happy to admit defeat! :) To improve your question's chance of survival, I'd remove the word "career" everywhere it occurs and focus on skills. I'd also remove the career-development tag. When reviewing answers, keep in mind that no one really knows. I'm sure there are plenty of examples of heavy-duty programmers who pulled themselves up by their bootstraps without more senior coworkers. (The Google founders, early Facebook devs, etc...?) After all, our industry is obsessed with youth and it's companies like these that serve as the archetypes. On the other hand, a more skilled and enthusiast coworker isn't going to make you dumber. (Both Google and Facebook didn't waste any time hiring the big-brain PhDs and hackers once they gained some traction.) To improve this meta question's chance of survival, I'd make it specifically about the validity of your question, not all career questions and answers. If you keep it general, I imagine it might be closed as a duplicate. No, career questions are off topic precisely because they do not generalize well. Giving a good career recommendation requires lots of personal information (career goals, skills, education, work eligibility, location, income requirements, experience, etc.). Even if we gave the perfect recommendation for you, that may not be helpful to others. If we tried to give a general answer, the recommendation may not be useful to you because it doesn't consider your unique situation. Also, more general career questions tend to attract answers based almost entirely on opinions rather than facts, references, and specific expertise. Did you even read my question? I wasn't looking for a career recommendation. In fact, I even explicitly said in my question that I wanted an answer to one specific concern (skill growth without a senior dev), and to ignore all other issues. @user104787 Yes I read your question on the main site. I interpreted your meta question "Are career questions OK if both the question and answers contain generally applicable information?" to be about career questions in general, not just your specific question. I think the closure of this question is correct. Not that the question itself is bad, but I think it would have a better home over at The Workplace. Without wanting to be a dick about it, could you explain why? To me it passes the 'will this be useful for others' test, and it also passes the 'is it specific to programming' test. I get that The Workplace is designed to cater to a lot of the questions asked here that apply to lots of professions, rather than being programming-specific, but I think that mine is programming-specific. I'm asking specifically about small startups for grad programmers, which I think is a different answer to the general question of grads in any field working at small or startup companies. @user104787: My reasoning was the same as Hbas outlined in his answer, but he wrote it down much better.
STACK_EXCHANGE
run commands as administrator with conditions with specifications I wanted to run this cmd command as administrator sleep -m 500 So I used this command. powershell -Command "Start-Process sleep.exe -m 500 -Verb runas" Error appeared: Start-Process : A parameter cannot be found that matches parameter name 'm'. At line:1 char:47 + Start-Process C:\Windows\System32\sleep.exe -m <<<< 500 -Verb runas + CategoryInfo : InvalidArgument: (:) [Start-Process], ParameterBindingException + FullyQualifiedErrorId : NamedParameterNotFound,Microsoft.PowerShell.Commands.StartProcessCommand Then, I used this command powershell -Command "Start-Process 'sleep.exe -m 500' -Verb runas" Error appeared: Start-Process : This command cannot be executed due to the error: The system cannot find the file specified. At line:1 char:14 + Start-Process <<<< 'C:\Windows\System32\sleep.exe -m 500' -Verb runas + CategoryInfo : InvalidOperation: (:) [Start-Process], InvalidOperationException + FullyQualifiedErrorId : InvalidOperationException,Microsoft.PowerShell.Commands.StartProcessCommand Then, I used this: powershell -Command "Start-Process sleep.exe /m 500 -Verb runas" Error appeared: Start-Process : A positional parameter cannot be found that accepts argument '500'. At line:1 char:14 + Start-Process <<<< C:\Windows\System32\sleep.exe /m 500 -Verb runas + CategoryInfo : InvalidArgument: (:) [Start-Process], ParameterBindingException + FullyQualifiedErrorId : PositionalParameterNotFound,Microsoft.PowerShell.Commands.StartProcessCommand Can anyone tell me the correct command for it? I want it to be on batch version. powershell help start-process Powershell and batch are completely different languages Yes I know, I just can't figure it out how can I run sleep -m 500 as administrator using powershell.. Yea, I know how to run it as admin by using -verb runas but how can I do the command I want to run (sleep -m 500)? That's all I need then problem is solved. I figured it out. The command is this: powershell -Command "Start-Process -FilePath C:\Windows\System32\sleep.exe -ArgumentList -m,500" Thanks, ACatInLove for making me see powershell help start-process :P If you're looking for a more PowerShell native function, use Start-Sleep nope I want it for any program :)
STACK_EXCHANGE
Join Gini von Courter for an in-depth discussion in this video Slack versions, part of Learning Slack. - [Instructor] If you go to slack.com and click Pricing, then you will end up in a pricing guide where you can see information about the different features that come with the three major versions of Slack, although there is a fourth version. We'll talk more about that in a moment. The free version is for small teams or simply for folks who want to try out Slack for evaluation. And if you scroll down, you'll see that the last 10K of your team's most recent messages are searchable with free Slack, that you can integrate up to 10 third-party or custom integrations, that you have mandatory two-factor authentication, that your voice and video calls are only one-on-one, a total of five gigabytes of storage to split between anybody, and we have standard support. Now, once you begin paying for Slack, that would be Standard, for example, you have the ability to have unlimited messages searched. You have access to shared channels, to the ability to add guests who aren't members. You can have single sign-on, create custom user groups, have 15 participants on a voice or video call, and include screen sharing. Immediately, we get 10 gigabytes per team member. In other words, the amount of storage that a free group shares is doubled for each and every member, and you get more support. If you want more than that, particularly if you want to have 24/7 support with quick response, single sign-on, and so on, then you might take a look at the Plus version, which is $12.50 per user per month. And there's also yet another plan, which is Slack Enterprise, which is for a number of interconnected Slack workspaces throughout your entire company, and you don't simply get to buy this. You have to have a consultation with Slack to make sure that it's exactly what it is that you need. In this course, I'm going to use the free version of Slack. So the features that I will demonstrate will be available in every single version of Slack. But at the end of the course, I'll also show you how to upgrade your free Slack workspace to the Standard version, a logical choice if you want to be able to search unlimited numbers of messages or have more than 10 app integrations or be able to have voice and video calls with more than one person and additional file storage. One more thought, Slack is an evolving product, so it's not only possible but very likely that at some point your screen in Slack will look different than mine does. Don't let it worry you. The good news is that this almost always happens because some exciting new feature has been added to Slack, and you have access to it. - Identify how to login to Slack workspaces. - Distinguish different types of Slack channels. - Describe methods of setting your status. - Explain uses of different types of channels and Slack communications. - Choose the best way to communicate across the company. - Describe where communications are stored in Slack. - Explain the relationship between messaging and channels. - Compare and contrast storage methods in Slack and storage apps. - Identify workspace creation and configuration tasks.
OPCFW_CODE
One of the first things I do with any new Windows computer is to get the Desktop working the way that I wanted. This was one of the major reasons I stayed away from Windows 8 until Windows 8.1 was pre-installed. Fortunately, Windows 8.1 Update was promptly available after I bought my notebook, so I was able to get its additional enhancements for Desktop users. As I previously wrote, I added a Windows Start Button program, which gave me the Start menu functions that were missing from the Desktop. It’s easy configuration options also let me set Windows 8.1 to boot to the Desktop (I know that 8.1 could already boot to the desktop, but you had to go looking for the way to make the change). The things I set up on all the Windows computers, customizing things that are not solely Windows 8 issues, are Shutdown and Restart/Reboot icons on the Windows Desktop. I don’t like to have to go into Metro/Modern/Tiled Mode in Windows 8, just to be able to shut down the computer. Similarly, I didn’t like to have to go through the Start menu in earlier versions (or with a Start menu add-on for Win8) to shut down or restart my computer. The solution is to set up a Shutdown icon and a Restart icon. The shutdown.exe command has a large number of options, including Logoff and other options in addition to actually shutting down and restarting the computer. The first step in creating the Shutdown icon is to right-click on the Desktop, hover over the New option, and select Shortcut from the fly-out menu. The first task is to enter the appropriate command for the Shortcut to execute. In this case, we want to trigger an immediate shutdown of the computer, so the command is: shutdown -s -t 0 shutdown /s /t 0 The help command says that the syntax uses the slash, but the dash also works, and is the form that I’ve been using for years. To see a complete list of the options, execute the CMD command, either in the run box that’s available via Start8, or in Win8 Tiled mode, start typing CMD and select CMD.EXE. This will open a command window (we used to call these DOS windows). and press enter. The Shutdown command, without any parameters, is the request for help (don’t try to get help with the /h option — that’s the Hibernate option). After entering the Shutdown command and the appropriate parameters (the -t 0 means delay zero seconds before doing the -s shutdown parameter). Other options that I often use are -h for Hibernate, -r for Restart/Reboot and -l for Logoff. The resulting dialog box will be pre-populated with the name of the program to be executed. However, this is really a text field that is the title to be displayed on the Desktop. It does not need to have the name of the program! after you click Finish, you have a standard icon on the Desktop. It’s time to get an icon image that we want to use for easy recognition. Right-click on the new icon and pick Properties. In the resulting dialog box, click on the Change Icon… button, which will open the Change Icon dialog box. As you can see, the first thing you see is that this particular program does not have any icon images in its code. That’s ok. Just click OK, and pick one from the resulting dialog box. Notice that you can also browse to different files to see what icons they have. Once you have the Shutdown icon created, it’s time to make a Restart one. Just select the Shutdown icon, press Control-C to copy, click elsewhere on the Desktop, and then Control-V to paste (to create another icon). Label it Restart. Right-click and select properties. Change the -s (for shutdown/stop) to -r for restart. The Restart command is shutdown -r -t 0. Then, click the Change Icon button and select the icon image you want to use for your restart icon. For a logoff icon, change the -s to -i and delete the -t 0 parameter. If the -t 0 is there, the logoff will not work. That’s strange. The logoff command is only Above, you can see the three icons I’ve created for Shutdown, Restart and Logoff. All three icon images are in available in the set shown above in the Change Icon dialog box.
OPCFW_CODE
You are just from a specification meeting with your client. With you is a big list of features the client wants to see on the application. Probably its all in your notebook or note taking app. What do you do with it? Today we shall be discussing some basic techniques on how to properly manage your backlog. First things first Note apps we’re simply not designed with engineers in mind. A productivity tool is needed. Trello is one such tool. Simply create a new board, put in every item to be worked on into the tool. Some candidates for backlog include: - User stories - Work tasks - Areas which you/your team needs to gain competencies in - Major bugs (For ongoing projects) - Tech stories (The invisible work that developers see. An example would be migration scripts) Involve clients and developers Once you have your initial backlog list out. Its time to invite the clients and the team to view the items. This ensures everyone is on the same page about what exactly is to be developed. Encourage as much direct conversation between the client and the developers who will be actively writing the code. During this stage the client has the chance to prioritize what is important since they understand their business the most. The developers however are in charge of the estimations. However this does not mean that the parties operate in silos rather the clients should explain why certain items are priorities and the developers should explain why they have given the estimates. This is not a formalized process, casual conversation is expected and encouraged. Select first features for development The first features will likely correlate with the top priority features but it will not be an exact match. The first features to be developed should be developed based on criteria listed below: - Customer priority. The most important factor to consider - Time to market: Chose features that can enable the application to be quickly tested by end users. This enables quicker feedback cycles. - Resource utilization: Ensure that everyone on the team has something to work on. Ie if you have a DBA there should be some DB work - Co-dependency: Some features will depend on others, ensure to build the most depended ones first This is not an exhaustive checklist. Communication with your client and developers should land you on the ideal initial first set. Break down Epics Epics are user stories that are too big to implement directly. During meeting with the client, you probably gathered a lot of them. In fact at this stage your backlog is made up almost entirely of Epics. Now is the time to break them down to smaller stories that you can easily track. A good rule of thumb is if a user story takes more than 2 hours to develop it’s probably an epic. Assuming developer competency of course. Focus only on current sprint While it may at times be tempting to develop features directly from the backlog. Focus only on items on the current sprint. The items here can not be changed by the client and are easily estimate able. Furthermore you will be computing your metrics from the items on the sprint. Anything that is not improving is guaranteed by entropy to be getting worse. The backlog is a living list and needs to be constantly updated to align with the priorities of the users and reality as it unfolds. How do you manage your own backlog? Tell us in the comment section below.
OPCFW_CODE
M: Thoughts on Software Quality - daveman692 https://www.facebook.com/notes/facebook-engineering/thoughts-on-software-quality/10150154181208920 R: epenn Resisting the knee-jerk urge to rewrite a piece of software from scratch is something that has taken me a while to master. I'm much more levelheaded about it now. I try to think through whether or not there is a legitimate reason for the rewrite. Sometimes that reason certainly exists, but I've found more often than not that doing so is overkill. It's a lesson that has certainly saved me a lot of time since and allows me to focus on more important tasks. R: Stwerner Out of curiosity, what would you say are some legitimate reasons for doing major rewrites? I'm currently going back and forth on whether or not a rewrite is a good idea, and would like to hear what some people would consider good reasons. R: cpeterso Joel ("On Software") Spolsky might say never. If you have a system that is smelly but working, then rewrites of individual _subsystems_ lets you evolve the working system into something cleaner without any discontinuity of service. <http://www.joelonsoftware.com/articles/fog0000000069.html> R: Stwerner Yeah, I've read that before, and other articles like it. Does the feeling of 'this code sucks, needs rewritten' ever go away after inheriting an old, smelly, but working codebase? R: jpadvo "Sustained software quality is an extremely difficult challenge." \- (from the article) Quality can mean a lot of different things. Elegant, simple to understand code. Adherence to current best practices. Modular. And the list goes on... But the most important part of quality for actual users is that your software is solid -- it isn't going to break and cost them time or money. That kind of quality isn't particularly complicated to obtain -- it takes hard work, discipline, and prioritization, but there's nothing magical about it. "When you're a carpenter making a beautiful chest of drawers, you're not going to use a piece of plywood on the back, even though it faces the wall and nobody will ever see it. You'll know it's there, so you're going to use a beautiful piece of wood on the back. For you to sleep well at night, the aesthetic, the quality, has to be carried all the way through." \- Steve Jobs R: hello_moto Unfortunately most developers don't want to put the effort to maintain quality. They just want to write a code, skipping writing automation test, and call it "magically done by me, the greatest hacker in this company, for under 1 hour". Forgive me with the sarcasm but that's apparently quite common. R: morganpyne I would counter that with the argument that most people paying the bills do not want to pay the extra short-term costs associated with doing things more correctly. I know that I'm probably preaching to the choir here when I say that this is obviously a false-economy and you are deferring costs and building technical debt by working like that, but clients often want to save a quick buck. Good clients understand that this isn't wise, and are golden to have as customers. Of course, there is often no real need to build code to be any more than 'just good enough' in the same way that many people are quite happy to purchase a plywood-backed piece of IKEA furniture instead of a solid mahogany hand- crafted piece. It serves their purposes just fine and saves them a ton of money. A good developer will similarly be able to determine what the precise definition of 'good enough' is for any given project scope/budget/time constraints. R: hello_moto I agree with you. I did not mention the bar of quality expected by the client. That is fall under the "negotiation" phase between the client and the service provider. There should be some sort of agreeable quality other than "I just want the best of everything". But the rule might be different for "product" based company than the "services" (consulting) based. R: mcav One of the things that draws me to functional programming* is the relative freedom from side-effects. A poorly designed, tightly coupled system begs to be rewritten, but doing so would come at an enormous cost. If a system is designed to snap together, with easily interchangeable parts, it's easier to be satisfied with evolving the software one piece at a time. Then, when the urge to rewrite something strikes, you can just rewrite that one component, knowing that everything else should work just fine. * I haven't actually built anything substantial in a functional language, other than my blog (in Clojure), so I can't back this up with experience. But I think it's true. R: jcromartie > Then, when the urge to rewrite something strikes, you can just rewrite that > one component, knowing that everything else should work just fine. I'm pretty sure this was the promise of OOP, too. Or at least the promise of the OOP that was sold to me in (limited) school and on the job. R: naradaellis A main reason that it is hard or impractical to refactor a part of an OO system is that you cannot reason about its side-effects on the rest of the system without scouring all of its source. When you are guaranteed that a subsystem has no side effects, you can gut its internals and as long as the interface behaves the same - your system will behave the same. edit: a _stateful_ OO system anyway. R: chernevik The author knows a lot more than I do. But I wonder. There's healthy frustration, which reflects a restless desire to fix _anything_ and thus can't measure the worth of anything, and unhealthy frustration, which despairs of ever making important improvement. Mistaking either for the other seems bad. If mature, talented people are losing interest in improvements because they seem futile, that seems like a danger signal to me. R: vehementi Good points to sate someone like me who gets pretty upset over lack of quality R: warrenwilkinson Sometimes software stinks because it IS bad. Perhaps the software he is using grew from simple to complex and nobody did any housework, thus the smell. Maybe it does need some work. R: danenania I think the point is that anyone's software would stink on some level when set against the toughness of some of the challenges a super-behemoth like Facebook faces. Some problems are much too difficult for even the best programmers in the world to solve and solve elegantly without multiple attempts. There's usually barely time for one attempt. But people still need to take on these problems, even if it makes them look bad, because eventually they will be ironed out, making way for new problems, bigger problems, and the circle goes on and on. R: DanielBMarkham _Coming back to the earlier examples, I think most of what we perceive as badness or decay are just emergent properties of a complex system in which we cannot focus on all aspects at all times_ This is a key insight, and it usually takes banging your head against architectures you've created yourself a few times in order for it to sink in. Early on, you think that somehow there must be a "perfect" way of coding. So whenever the system gets a lot of cruft, you feel as if you made a mistake somewhere. The much more likely culprit is the impossibility of keeping enough of the thing in your head at any one time in order to keep it consistent. I think FP helps with this a lot, and I think we're going to start seeing larger and larger systems moving to FP. FP is going to bring it's own problems, though, which is why I think a hybrid FP/OOP model, with classes "growing" up from REPL constructs to meet contractual obligations is going to be the future. You'll code in FP, then wire in OOP. R: timclark Wouldn't a functional language better for wiring? Doesn't having function composition and higher order functions give you a much more expressive language for building systems? R: HockeyBiasDotCo Agreed.
HACKER_NEWS
LFR Benchmark Graph – Five different Generators LFR Generator Undirected, unweighted Undirected, weighted Directed, unweighted Directed, weighted Hierarchical LFR Benchmark Graph -- Generator Input (Part 1) Inputs are the graph specifications. The components of inputs (take undirected, unweighted graph as an example) number of nodes average in-degree maximum in-degree mixing parameter minus exponent for the degree sequence minus exponent for the community size distribution minimum for the community sizes maximum for the community sizes number of overlapping nodes number of memberships of the overlapping nodes LFR Benchmark Graph -- Generator Input (Part 2) Input format Example: we use flags to represent the components of input for undirected unweighted LFR Graph Generator -Nnumber of nodes -kaverage in-degree -maxkmaximum in-degree -mumixing parameter -t1minus exponent for the degree sequence -t2minus exponent for the community size distribution -mincminimum for the community sizes -maxcmaximum for the community sizes -onnumber of overlapping nodes -omnumber of memberships of the overlapping nodes LFR Benchmark Graph – How to run the program Step 1: Compile Navigate to the generator file directory Type in “make” and enter. Step 2: Execute Method 1: directly type in the parameters Example:./benchmark -N 1000 -k 15 -maxk 50 -mu 0.1 - minc 20 -maxc 50 Method2: put all parameters inside the flag.dat file (RECOMMENDED) Example2:./benchmark -f flags.dat -t1 3 LFR Benchmark Graph – How to use the output graph. (Part 1/3) The program will produce three files (except for Hierarchical Graph): 1) network.dat contains the list of edges (nodes are labeled from 1 to the number of nodes; the edges are ordered and repeated once, i.e. source- target). LFR Benchmark Graph – How to use the output graph. (Part 2/3) 2) community.dat contains a list of the nodes and their membership (memberships are labeled by integer numbers >=1). LFR Benchmark Graph – How to use the output graph. (Part 3/3) 3) statistics.dat contains the in and out-degree distribution (in logarithmic bins), the community size distribution, and the distribution of the mixing parameter (in and out). Notice… For each test case provided, I put the generator executable file together with it, for future customization use. There are four most important files in each test cases: flag.dat, network.dat, community.dat, statistics.dat Do read the readme.txt file in each generator file if you have any problems! In addition… There are many other famous benchmark graphs! For getting those dataset, this link might help: http://www-personal.umich.edu/~mejn/netdata/ http://www-personal.umich.edu/~mejn/netdata/ For getting those visualized graph: http://studiy.tu- cottbus.de/~clustering/evaluation:comparison_to_lit eraturehttp://studiy.tu- cottbus.de/~clustering/evaluation:comparison_to_lit erature
OPCFW_CODE
2020, we are back better, faster and stronger; ready to kick off this new year! The Developer Program Sprint Demos are back. As part of the Developer Program, we invite you every last Wednesday of every month to our monthly Sprint demos where the engineering team demo what is coming next and answer your questions live. For this first Sprint Demo, on Wednesday 29th of January, we had new faces from the Tableau development team presenting what they have been working on. Analytics Extensions API: Extend Tableau Nathan Mannheimer, advanced analytics product manager at Tableau, kicked off the January Sprint Demos by previewing the first release of the Analytics Extensions API. The Analytics Extensions API allows Tableau users to dynamically extend Tableau's calculation language with any external programming language or service. It’s based on the TapPy Python integration that already allows Tableau users to execute Python scripts and saved functions via table calculations. Thanks to this API, we can extend these Tableau calculations even more with external languages and tools. This means you can create dynamic integrations that pass data to and from Tableau; for example, you can add a new programming language as a calculation engine to Tableau (like Tamas Foldi did with adding Haskell expressions as calculations at the #DataDev Hackathon at TC19) or call a web service directly from a calculation (like Craig Bloodworth, who created an example for getting the weather forecast from external Webservices). If you’re interested in learning more, we hosted a dedicated webinar before the holidays with Tamas and Michael Martin, who developed the integration with RapidMiner. The Analytics Extensions API will be in initial release starting with 2020.1, and the documentation is available on our GitHub. No break for Hyper: Time zone support, additional SQL functions, and PyPi support Since the October 2019 release of the Hyper API, the team continues to work hard releasing new features. This is their fourth release! Jonas Eckhardt, engineering manager, demoed the new features added to this January release during the event. This release brings three new features to our customers: time zone support, additional SQL functions, and PyPI support. One of our customers’ most requested features is time zone support. In particular, the TIMESTAMP WITH TIME ZONE (or TIMESTAMPTZ) type. Now, it’s properly supported, and we added it in a way that feels natural for each language by still enabling high performance. Many customers asked for more SQL functions and in the January release, we’ve documented (and officially support) many new functions: - Manipulation and formatting of date/time values and intervals, also with full-time zone support - Sub-query expressions (e.g., EXISTS, IN, ALL) - Window aggregate functions (e.g., RANK()) And finally, the most requested feature from our Python community! You can now finally install the Hyper API by simply typing pip install tableauhyperapi. That's it. Isn't that simple? Hyper API users, don’t forget to update to benefit from all the new features. New REST API Endpoints for View Recommendations With Tableau 2019.4, we released View Recommendations for Tableau Server or Tableau Online helping users discover relevant content faster. The recommendations are generated by our machine learning models based on usage patterns. And in 2020.1, we’re releasing new REST Endpoints for the REST API to get these recommendations programmatically and to hide or unhide specific views from being recommended. Connie Wong, Senior Product Manager, demonstrated these new endpoints. If your Tableau Server or site administrators hide a view from being recommended, this view will not be displayed on the server Home or Recommendation pages. This could be useful if you have Tableau embedded in your application and you still want to take advantage of this feature and display only the recommended views in your portal.
OPCFW_CODE
I've played Swords & Wizardry Whitebox, and really enjoyed it. I already have it as a DIY printed PDF booklet, so I decided to get Whitehack instead. I am looking for a simple D&D version for quickstarting some one-shot-sessions, but enough meat to play a campaign. I felt, that Whitebox did the trick extremely well. And it's easy to move to S&W Core from WB, if needed! But I am always open for new retro-clone-similacrum-OSR-systems, so I bought this one. This is not a review. I have not read the book yet (except checking how core mechanics work to get some idea what I want from character). Let's see how easy and fast it is to create a character without reading character creation rules beforehand! If you are in a hurry, just transfer a pregenerated character to a copy of the character sheet. Well that was fast! Now let's try to create a character ourselves! I didn't print a character sheet, so I have to draw one myself. Luckily the character sheet is simple (but damn awesome). Picture somewhere below. |This is a cool and so simple!| For 1st edition. STR: 10; DEX: 14; CON: 10; INT: 12; WIS: 16; CHA 13 I want to make a magic-user-guy! That's why I put high WIS, because it affects not only perception and insight but also magical abilities. Whitehack magic is different without spell list, so I'd want to try it out! Attribute scores affect different bonuses, but most of them are not listed yet. But from attribute descriptions I get: +1 to initiative from DEX, two extra languages from INT Character Classes. There are three classes, or archtypes. If you are familiar with True20 (Bluerose has one), this is similar. Deft are skilled, Strong are fighters, and Wise are magicians, alchemist, scholars, priests etc. I am going to be Wise. Magic I get I write a description myself. There is no spell list. Cool! Also spell casting is not Vancian memorize-and-forget, but you loose HP when you use spells. You've got twice spells than your slots. Slots are your spells that are ready to be cast. You must "memorize" spells to slots to use them. Nice! What I write down as a fresh, first level Wise, is: HD: 6 (1d6+1); AV (attack value): 10; ST (Saving Throw): 6; Slots (for miracles) 1: Groups (for mechanical benefices): 2; Raises (which level attributes +): 0 My healing rate is doubled, but I cannot be healed with spells, medicine or skilled treatment. Shields and heavier than leather armor make me pay double HP for miracles. I also get -2 AV (to-hit) for two-handed weapons, except ye wizard weapons (like staff). +2 to save versus magic and mind tricks. Class advantages and restrictions sound extremely reasonable and are totally fine with me. Great! I imagine that double healing rate but unable to boost healing means, that you basically automatically treat yourself all the times. Groups. Broad definition of skills. Groups are attached to an attribute, so the group can give benefices to an attribute roll. My groups are: Preacher, to CHA Duelist, to DEX Preaching duelist? Up yours D&D and sharp weapons restrictions for Clerics! Now there are pregens, and names in a weird place cutting chapter in half. Not confusing, just weird layout. Species. Now it tells me that my first group must be species! Damned. Wait, I fix previous groups I already decided. I think that species should be earlier in any character creation rules. Here's a great example why ;) Oh wait a sec! Groups section says I don't have to choose all groups right away, but can choose two (I did above). Broad definition of skills... bla bla. Write next to an atrribute, check. How to use, two per attribute max etc. Gotcha! Species section right after says that the first type of group is the character's species. It must be chosen at a character creation bla bla. I can note them after two attribute, but if I am half-demihuman I only write it down one attribute. Second is my vocation (but if I am full-dwarf I already used my two groups to be dwarf-dwarf so I can't get a vocation). Vocation is barbarian, or wizard or something. Like your class description (I am a Wise, so barbarian vocation would be interesting concept). If I don't use Groups for species, I am a human. SO if I want to be human wise-preacher-duelist I don't have to use my slots for races. These two chapters, groups and species, could have been written 100 times better! They suck the first time. Booo. Oh wait, what! Species, vocations, affiliations, are all under GROUPS. So I sucked at reading. Affiliations are guilds, societies and what-not. So you can use your two group slots for species, vocations, and affiliations. I am going with what I already got, even though they are not beforementioned. I need to go on, I've been stuck with this too long already! Gold & Equipment. 3d6 x 10 gold, I've got 60. I am poor. Leather armor costs 15 and gives AC 2. Quarterstaff with damage 1d6-1 costs 1. Shortsword with 1d6-1 damage costs 8. Total cost is 24. Rest of the money I use to buy trinkets. I hate buying starting equipment, so I skip this. Calculating equipment weight is also as boring as it gets! Calculating Armor Class. Unarmored 0 + Leather armor = 2. Yay! If they want to hit me, they need to roll between 2 and their attack value. My AV is 10, so I need to roll 2-10 with d20 to hit myself. Languages. From the campaign world, which is unknown for me. I speak my own language and something extra. Maybe divine, because I am a preacher. Character creation is quick, if you know what you are doing. I didn't read it beforehand, so I had a hiccup there with Groups. I think that the character creation is a bit wordy, and all things could be explained so much easier for even quicker results. There should be a fast character creation check list or something to get through with it. But if you know what you are doing, this is good. Enough similarities to know what you are doing, but enough simple variations to make your character stand out really easy. I need to try this with beginner group to check out how well they grab the Groups concept. In my experience beginners can think out of the box, when grognards only think Fighting Men, Magic-Users and Elves.
OPCFW_CODE
How to get the Average on a count of grouped rows In a single table I'm trying to group records on a column, count the total results in each group, and then get the average of the counts: SELECT AVG(cnt) FROM (SELECT COUNT(*) AS 'cnt' FROM birds GROUP BY birdID ); So the table might look like: birdID ------ Robin Robin Robin BlueJay BlueJay BlueJay Falcon The counts for each: Robin - 3 BlueJay - 3 Falcon - 1 And the overall average is 3 + 3 + 1 = 7 / 3 (total groups) = 2.3 My nested query works fine. But it seems like it could be done with one query, and not two. AVG(COUNT(*)) doesn't work, though that's kinda what I'm trying to get at. Sql returns the error 'misuse of aggregate function COUNT()' when using AVG(COUNT(*)) "it seems like it could be done with one query, and not two." it can't, unless you use a windowed AVG(COUNT(*)) OVER (), in which case you get 3 rows the same, and it's less efficient. What you have is correct (note that other database systems require the count to be casted to decimal first). This should do the job... SELECT ROUND(AVG(CNT), 2) AS "AVERAGE_COUNT_OF_IDS" FROM ( SELECT BIRD_ID, COUNT(*) AS "CNT" FROM tbl GROUP BY BIRD_ID ); /* R e s u l t : AVERAGE_COUNT_OF_IDS -------------------- 2.33 */ ... see it here https://sqlfiddle.com/sqlite/online-compiler?id=9be29b9c-3f57-49c8-afaa-5aeb598e3bc2 If you want it without subquery then try this: -- transform counts to decimal values Select Round( Round(COUNT(*), 2) / Round(COUNT(DISTINCT BIRD_ID), 2), 2) AS "AVG_CNT" From tbl; /* R e s u l t : AVG_CNT --------- 2.33 */ ... see it here https://sqlfiddle.com/sqlite/online-compiler?id=9c5fb9ac-9631-4638-b0e6-6ccaaa37bc00 That's the same as OP, apart from the rounding, which because it's odne on the outside is immaterial. @Charlieface I agree, but with a small difference - this for some reason works (see fiddle). We don't know OP's context which makes it hard to tell what is the problem there. But so does OP's https://sqlfiddle.com/sqlite/online-compiler?id=9be29b9c-3f57-49c8-afaa-5aeb598e3bc2 @Charlieface I agree again. Conclusion: OP's context (unknown to us) - something else causes the error. Adding a second output to the fiddle: SELECT AVG(COUNT(*)) AS '2nd avg' FROM tbl GROUP BY BIRD_ID; doesn't produce output. It's likely an error like I had originally. It seems a nested query is necessary. @ScottAlanTurner Just posted the code without subquery...
STACK_EXCHANGE
cannot find where a variable is used in a C++ source code I am beginning with C++, and I am studying the source code of the HElib library that I am going to work with. In the file Test_General.cpp there is a variable k (the "security parameter" according to line 303). Question: Is this varibale unused in this file ? The function void TestIt line 57 takes k as parameter but I don't see any line where it uses it. From my knowledge of the scheme it should be used during the setup phase, typically during the building of context line 79 or shortly after it. Is it possible that this variable is used in this function while it is not noticeable in this file ? It is not used. Why don't you ask the maintainer of that project about it? @robert The maintainer are a small team and are dealing with many technical questions already. Since it was possible that this question was coming from my simple lack of knowledge about C++ I felt like asking on a general programming forum first. But if as you say it is unused then now I'm going to ask the developers why it is so. Seeing as it's a testing function, I predict that the answer is "we haven't really got around to using it yet". SO isn't a forum and I don't see how this is on-topic. It appears to be used here: TestIt(R, p, r, d, c, k, w, L, m, gens, ords); Yes, however, that didn't answer the question. The OP said k appears to be unused in the function, not the file. It is used in std::cerr to form error message. There is no rule that forces programmer to do something with variable, maybe it is just a flag that specifies something useful in error message. Also it might be some backward compatibility. So function can be used with old code which passes 11 parameters and to be compatible with new-one that strictly specifies only 10 (but passes 11 anyway). And there is a thing related to potential future problems, so k may be reserved variable and may become meaningful just in future (so for now you can pass anything). It is hard to say because there may be a lot of different reasons, if you want truly right answer, you should contact person who wrote that code. My bad, it is used at line 349 to compute m: long m = FindM(k, L, c, p, d, s, chosen_m, true); then m is used to build the context at line 79: FHEcontext context(m, p, r, gens1, ords1); So everything is fine now. Sorry for the annoyance.
STACK_EXCHANGE
package gui import ( "regexp" "sync" "fyne.io/fyne/layout" "fyne.io/fyne" "fyne.io/fyne/app" "fyne.io/fyne/widget" "github.com/galamiram/nadctl/internal/nadapi" log "github.com/sirupsen/logrus" ) // GUI - nadctl GUI object type GUI struct { app fyne.App window fyne.Window device *nadapi.Device buttons map[string]*widget.Button labels map[string]*settingLabel } type refreshFuncType = func() (string, error) type settingLabel struct { label *widget.Label refreshFunc refreshFuncType mux sync.Mutex } func (s *settingLabel) refresh() error { data, err := s.refreshFunc() if err != nil { return err } //s.mux.Lock() s.label.Text = data s.label.Refresh() //s.mux.Unlock() return nil } // New - create new nadctl GUI func New(device *nadapi.Device) (*GUI, error) { var ( gui *GUI ) gui = &GUI{ device: device, app: app.New(), buttons: make(map[string]*widget.Button), labels: make(map[string]*settingLabel), } model, err := gui.device.GetModel() if err != nil { return nil, err } gui.window = gui.app.NewWindow(model) gui.window.Resize(fyne.NewSize(300, 400)) gui.window.SetContent( fyne.NewContainerWithLayout( layout.NewGridLayoutWithRows(4), fyne.NewContainerWithLayout( layout.NewGridLayoutWithColumns(2), fyne.NewContainerWithLayout( layout.NewGridLayoutWithRows(2), gui.addLabel("Power", gui.device.GetPowerState), gui.addButton("Power", func() { gui.device.PowerToggle() }), ), fyne.NewContainerWithLayout( layout.NewGridLayoutWithRows(2), gui.addLabel("Mute", gui.device.GetMuteStatus), gui.addButton("Mute", func() { gui.device.ToggleMute() }), ), ), fyne.NewContainerWithLayout( layout.NewGridLayoutWithColumns(3), gui.addButton("Vol -", func() { go gui.device.TuneVolume(nadapi.DirectionDown) }), fyne.NewContainerWithLayout( layout.NewCenterLayout(), gui.addLabel("Volume", gui.device.GetVolume), ), gui.addButton("Vol +", func() { go gui.device.TuneVolume(nadapi.DirectionUp) }), ), fyne.NewContainerWithLayout( layout.NewGridLayoutWithColumns(3), gui.addButton("Brgtns -", func() { go gui.device.ToggleBrightness(nadapi.DirectionDown) }), fyne.NewContainerWithLayout( layout.NewCenterLayout(), gui.addLabel("Brightness", gui.device.GetBrightness), ), gui.addButton("Brgtns +", func() { go gui.device.ToggleBrightness(nadapi.DirectionUp) }), ), fyne.NewContainerWithLayout( layout.NewGridLayoutWithColumns(3), gui.addButton("<", func() { go gui.device.ToggleSource(nadapi.DirectionDown) }), fyne.NewContainerWithLayout( layout.NewCenterLayout(), gui.addLabel("Source", gui.device.GetSource), ), gui.addButton(">", func() { go gui.device.ToggleSource(nadapi.DirectionUp) }), ), ), ) gui.refreshLabels() go gui.listener() return gui, nil } // Start - start the GUI func (gui *GUI) Start() { gui.window.Show() gui.app.Run() } func (gui *GUI) addButton(text string, action func()) *widget.Button { button := widget.NewButton(text, func() { go action() }) gui.buttons[text] = button return button } func (gui *GUI) addLabel(setting string, f refreshFuncType) *widget.Label { label := &settingLabel{ label: widget.NewLabelWithStyle("", fyne.TextAlignCenter, fyne.TextStyle{Bold: true}), refreshFunc: f, } label.label.Alignment = fyne.TextAlignCenter gui.labels[setting] = label return label.label } func (gui *GUI) refreshLabels() { for _, label := range gui.labels { label.refresh() } } func (gui *GUI) listener() { r, _ := gui.device.GetRead() for { str, err := r.ReadString('\n') if err != nil { return } f := getFunctionName(str) log.WithField("f", f).Debug() if lbl, ok := gui.labels[f[1]]; ok { lbl.label.Text = f[2] lbl.label.Refresh() } } } func getFunctionName(s string) []string { compRegEx := regexp.MustCompile(`.*\.([a-zA-Z]*)=(.*)\r\n`) match := compRegEx.FindStringSubmatch(s) if len(match) > 0 { return match } return []string{} }
STACK_EDU
/** * Created by yxia on 9/8/15. */ angular.module('firebase.helper', ['firebase', 'firebase.utils', 'angularGeoFire']) .factory('Auth', function ($firebaseAuth, fbutil) { return $firebaseAuth(fbutil.ref()); }) .service('fbMessageService', function (fbutil, $firebaseArray) { this.sendMessage = function (auth, receiver_uid, message) { var authData = auth.$getAuth(); if (authData) { var userReference = fbutil.ref("users/"+ receiver_uid); var syncArray = $firebaseArray(userReference.child("messages")); console.log('debugg....'); console.log(authData.uid); console.log(message); syncArray.$add({sender: authData.uid, message: message}).then(function () { console.log('message sent'); }); } } }) //$scope.sendMessage = function (name, text, uid_of_reciever) { // var authData = $rootScope.fbAuth.$getAuth(); // if (authData) { // var userReference = $rootScope.fb.child("users/" + uid_of_reciever); // var syncArray = $firebaseArray(userReference.child("messages")); // syncArray.$add({name: name, text: text}).then(function () { // }); // } else { // } //}; //.service('geoFireService', function (fbutil, $geofire) { // var geo = $geofire(fbutil.ref('users')); // // this.set = function (key, location) { // return geo.$set(key, location); // }; // // this.get = function (key) { // return geo.$get(key); // }; // // this.query = function (center, radius) { // return geo.$query(center, radius) // } //}) .service('fbGeoService', function (fbutil, $firebaseArray, $geofire, $rootScope) { this.set = function (auth, location) { var authData = auth.$getAuth(); var geo = $geofire(fbutil.ref("locations/")); geo.$set(authData.uid.toString(), location); }; this.get = function (auth) { var authData = auth.$getAuth(); var geo = $geofire(fbutil.ref("locations/")); geo.$get(authData.uid.toString()).then(function (location) { if (location === null) { console.log("Provided key is not in GeoFire"); } else { console.log("Provided key has a location of " + location); } }, function (error) { console.log("Error: " + error); }); }; this.queryLocation = function (center, radius, maxDistance) { var locations = $geofire(fbutil.ref("/locations")); var locationsQuery = locations.$query({ center: center, radius: radius }); var locationQueryCallback = locationsQuery.on("key_entered", "SEARCH:KEY_ENTERED"); var locationQueryCallback1 = locationsQuery.on("key_moved", "SEARCH:KEY_MOVED"); $rootScope.$on("SEARCH:KEY_ENTERED", function (event, key, location, distance) { console.log("KEY ENTERED FOUND"); $rootScope.otherUsersLocations.push({userId: key, location: location}); // Cancel the query if the distance is > 5 km if (distance > maxDistance) { locationQueryCallback.cancel(); } }); $rootScope.$on("SEARCH:KEY_MOVED", function (event, key, location, distance) { console.log("KEY MOVED FOUND"); // Cancel the query if the distance is > 5 km if (distance > maxDistance) { locationQueryCallback1.cancel(); } }); }; //this.queryR = function (auth) { // var authData = auth.$getAuth(); // var geo = $geofire(fbutil.ref("users/" + authData.uid)); // var query = geo.$query({ // center: [37.785583, 122.399219], // radius: 20 // }); // // console.log(query); // // query.on("key_entered", function (key, location, distance) { // console.log(key + " entered query at " + location + " (" + distance + " km from center)"); // }); // // query.on("key_moved", function (key, location) { // console.log(key + " entered query at " + location); // }); // // return query; //} }) //.controller('AddLocationCtrl', function ($scope, Auth, fbutil, $firebaseArray, $geofire) { // $scope.addLocation = function (lat, lng) { // var lat_i = parseInt(lat); // var lng_i = parseInt(lng); // var authData = Auth.$getAuth(); // if (authData) { // var geo = $geofire(fbutil.ref("users/" + authData.uid)); // //var userReference = fbutil.ref("users/" + authData.uid); // //var syncArray = $firebaseArray(userReference.child("messages")); // geo.$set('location', [lat_i, lng_i]) // } // }
STACK_EDU
Computed metrics are something I wanted to do for a very long time, already in RHQ, but never really got around it and sort of forgot about it again. Lately I found a post that contained a DSL to do exactly this (actually you should read that post not because of the DSL, but because of the idea behind it). After seeing this, I got the idea on what to do and to include this in HawkFX, my pet project, which is an explorer for Hawkular. HawkFx with the input window for formulae, that shows a formula and also a parser error. The orange chart shows Non-Heap used, the redish one the heap usage of a JVM. Formulas are in a DSL that looks a bit like UPN, e.g. as in the following (I've shortened the metric ID for readability, more on them below): (+ metric( "MI~...Heap Used" , "max") metric( "MI~...NonHeap Used", "max")) to sum up two metrics (see also screenshot below). The 'metric' element gets two parameters, the metric id and also which of the aggregates that the server sends should be taken (in this case the max value) - this comes from the fact that we request the values to be put into 120 buckets by the server. Or if you have the total amount of memory you could also subtract the used memory to get a graph of the remaining: (- 1000000 metric( "MI~...NonHeap Used", "max")) You could also get the total wait time for responses at a point in time when you multiply the average wait time with the number of visitors: (* metric("MI~..ResponseTime","avg") metric("MI~..NumberVisitors","sum")) Computed total memory usage Summing up the metrics for 'Heap Used' and 'NonHeap Used' as shown above would then give you a nice graph of the total memory consumption of a JVM: The green chart now shows to combined memory usage of Heap and Non-Heap, which is computed from the other two series. Orange and red are as above. On metric IDs Metric IDs are the IDs under which a metric is stored inside of Hawkular. The example here comes from an installation of Hawkular-services in Docker. If you just feed your metrics into Hawkular metrics, the IDs will looks like the ones you are using. ID (upper) and path fields (lower) for a selected item in the tree I have just pushed an update to HawkFX that provides the ID and path in their own fields at the bottom of the main window, so you can copy&paste them. I will talk more about the parser in an upcoming article. For now it is a personal playground to also better understand what is doable here. If this turns out to be successful I can imagine that the DSL could directly be incorporated into Hawkular-metrics so that the rules are available to all metrics clients. It would of course be cool to have an editor for the formulas that allows to interactively pick metric IDs etc, but I doubt that I will get to this any time soon.
OPCFW_CODE
Hyperlink is a word, phrase, text, image button, or element which will jump or navigate to the other locations of the same or different document. Hyperlinks are used in different cases in order to provide practical tracking. In this tutorial we will examine and learn, what is a hyperlink, how does hyperlink work, creating a hyperlink, hyperlink types, etc. What Is Hyperlink? The hyperlink is a link to the other location on the same or different document or web page. Hyperlinks generally used on web pages in order to provide easy access to the other part of the same web page or different web pages. Also, different web sites or domains can be linked by using a hyperlink. Generally, hyperlinks are depicted differently than the current text or content where it can be distinguishable in order for the user. In some situations like an image hyperlink when the mouse is moved over the image, the cursor icon will change in order to express the image link. Hyperlink definition is a bit of science-related where the term link is preferred in general. Most of the experts use link in order to express a hyperlink. In most cases, the World Wide Web, WWW, or simply internet use hyperlinks as the link. Hyperlinks can connect different sources like image, document, sound, video, service, etc by using URL. Hyperlinks On Web Pages As stated previously hyperlinks are generally used in web pages. A hyperlink can be created with the markup language named HTML. HTML can link hyperlinks into different elements like Section, URL, File, Image, Video, etc. In the following screenshot, each image and text depicted with red square provides a hyperlink to the related web page. Colors are an important part of the hyperlinks where colors are used to express the hyperlink and if it is visited before. By default, hyperlinks are colored with different colors where blue is de facto color. This will make hyperlink different from the normal text which is generally black. Below we can see that links are taken into a red square. We also see that there are some other hyperlinks that are red which means this hyperlink is already visited. We can create a hyperlink in HTML easily by using the <a> tag which is named as anchor. We will also provide the link address we want to redirect or navigate with the href is a short form of hyperlink reference. Below we will create a hyperlink which is named POFTUT where it will link to <html> <body> <h2 id="hyperlinks">Hyperlinks</h2> <p>This is a link to the web site <a href="https://www.poftut.com">POFTUT</a></p> <p>This is a link into poftut.com web page of <a href="https://www.poftut.com/category/linux/">LINUX CATEGORY</a></p> <p>This is a link in this page element <a href="#hyperlinks">HYPERLINKS</a></p> </body> </html>
OPCFW_CODE
I got my first job as an SMS Administrator over 16 years ago. At that time we had just upgraded to SMS 2003 from SMS 2.0. Back then when we wanted programs to install in a given order we chained them together using the option to run another program first on the requirements tab of the program. A few years later, in SCCM 2007, Microsoft released this cool thing they called “Task Sequences”. I remember being on campus in Redmond once and a MS employee saying to the group of MVP’s there, “I know you guys think you can cure cancer with task sequences”. The point of that comment was that Microsoft created task sequences without having envisioned many of the creative ways that SCCM Admins would use them. A tool that Microsoft designed to be used for operating system deployment was quickly adapted to do so much more. So much so that back in 2008 or 2009 publicly predicted in SCCM 2012 the task sequences node would no longer be under Operating System Deployment. Guess I got that one wrong. Based on the title of my blog you may be asking yourself if you missed some major announcement. Has Microsoft truly added task sequences to Intune? Well no, they have not. Not only do we not have task sequences in Intune, we have no way at all to control the order in which things happen. Those of use who’ve been control freaks in SCCM for a very long time yearn for the ability to be the same control freaks in Intune. Many use this lack of control as a reason to shy away from adopting Intune. Even if you’re not one of those control freaks who wants this level of control just because we’ve always had it if you haven’t already run in a case where you need to ensure that action A always happens before action B you will at some point. Something else us old guys used to do back in the day was to use wrappers. See back then vendors who wrote the majority of software we wanted to deploy had never even heard of SMS. Those guys had no idea that we could install their software on 1000’s on PC’s without sending a Technician with a CD to each and every desk. I’m not going to go in to great details here about all then various tools that were used as wrappers back then, but there’s probably someone of you who’ve never even heard the term before. There’s likely even some of you who’ve never had to repackage an app so that it could be deployed from SCCM. Frankly I was never good at it, I don’t think many of us were. The guys who were good at it, those guys were a special breed of expert. “Packagers”, were guys used tools like Admin Studio, Orca, and SMS Installer and that’s all they did. As the software vendors realized the benefits of .msi installers and as customers demanded that apps ship as msi’s we had to repackage less and less apps. I don’t think I’ve had to have one done in over 10 years now. Welcome to 2019! Intune is all the rage! We’ve got pointy haired Managers coming to us telling us that we need to “move everything to the cloud” no matter what functionality we lose. One of the major ones that we lose when we go to Intune for software deployment is that it is much less robust than SCCM when it comes to deploying apps. And I’m using the term “apps” VERY loosely here. An app could be anything from a line of business app, to a script, a file that needs to get copied to every computer, a shortcut, a screensaver…. you name it and we’ve deployed it from SCCM. My typical blogs are short, sweet and to the point. If you are still with me that’s awesome because for once I felt like I needed to provide some backstory to explain why I think this blog may be useful. Or if you are like me when I read, you skipped all the crap above and you’re just looking for the technical stuff I’m getting there. I’m going to show you an old trick that can be used to solve a new challenge. In this incredibly crude example I will show you how to use Advanced Installer (which I’ve chosen because it’s cheap and one of the easier tool to use) to copy a bunch of files to a PC and then perform actions in a given order. If you missed it above when I said I was never very good at this it may become apparent soon. This blog is not meant to be a simple step-by-step guide on how to solve a given challenge. Instead it’s me showing you that not all of your Intune fears are justified and how, with a little creativity, you can work around some things. I’ve created a folder that contains a crude batch file and a subfolder. In the subfolder I have a few installers and some other things that I want to do one new PC’s. For instance I want to installer Acrobat Reader, then install a patch for Acrobat reader followed by setting a reg key that disables protected view. Here’s what the tree looks like when I grep it. And my batch file looks like this: Obviously deploying my batch file along with the supporting files is not possible natively with Intune but it’s very easy with Advanced Installer. I simply create a new project, and add my entire source folder as a temporary folder. Next I create a custom action to launch a file and point to my batch file. Next with the click of one button I build my .msi When run my msi will extract the files and folder to a temp directory and then run my bat file. Notice when I built my msi I selected to have it not register with Windows installer and to wait until my custom actions finishes before completing. And that’s all there is to it. I’ve just deployed several files and folders as a single msi from Intune and I was able to control the order in which things happened through my batch file. The possibilities of what can be accomplished using a wrapper and someone who actually knows how to use them are endless. This was just a crude example to get you thinking about what you need to do and how you can do accomplish it.
OPCFW_CODE
[NEXT] ReferenceInput on Filter: No data when the reference is the same with the resource of the List (i.e. not foreign key) I am using the latest commits on next, after BETA-3, as of today. This problem did not exist up to BETA-3. It was introduced with the recent changes. What you were expecting: ReferenceInput in Filter to be able to show data, like it used to be in next BETA-3. What happened instead: Steps to reproduce: This happens only when the reference (users) is the of the same resource that the List has. It does not happen for "foreign keys", i.e. when the reference is from other resources. See code below: Related code: // THIS REFERENCEINPUT IS **OK** // <ReferenceInput label="ΡΟΛΟΣ" source="role_id" reference="roles" allowEmpty alwaysOn sort={{field: 'name', order: 'ASC'}} perPage={1000} > <SelectInput optionText="nameGreek" /> </ReferenceInput> // THIS REFERENCEINPUT IS **NOT OK**, see screenshot above // <ReferenceInput label="ΕΠΩΝΥΜΟ" source="id" reference="users" allowEmpty alwaysOn sort={{field: 'last_name', order: 'ASC'}} perPage={1000} > <SelectInput optionText="last_name" /> </ReferenceInput> Based on the UI screenshot above, we should have 3 "references" in possibleValues, but we are missing one (the users@id one): Environment Admin-on-rest version: commit #6d2f265, i.e. latest commit as of today Last version that did not exhibit the issue (if applicable): BETA-3 I don't have the model to reproduce the bug, could you try to narrow the commit which caused it using git bisect? The only culprit I can think of is https://github.com/marmelab/admin-on-rest/pull/1616/files#diff-0c7e48ec3f03db5532bb24babd4872b8, but I don't understand how it would break the feature. 008b97e70179a1369f596da759b6b18575c136ab is the first bad commit, based on the bisect. This could be related to: https://github.com/marmelab/admin-on-rest/pull/1621 @tkvw I added your code to the latest next, did make build and this used it. Unfortunately it had no difference. Do you BTW experience the same problem, if you use a ReferenceInput Filter that references the resource used by the List itself? @afilp : Check your redux action log, do you see all the Fetch actions? I only get 2 MATCHING_SUCCESS (not 3), if this is what you are asking: Not sure if this gives any hint, I removed the {...props} from the filter just to see... ...and now the problematic field does get data: (of course this is just a hack to see how props affected the problem since the display has problems with duplicate select boxes, etc.) Found the cause of the problem, this happens when the filter reference (same as resource) starts as alwaysOn: If it is not alwaysOn and you choose it manually to use it, then the data is retrieved fine (works). I hope this helps you narrow down the problem in the code. It should be fixed in next, thanks to #1621. Can you confirm? I confirm that it works, great work, thanks! Fixed by #1621
GITHUB_ARCHIVE
We are proposing a Community Zebrafish Resource for Modeling GWAS Biology that will exploit existing expertise within our institutions in zebrafish genetics, bioinformatics, zebrafish assay development, genetic modeling and mechanistic studies. These studies will lay the foundation for exploration of the gene networks underlying common human disease phenotypes, and establish high-throughput biology in the zebrafish as a platform to complement GWAS across a broad range of traits. Importantly, this approach is readily adapted to drug response phenotypes and novel traits as they emerge. The Specific Aims are; Aim 1 -Initial feasibility assessment and assay development a) Bioinformatics-An initial evaluation of the traits to assess the feasibility of modeling in the zebrafish combined with bioinformatic identification of true orthologs, reagent design and where possible in silico prioritization of candidates. In addition we will specifically explore the relationships between candidate causal SNPs (identified from 1000 genomes data [26, 27]) and the latest tissue-specific ENCODE maps to define the transcription factor networks that may be impacted by the common variants [28, 29]. b) Assay design-We will build representative and quantitative assays for the phenotypes of interest, and anchor these to existing human genotypes and phenotypes using known manipulations of known Mendelian genes regulating the phenotype. Aim 2 -Systematic evaluation of candidate genes and non-coding variants across multiple loci-Once the phenotypic assays have been validated, we will test in the zebrafish each of the candidate genes and regulatory sequences (where the orthologs can be identified) for their effects alone and in combination on the primary trait . Quantitative assessments will be generated for loss of function and gain of function alleles, using existing mutants, morpholinos and transient or stable transgenesis. We propose to study approximately 15-20 GWAS loci per year. Aim 3 -Establishing zebrafish models for downstream discovery-Once we have established the causal genes underlying each GWAS locus, we will develop stable loss of function (using TALEN or zinc finger nuclease technology) or gain of function alleles for each gene [31 -33]. In addition, where relevant we will generate stable reporter strains for subsequent genetic or chemical screens. These lines will be made freely available to the community to accelerate the translation of completed and ongoing GWAS. Modern human genetic studies generate new markers of disease much more rapidly than biologists can study the underlying mechanisms. We propose to generate a community resource to allow investigators to use new high-throughput techniques in the zebrafish to explore the mechanisms of their recent genetic results. This resource will identify the genes causing major common human diseases and will generate animal models to allow additional studies of disease mechanism or potentially drug discovery.
OPCFW_CODE
According to modern models of physical cosmology, a dark matter halo is a basic unit of cosmological structure. It is a hypothetical region that has decoupled from cosmic expansion and contains gravitationally bound matter. A single dark matter halo may contain multiple virialized clumps of dark matter bound together by gravity, known as subhalos. Modern cosmological models, such as ΛCDM, propose that dark matter halos and subhalos may contain galaxies. The dark matter halo of a galaxy envelops the galactic disc and extends well beyond the edge of the visible galaxy. Thought to consist of dark matter, halos have not been observed directly. Their existence is inferred through observations of their effects on the motions of stars and gas in galaxies and gravitational lensing. Dark matter halos play a key role in current models of galaxy formation and evolution. Theories that attempt to explain the nature of dark matter halos with varying degrees of success include cold dark matter (CDM), warm dark matter, and massive compact halo objects (MACHOs). The presence of dark matter (DM) in the halo is inferred from its gravitational effect on a spiral galaxy's rotation curve. Without large amounts of mass throughout the (roughly spherical) halo, the rotational velocity of the galaxy would decrease at large distances from the galactic center, just as the orbital speeds of the outer planets decrease with distance from the Sun. However, observations of spiral galaxies, particularly radio observations of line emission from neutral atomic hydrogen (known, in astronomical parlance, as 21 cm Hydrogen line, H one, and H I line), show that the rotation curve of most spiral galaxies flattens out, meaning that rotational velocities do not decrease with distance from the galactic center. The absence of any visible matter to account for these observations implies either that unobserved (dark) matter, first proposed by Ken Freeman in 1970, exist, or that the theory of motion under gravity (general relativity) is incomplete. Freeman noticed that the expected decline in velocity was not present in NGC 300 nor M33, and considered an undetected mass to explain it. The DM Hypothesis has been reinforced by several studies. The formation of dark matter halos is believed to have played a major role in the early formation of galaxies. During initial galactic formation, the temperature of the baryonic matter should have still been much too high for it to form gravitationally self-bound objects, thus requiring the prior formation of dark matter structure to add additional gravitational interactions. The current hypothesis for this is based on cold dark matter (CDM) and its formation into structure early in the universe. The hypothesis for CDM structure formation begins with density perturbations in the Universe that grow linearly until they reach a critical density, after which they would stop expanding and collapse to form gravitationally bound dark matter halos. These halos would continue to grow in mass (and size), either through accretion of material from their immediate neighborhood, or by merging with other halos. Numerical simulations of CDM structure formation have been found to proceed as follows: A small volume with small perturbations initially expands with the expansion of the Universe. As time proceeds, small-scale perturbations grow and collapse to form small halos. At a later stage, these small halos merge to form a single virialized dark matter halo with an ellipsoidal shape, which reveals some substructure in the form of dark matter sub-halos. The use of CDM overcomes issues associated with the normal baryonic matter because it removes most of the thermal and radiative pressures that were preventing the collapse of the baryonic matter. The fact that the dark matter is cold compared to the baryonic matter allows the DM to form these initial, gravitationally bound clumps. Once these subhalos formed, their gravitational interaction with baryonic matter is enough to overcome the thermal energy, and allow it to collapse into the first stars and galaxies. Simulations of this early galaxy formation matches the structure observed by galactic surveys as well as observation of the Cosmic Microwave Background. A commonly used model for galactic dark matter halos is the pseudo-isothermal halo: where denotes the finite central density and the core radius. This provides a good fit to most rotation curve data. However, it cannot be a complete description, as the enclosed mass fails to converge to a finite value as the radius tends to infinity. The isothermal model is, at best, an approximation. Many effects may cause deviations from the profile predicted by this simple model. For example, (i) collapse may never reach an equilibrium state in the outer region of a dark matter halo, (ii) non-radial motion may be important, and (iii) mergers associated with the (hierarchical) formation of a halo may render the spherical-collapse model invalid. where is a scale radius, is a characteristic (dimensionless) density, and = is the critical density for closure. The NFW profile is called 'universal' because it works for a large variety of halo masses, spanning four orders of magnitude, from individual galaxies to the halos of galaxy clusters. This profile has a finite gravitational potential even though the integrated mass still diverges logarithmically. It has become conventional to refer to the mass of a halo at a fiducial point that encloses an overdensity 200 times greater than the critical density of the universe, though mathematically the profile extends beyond this notational point. It was later deduced that the density profile depends on the environment, with the NFW appropriate only for isolated halos. NFW halos generally provide a worse description of galaxy data than does the pseudo-isothermal profile, leading to the cuspy halo problem. where r is the spatial (i.e., not projected) radius. The term is a function of n such that is the density at the radius that defines a volume containing half of the total mass. While the addition of a third parameter provides a slightly improved description of the results from numerical simulations, it is not observationally distinguishable from the 2 parameter NFW halo, and does nothing to alleviate the cuspy halo problem. The collapse of overdensities in the cosmic density field is generally aspherical. So, there is no reason to expect the resulting halos to be spherical. Even the earliest simulations of structure formation in a CDM universe emphasized that the halos are substantially flattened. Subsequent work has shown that halo equidensity surfaces can be described by ellipsoids characterized by the lengths of their axes. Because of uncertainties in both the data and the model predictions, it is still unclear whether the halo shapes inferred from observations are consistent with the predictions of ΛCDM cosmology. Up until the end of the 1990s, numerical simulations of halo formation revealed little substructure. With increasing computing power and better algorithms, it became possible to use greater numbers of particles and obtain better resolution. Substantial amounts of substructure are now expected. When a small halo merges with a significantly larger halo it becomes a subhalo orbiting within the potential well of its host. As it orbits, it is subjected to strong tidal forces from the host, which cause it to lose mass. In addition the orbit itself evolves as the subhalo is subjected to dynamical friction which causes it to lose energy and angular momentum to the dark matter particles of its host. Whether a subhalo survives as a self-bound entity depends on its mass, density profile, and its orbit. As originally pointed out by Hoyle and first demonstrated using numerical simulations by Efstathiou & Jones, asymmetric collapse in an expanding universe produces objects with significant angular momentum. Numerical simulations have shown that the spin parameter distribution for halos formed by dissipation-less hierarchical clustering is well fit by a log-normal distribution, the median and width of which depend only weakly on halo mass, redshift, and cosmology: with and . At all halo masses, there is a marked tendency for halos with higher spin to be in denser regions and thus to be more strongly clustered. The visible disk of the Milky Way Galaxy is thought to be embedded in a much larger, roughly spherical halo of dark matter. The dark matter density drops off with distance from the galactic center. It is now believed that about 95% of the galaxy is composed of dark matter, a type of matter that does not seem to interact with the rest of the galaxy's matter and energy in any way except through gravity. The luminous matter makes up approximately 9×1010 solar masses. The dark matter halo is likely to include around 6×1011 to 3×1012 solar masses of dark matter. A 2014 Jeans analysis of stellar motions calculated the dark matter density (at the sun's distance from the galactic centre) = 0.0088 (+0.0024 −0.0018) solar masses/parsec^3. Milky Way rotation curve.
OPCFW_CODE
Connell D’Souza is back guest-blogging and tells us about object detection in MATLAB. A few weeks ago, I visited Florida Atlantic University’s Team Owltonomous, who compete in RoboNation student competitions like RoboBoat, RobotX and from 2019 onwards RoboSub as well! Our discussions spanned a range of topics including designing object detection algorithms in MATLAB. Object detectors are critical to allow an autonomous system to identify what is in its surroundings. The team thought the workflow would help reduce the time needed to develop object detectors given their 1-year development cycle. So, I thought I would share some of our discussions in this post! The code discussed in this example can be found in this File Exchange entry. What is an Object Detector? An object detector is a computer program that employs computer vision, image processing and/or artificial intelligence algorithms to detect features of interest in images or a video stream. As Sebastian argues in this post about sensors for autonomous systems, a camera is a cheap and important perception sensor employed by autonomous systems. You can use an object detection algorithm to make sense of what your camera “sees”. I like to classify object detectors into 3 broad categories based on the technology used: - Classical Computer Vision: Employs classical computer vision techniques such as image segmentation and feature detection and matching to identify objects of interest. Features could include colors, shapes, edges, etc. Check out our online tutorial series on Computer Vision in MATLAB to learn more. e.g. Color Thresholding, Blob Analysis, Histogram of Gradients, Speeded-Up Robust Features - Machine Learning: Machine learning is an effective way to classify data. These detectors use classical computer vision algorithms to extract features or data points from the image and then employ machine learning techniques like support vector machines to classify the features. e.g. Cascade Object Detector (Viola-Jones Algorithm), Aggregate Channel Features (ACF) - Deep Learning: Deep Learning detectors use data in the form of labelled images to teach a convolutional neural network (CNN) features of interest. You can train a network from scratch or perform transfer learning on pre-trained networks. Check out the Deep Learning Onramp to learn how you can get started! e.g. YOLO v2, R-CNN, Fast R-CNN and Faster R-CNNs We will discuss designing an ACF Object Detector which is a machine learning detector. However, as you will see, by replacing a few functions and with the right compute power you can follow the same process for deep learning-based detectors as well. Design that Detector The workflow for using ground truth for object detection is shown in the graphic below. I will use the next few sections to explain them briefly. Generating Ground Truth Ground Truth refers to information provided by empirical evidence or observation. In our case this is a set of labeled images. Labeled images contain, images, object class descriptors like bigRedBuoy, smallGreenBuoy, as shown above and locations of regions of interest (ROIs) in those images. The designer needs to supply this dataset to train the detector. There are many publicly available, open-source labeled data sets, but there could be a chance where no dataset is available for your specific application. This will warrant you to create your own ground truth. MATLAB provides you with a tool – Ground Truth Labeler to automate this process. This app gives you an easy way to label rectangular ROI’s, polyline ROIs, pixels, and scenes. You can also automate this process using built-in automation algorithms or providing your own algorithm. Once you have finished labeling the images or video you can export the ground truth as a ground truth data object. Watch the video below to learn how you can automate ground truth labelling! Training Object Detectors Now that you have a labeled dataset or ground truth, Computer Vision Toolbox provides built-in training functions that can be used to train machine learning or deep learning detectors. The graphic below shows the functions and the workflow that can help you train these detectors. The trainACFObjectDetector function is used to train an ACF object detector which as we discussed earlier is a machine learning detector. This function call can be replaced by other similar functions like - trainRCNNObjectDetector – R-CNN deep learning object detector - trainFastRCNNObjectDetector – Fast R-CNN deep learning object detector - trainFasterRCNNObjectDetector – Faster R-CNN deep learning object detector One caveat is you will need to provide some training options that are specific to these deep learning detectors which you can read about it in the documentation links above. Now that you have a trained detector you can use the detect method of the object detector object to identify the object of interest an image! Evaluating Object Detectors Once you have a trained detector and have visually confirmed that it is detecting what it is intended to, you may want to evaluate these detectors with some numerical metrics. This could be in the form of a confusion matrix or other common metrics like Miss Rate and Precision. MATLAB offers built-in functions to carry out these evaluations. When it comes to the miss rate and precision, an important parameter used is threshold. The threshold parameter determines the extent of overlap of the bounding box around an object of interest given by the detector over the bounding box of the same object in the ground truth. It is calculated as the Intersection over Union (IoU) or Jaccard index. As shown in the plots below for the same detection and ground truth data, changing the value of the threshold parameter drastically changes the value of the evaluation metric. Pick an overlap threshold value that best suits your application and keep in mind a higher threshold means you are expecting your detection results to overlap a larger area of the ground truth. When selecting a dataset to test your detector make sure you use a data set that is independent of the one used to train the detector. This will help ensure you are not overfitting your detector to a particular dataset. Watch this video below to see how this code works! Generate C/C++ Code To use this detector on your robot’s/vehicle’s computer, you will need to convert the MATLAB code to a low-level language like C/C++, that can be executed on an embedded system. In R2019a, we added code generation support for some object detectors discussed above, including the ACF object detector that is used in this example. To generate C/C++ code, MATLAB code must be packaged in a function. The ACFObjectDetector object, cannot be passed through the function interface as an argument in the generated code as it is a MATLAB object, you will have to construct the object inside the function by calling the constructor method of the acfObjectDetector class with the Classifier and TrainingOptions properties as arguments. This can be done by converting the object into a structure with the properties as fields and save it as a MAT-file as shown below. s = toStruct(detector); save('detectorCodegen.mat','-struct','s','Classifier','ModelName',... 'NumWeakLearners','ObjectTrainingSize','TrainingOptions') Next load the MAT-file inside the function using the coder.load function as shown below and call the constructor. You will want to declare it as persistent, so it is stored in memory and does not need to be constructed at every call to the function. Once you have modified your code you can follow the MATLAB Coder app workflow to obtain C/C++ code. Not familiar with the MATLAB Coder app? Check out this tutorial series on code generation. function [boxes,scores ] = ACFDetector(img) persistent s detector s = coder.load('detectorCodegen.mat'); detector = acfObjectDetector(s.Classifier,s.TrainingOptions); [boxes, scores] = detect(detector,img);) To conclude, I would encourage you to download the code and try it out. See how a few lines of MATLAB code can help you develop robust object detectors as well as convert it into C/C++ . 댓글을 남기려면 링크 를 클릭하여 MathWorks 계정에 로그인하거나 계정을 새로 만드십시오.
OPCFW_CODE
Here’s a rover project that has plenty of power (translated) to go places. This is true not only of its locomotive capability, but processing power as well. The RC car used here (translated) is not overly expensive, but offers a lot of versatility. It’s got front and rear steering via two servo motors, as well as independent drive motors for each end. The frame also offers an advanced suspension system that lets the vehicle flex to keep as many wheels on the ground as possible. It’s a great find if you don’t want to start off your project bogged down in the hardware design. On the control side of things a Beagle Board has been choosen. The demo after the break shows it controlling an added turret servo, as well as the drive mechanism controlled via a keyboard. These are driven through the embedded Ubuntu image running on the board. This should provide plenty of processing power to add obstacle avoidance and autonomy routines in future versions. Continue reading “RC car and Beagle Board mate for a versatile robot build” This daughterboard lets [Matt Evans] drive a laptop LCD using a Beagleboard. Apparently the Beagleboard gained a VGA header when it moved to revision C but [Matt’s] working with revision B4 which is why he had to do all of that ninja soldering with the blue wires. The driver board itself is a thing of beauty, hosting a DS90C363 LVDS serialiser as well as some buffer chips that handle level conversion for it. He’s also included an ATmega48 so that he has some options for future improvements. The LCD is mounted in a custom acrylic case, with Beagleboard and driver board taped to the back of it. There’s RS232 and a USB hub which opens up the possibility of using a WiFi dongle for communications. So far he doesn’t have much functionality other than displaying images on the screen but there is some talk about using a touchpad for control. We’d love to see a touchscreen overlay, transforming the build into a proper ARM-based tablet. What do you do after you make a BeagleBoard graphing calculator? [Matt] over at Liquidware Antipasto made a BeagleBoard Elastic R Cluster that fits in a briefcase. Ten BeagleBoards, are connected to each other though USB to ethernet adapters and a pair of ethernet switches connected to a wireless router. The cost for this cluster comes in around $2000 and while consuming less than 40 watts of power, out-paces a $4500 laptop. How might you use this cluster? What improvements would you make? Continue reading “BeagleBoard Cluster” It looks like we missed the boat on this one but just in case you missed it everywhere else on the Internet, last Saturday [Matt Stack] introduced the world to a completely open source calculator. This marries two heartily tested open source projects; the R Project for Statistical Computing and the Beagleboard. The hardware side of things is very similar to that Linux tablet from back in June. It uses a stock Beagleboard with the BeagleTouch module. Why do we care? First off, don’t forget what’s under the hood. That ARM processor kicks the 6 MHz Z80 processor found in TI’s calculators to the curb. The R language is a boon as well, offering plots of almost limitless quality and allowing extensibility that can’t be equaled with the current non-open offerings. But mostly because it’s a hack. We like seeing software run on hardware it wasn’t intended for. [Jason Statham] [Martin Magnusson] wrote in to tell us about his adventure in building a wearable computer. The device in its current state is a Beagleboard running Angstrom Linux tethered to an iPhone for internet. A bluetooth keyboard allows for input, while output is displayed on monocle-ized Myvu. And last but not least, the entire setup is powered by 4 AA batteries for 3 hours of life. Its not as small as some of the wearable computers we’ve seen before, but if you wanted to whip out your own it sure takes a lot less soldering. Imagine a tiny little device that you velcro to the back of your TV that delivers all of the media found on your home network. We’ve been dreaming about that since we saw early working examples of XBMC running on a Beagleboard. We’ve heard little about it since then but now there’s cause for hope. XBMC optimization for the Beagleboard has been approved as a Google Summer of Code project. The fruits of these projects tend to take a year or so to ripen, but we don’t mind the wait. [Topfs2] is the student coder on the project and will be posting weekly updates as well as idling in IRC so if you’re interested in lending a hand with testing or words of support you should drop him a line. [Beagleboard photo: Koenkooi]
OPCFW_CODE
Top Cyber Asset Attack Surface Management (CAASM) Tools Table of contents Then, what does value mean in an organization? Almost everything used to operate a business has value to an organization. - Data - you store in cloud services like AWS and Azure. - Communication - by tools like Slack and Zoom. - Automation - by code repositories like GitHub and Bitbucket. and more tools can be defined as where the value is created in an organization. In terms of cybersecurity, these values must be monitored to detect if any security vulnerabilities are reaching them. Thereby, vulnerabilities can be solved immediately, or preventive actions can be taken before any value gets defective or lost, such as accidental deletion of important data or unauthorized access to internal resources. This article will help you compare the top CAASM tools and their key features. How does CAASM help you keep the values of your organization safe? Modern companies have cloud services and tens, even hundreds of tools in their Tech Stack that generate assets continuously. So, the number of assets can reach tens of thousands. Security teams need to check these tools and included assets to answer any concerns or detect vulnerabilities. So, how can security teams keep on top of such a wide range of assets with robust security processes? It’s almost impossible to manage them without a centralized and automated manner. There, CAASM comes into play by integrating your tools, enabling you to discover, monitor, classify, and secure your assets from one hand. CAASM is a technology that helps organizations bring together and standardize their all assets, which are typically spread out across a wide IT environment. It enables users to monitor, query, and centralize internal and external data, regardless of where it is stored, and detect vulnerabilities. Now, we’ll look into which companies offer what solutions and their key features. Top CAASM tools to consider Axonius is a cybersecurity company that focuses on cyber asset inventory and management. Its platform collects data from an organization's security stack to provide insights into SaaS applications, revealing vulnerabilities and enabling informed decisions to improve asset security. Axonius offers automated response processes to help IT teams with remediation for non-compliant apps. - Integrations with security and IT management tools. Collects data from devices with an IP address, including workstations, servers, containers, and IoT devices. - The Query Wizard allows for searching asset inventory, viewing asset usage, and identifying vulnerabilities and non-compliant assets. - Granular remediation policies for defining the specific actions that should automatically trigger when an asset strays from security or compliance policies. - Identifying federal agency security coverage gaps Resmo offers cloud-native teams a comprehensive solution for continuous cyber asset visibility and security. By enabling SQL-based data queries across a range of Cloud and SaaS providers, including AWS, GCP, Atlassian Stack, Okta, Google Workspace, and more, Resmo empowers customers to stay on top of changes. Through consolidation of users, vulnerabilities, repositories, and other key constructs, Resmo provides valuable insights through user-friendly dashboards and automates compliance checks for common frameworks like CIS benchmarks, alerting customers to potential security vulnerabilities. - SaaS Discovery to identify automatically which tools employees use and SaaS vulnerabilities such as weak passwords, overly permissive access rights, and Shadow IT through native integrations and browser extensions. - Automated security and compliance checks with rules - Free-text and SQL combined easy querying for flexible and in-depth asset analysis - One-click integration with 70+ tools and multiple cloud services - Rules and asset history monitoring to analyze every single change retrospectively You can sign up for free to discover your cyber attack surface. JupiterOne is a cloud-based CAASM tool that provides security teams with a platform for managing the security of their digital assets. JupiterOne enables security teams to discover, monitor, and manage their entire asset inventory, including cloud infrastructure, applications, and users. It identifies and addresses security risks by providing period-based visibility into asset inventory, identifying vulnerabilities and misconfigurations. - Built-in compliance frameworks (including NIST, HIPAA, PCI, and SOC2) - Automated compliance assessments - Visualizes asset inventory data, allowing IT teams to view asset details. - Automated remediation workflows Brinqa provides an enterprise-level vulnerability and threat management solution that enhances risk visibility and threat intelligence with a comprehensive view of their cyber assets. It helps identify, prioritize, and remediate vulnerabilities across an organization's IT infrastructure. Brinqa also offers capabilities for compliance management and threat intelligence integration. - Risk-based prioritization of vulnerabilities - Automated ticket creation and closing to manage vulnerabilities - Automated remediation workflows - Asset discovery and mapping, including hardware, software, network infrastructure, cloud services, Panaseer specializes in cybersecurity and strongly emphasizes monitoring Continuous Controls (CCM). They prioritize fixing significant issues in the context of the business and offer vulnerability management for internal policies and regulations. They help reduce business losses and improve cybersecurity by continuously monitoring the security posture. Additionally, they provide regular audits of the controls to ensure continuous improvement. - Continuous monitoring for compliance and audits - Automated reporting - Scalability for large volumes of data from multiple sources - Integration options with EDR tools and SIEM systems 6. Noetic Cyber Noetic Cyber is a cybersecurity company that helps businesses enhance their security position by managing their assets and controls. Their platform concentrates on facilitating businesses to comprehend the connections between their cyber assets. By doing so, they can anticipate and mitigate the propagation of possible cyber-attacks by evaluating vulnerabilities in the broader framework of their digital setting. - Asset visualization in a graph database for detailed analysis - Asset relationship mapping for a better understanding - Customizable dashboards to enrich reports - Drag & drop visual editor to create comprehensive workflows. Lansweeper is a software for IT Asset Management that has a specialization in Network Discovery. Through a network scan, the software generates an inventory of all devices, including hardware, software, warranty information, and login information without the need for an agent. - Network discovery, IP scanning, and credential-free recognition (CDR) - API-based integrations with CBDM, ITSM, SIEM, and SOAR tools - Pre-built and customizable dashboards for visualizing asset data - Asset Radar to detect newly connected devices in the network runZero is a platform for discovering networks and creating an inventory of assets. It assists in detecting managed and unmanaged devices, on-premises and cloud-based assets, IT and OT infrastructure, and endpoints both at work and at home. With the help of integrations for mobile device management (MDM), endpoint detection and response (EDR), cloud service providers, SIEM, and CMDB, the platform enables the augmentation of your inventory. - Scanning assets including RFC1918 subnets and unknown subnets - Asset ownership tracking - Reporting and analysis of assets - Network topology through IP address management VArmour is a company that specializes in cybersecurity and emphasizes application relationship management. Its CAASM platform is designed to recognize internal assets on the network and map their relationships. By identifying and visualizing the relationships between applications, VArmour enables businesses to enhance their security position and minimize their vulnerability to cyber threats - Application asset discovery and categorization - In-built isolation, segmentation, and application controls - Machine learning analysis of application behavior and user access - Application relationship mapping 10. Sevco Security Sevco Security is a platform that provides cloud-native security asset intelligence services to enterprises. The platform focuses on assisting organizations with IT asset management, including IP addresses, configurations, device geolocation history, and maintaining compliance. Sevco Security's platform enhances an organization's security posture and IT processes by increasing visibility and providing reliable data. - Asset telemetry for observed changes by any tools - Device geolocation history - Asset IP address and configuration history - CMDB maintenance, automated deduplication of inventory reports from your contrasting sources CAASM tools are essential in providing organizations with insights into their security posture by discovering, monitoring, classifying, and securing their assets in a centralized and automated manner. Although it may seem impossible to manage tens of thousands of assets without a robust and integrated system, CAASM tools come into play. While all the CAASM tools discussed are essential, Resmo offers a comprehensive solution for managing cyber asset attack surfaces for cloud and SaaS environments. Resmo provides SaaS Discovery, automated security and compliance checks, rules and asset history monitoring, and one-click integration with 70+ tools and multiple cloud services, among other features. By signing up for free, businesses can effectively discover their cyber attack surface and protect their assets.
OPCFW_CODE
The isoelectric point, or pI,represents a point of balance for a molecule, where the external surface charge is a net zero. This factor governs electrophoretic mobility in proteins and also plays a role in identifying peptides from mass spectral proteomics data. pI depends on a number of factors, including amino acid sequence, post-translational modifications (PTMs) and presence of side chain—all of which can alter surface charge and behavior depending on the pH of the environment. Various methods for predicting pI in denatured proteins exist, and most base this calculation on amino acid sequence with reference to pKa values recorded for ionizable constituents. Although these predictive methods exist, their performance can be variable and may skew ensuing results. Audain et al. (2015) compared and contrasted five tools available to researchers for determining pI on the basis of amino acid sequence.1 The researchers benchmarked algorithm performance, comparing results obtained against public data sets to show how well these predictive tools performed. The researchers chose the following tools to undergo benchmarking: - Iterative: calculated from amino acid sequence - Cofactor: calculated with correction factors according to amino acid position and adjacent charged residues - Bjellqvist: calculated according to pKa and amino acid position - Support Vector Machine (SVM): calculation based on amino acid sequence and Amino Acid Index database (AAindex) data - Branca: calculation according to correction factors for position, influence of neighboring groups, and statistical corrections for presence and nature of side chain groups Audain et al. note that in order to avoid bias in reporting, they did not optimize the methods used for evaluation for any of the tools under investigation. First, the team constructed an R-package, a collection of programs, functions and data written in statistical programming language R, as a framework for reproducible analysis within which to examine performance of the various algorithms. In this way, the benchmarking process would allow for direct comparisons through reference to correlation and root-mean-square deviation (RMSD) evaluation. The researchers then calculated pI values using each of the tools under investigation before comparing the theoretical results obtained against those publicly available. Audain et al. used two databases for reference; the first, the PIP-DB (protein isoelectric point database) contains a comprehensive record of protein pI data. The second is made up of values obtained for the tryptic proteome generated from the cellular fraction of Drosophila Kc167 cells. For the theoretical values generated for proteins, the team first grouped the results into those with variable pIs and those with only one unique pI. From this analysis, they found that most proteins do not possess a unique pI. From the comparison between observed and theoretical, the researchers found a mostly poor performance from all five tools, with R2 values ranging between 0.61 and 0.15. The best performance, with the lowest RMSD of 1.28, came from the SVM calculations. When considering the data from peptides, the researchers found much better performance, with high correlation between predicted and observed pI values (R2 = 0.96). They found the lowest RMSD with SVM predictions (0.21). Looking at peptides modified by PTMs, the team saw that the best predictions came when the algorithm included the effect of the PTM alongside its overall theoretical calculation. Although Audain et al. found poor benchmarking performance for the five methods investigated, they make some suggestions arising from the process: - Some algorithms are suitable for in silico prediction - Machine-learning algorithms function best, although the ability depends on training and quality of training data The authors also make further suggestions based on the results for the ideal conditions under which the algorithms function best, and have also made software and data freely available for scrutiny. 1. Audain, E., et al. (2015) “Accurate estimation of isoelectric point of protein and peptide based on amino acid sequences,” Bioinformatics, doi: 10.1093/bioinformatics/btv674. Post Author: Amanda Maxwell. Mixed media artist; blogger and social media communicator; clinical scientist and writer. A digital space explorer, engaging readers by translating complex theories and subjects creatively into everyday language.
OPCFW_CODE
import Foundation @testable import Teller import XCTest class DataState_StateTest: XCTestCase { let defaultRequirements: RepositoryRequirements = MockRepositoryDataSource.MockRequirements(randomString: nil) // MARK: - No cache, equatable /** All of the given states: 1. No cache 2. No cache, fetching 3. No cache, error during fetch 4. Cache 5. Cache empty 6. Cache, just completed first fetch 7. Cache, fetching 8. Cache, successful fetch (not first) 9. Cache not successful, error 10. None (cannot test, it's a fatal) */ // 1 func test_state_givenNoCache_expectEqual() { let actual = CacheState<String>.testing.noCache(requirements: defaultRequirements).state switch actual { case .noCache: break case .cache: XCTFail("should be no cache") } } // 2 func test_state_givenNoCacheFetching_expectEqual() { let actual = CacheState<String>.testing.noCache(requirements: defaultRequirements) { $0.fetchingFirstTime() }.state switch actual { case .noCache: break case .cache: XCTFail("should be no cache") } } // 3 func test_state_givenNoCacheErrorDuringFetch_expectEqual() { let fetchError = FetchError() let actual = CacheState<String>.testing.noCache(requirements: defaultRequirements) { $0.failedFirstFetch(error: fetchError) }.state switch actual { case .noCache: break case .cache: XCTFail("should be no cache") } } // 4 func test_state_givenCache_expectEqual() { let fetched = Date() let givenCache = "cache" let actual = CacheState<String>.testing.cache(requirements: defaultRequirements, lastTimeFetched: fetched) { $0.cache(givenCache) }.state switch actual { case .noCache: XCTFail("should be no cache") case .cache(let cache, let cacheAge): XCTAssertEqual(cache, givenCache) XCTAssertEqual(cacheAge, fetched) } } // 5 func test_state_givenCacheEmpty_expectEqual() { let fetched = Date() let actual = CacheState<String>.testing.cache(requirements: defaultRequirements, lastTimeFetched: fetched).state switch actual { case .noCache: XCTFail("should be no cache") case .cache(let cache, let cacheAge): XCTAssertNil(cache) XCTAssertEqual(cacheAge, fetched) } } // 6 func test_state_givenCacheJustCompletedFirstFetch_expectEqual() { let fetched = Date() let actual = CacheState<String>.testing.noCache(requirements: defaultRequirements) { $0.successfulFirstFetch(timeFetched: fetched) }.state switch actual { case .noCache: XCTFail("should be no cache") case .cache(let cache, let cacheAge): XCTAssertNil(cache) XCTAssertEqual(cacheAge, fetched) } } // 7 func test_state_givenCacheFetching_expectEqual() { let fetched = Date() let givenCache = "cache" let actual = CacheState<String>.testing.cache(requirements: defaultRequirements, lastTimeFetched: fetched) { $0.cache(givenCache) $0.fetching() }.state switch actual { case .noCache: XCTFail("should be no cache") case .cache(let cache, let cacheAge): XCTAssertEqual(cache, givenCache) XCTAssertEqual(cacheAge, fetched) } } // 8 func test_state_givenCacheSuccessfulFetch_expectEqual() { let fetched = Date() let givenCache = "cache" let actual = CacheState<String>.testing.cache(requirements: defaultRequirements, lastTimeFetched: Date()) { $0.cache(givenCache) $0.successfulFetch(timeFetched: fetched) }.state switch actual { case .noCache: XCTFail("should be no cache") case .cache(let cache, let cacheAge): XCTAssertEqual(cache, givenCache) XCTAssertEqual(cacheAge, fetched) } } // 9 func test_state_givenCacheFailedFetch_expectEqual() { let fetched = Date() let givenCache = "cache" let fetchError = FetchError() let actual = CacheState<String>.testing.cache(requirements: defaultRequirements, lastTimeFetched: fetched) { $0.cache(givenCache) $0.failedFetch(error: fetchError) }.state switch actual { case .noCache: XCTFail("should be no cache") case .cache(let cache, let cacheAge): XCTAssertEqual(cache, givenCache) XCTAssertEqual(cacheAge, fetched) } } class FetchError: Error {} }
STACK_EDU
Linux file ACL and secondary groups The Problem: CentOS does not seem to look at secondary groups when using ACL's on a folder or file Scenario: CentOS 6 basic install, uses LDAP accounts to authenticate users. I am trying to setup fairly complex permissions on some folders. I have ensured that the file system is mounted with ACL support and determined that LDAP users are able to log in correctly. Steps to Reproduce: As a test I have a simple folder structure. The folder test1 is owned by root and has 770 permissions, I have added another group to that folder setfacl -m g:testgroup:rwx test1/ The getfacl output for the folder looks like this: getfacl: Removing leading '/' from absolute path names # file: share/test1/ # owner: root # group: root user::rwx group::rwx group:testgroup:rwx mask::rwx other::--- The user andrew belongs to the domain group and testgroup as shown by groups andrew. The group domain is the users primary group. If the user andrew tries to read anything located in test1 a permission denied error was shown. If however the users primary group is changed to testgroup the user can then interact with the contents of the folder. Can anybody tell me what is going on here and if there is a way to get the expected behaviour? EDIT This appears to be a problem related to LDAP. I just tested using local user accounts and everything works as expected. Have you done a "getent group" and verified no group is actually there TWICE and DIFFERENT? Also, check "id" for the users involved that group lists have not been truncated due to implementation limits (this used to be a metric b.... with NIS and might also be a problem with your LDAP implementation). Sorry for being able to offer only generic advice there... @rackandboneman 'getent group' does not have any groups repeated. And 'id' of a user shows they belong to the relevant groups, nothing appears to be truncated. As you say this problem is related to the use of LDAP for user information. Your Centos6 machine is configured in a way that is incompatible with the LDAP server, so that when it tries to get the list of supplementary groups the user belongs to, it doesn't find anything. Unfortunately there are several standards for how to interpret LDAP attributes relating to POSIX groups - rfc2307, rfc2307bis, IPA Centos 6 uses SSSD for managing interwork with remote directories and authentication databases. The default settings for SSSD are to use rfc2307. You will probably find your LDAP server is using rfc2307bis. We have a Centos 5 directory server, it was configured by default for rfc2307bis. As a further complication, our C5 directory was using the attribute 'uniqueMember' instead or 'member' for group members. To fix it, edit /etc/sssd/sssd.conf and add the following lines: ldap_schema = rfc2307bis ldap_group_member = uniqueMember You might also like to refer to the following: man pages for sssd.conf and sssd-ldap http://www.couyon.net/1/post/2012/04/enabling-ldap-usergroup-support-and-authentication-in-centos-6.html https://bugzilla.redhat.com/show_bug.cgi?id=580402 Have you tried looking at the numeric ids of both the groups the user belongs to, and the ACL of the directory? Try getfacl -n test1 and compare the groups listed with the output of id -G andrew. See if there is any discrepancy there and try resolve it. The groups do match up, as do the numeric ID's so I don't think that is the root issue. Could it be that the user belongs to too many groups? Is this reproducible with a user that is a member of only a couple of groups?
STACK_EXCHANGE
Identities on the chain The different identities on the BancChain™️ The Validators of the BancChain™️ will be able to make a NFT validator whereas they choose a % of the block rewards and transaction fee rewards that will be distributed to the NFT holders. Bancc™️ & Foundation The users of the BancChain will be able to stake their coins towards one or several validators. They will receive 55% of the block rewards and 10% of the Fee Rewards. The purpose of staking is to offer an incentive for people that wants to conduct free transactions on the BancChain™️ but also to make an validator eligible for having the needed collateral to run a validator. The validators of BancChain™️ is the one that are validating and propsing new blocks which contains transaction information. They will receive 6.99% of the block reward and 80% of the Fee Rewards. There will be a maximum of 300 validators on the chain and these need to have the following conditions meet to be eligible for becoming one; - 1.A total collateral of 3,000,000 million banc is needed by either; - 1.Owning the collateral yourself - 2.Users staking towards you - 2.Go through a KYC/KYB documentation check - 1.In the future this process will be automated and will only give us (Bancc) an "Yes, this person is OK" or "No, this person is not OK". - 3.Signing an legal document agreeing to the terms and conditions of running a validator on BancChain™️ - 4.Hosting a server with the requirements stated in the documentation at - 6. Receives a maximum of the 50% block rewards for (NFT Validators only) The Bancc™️ project and BanccFoundation™️ will receive 20% and 9% of the Block Rewards and 5% and 0% of the Fee Rewards respectively. The Bancc™️ project will receive and use these rewards for maintaining and upgrading the ecosystem. The foundation will distribute these rewards to different charities across the world to not only help low-income countries have a technical and financial starting stone but to give them opportunities for new jobs and boost their local economy. The merchants of the BancChain™️ will help keeping the connectivity of the chain intact and secure and by integrating the infrastructure with Point-of-Sale terminals for facilitating transactions for fiat currencies and crypto currencies both online and offline. The merchants will receive 9% of the Block Rewards and 0% of the Fee Rewards. There will be some implementations used towards the merchants to limit dishonest use of the products, such as - KYC/KYB Implemented - Rewards based on transactions facilitated - & more...Will be fully disclosed within the release of merchant specific products.
OPCFW_CODE
I am planning to run a mixed environment consisting of an Oracle Database, with some GUI screens running on Progress through Oracle Data Server and a batch job coded in Java running directly towards the Oracle DB. My question is how to handle the progress_recid when when coding in Java. The progress_recid is handled automatic when running from Progress, but what if I insert a row in Java? How can I retrieve the "next" progress_recid to use? What happens if I delete a row and the progress_recid sequence is broken? I have not found any documentation on this and hope somebody could help me. Have a look at ROWID in the Oracle Data Server documentation. The same goes with a Progress database. RECID is supported mainly for backward compatibility although, I think, it's still used in the schema files. What's the Progress version ? BTW what's the reason for the setup ? why not use a Progress database and JDBC if it's needed. Thanks for your reply. I still does not think that this covers my questions. I need to know: 1. How can I obtain a new progress_recid when inserting a new row via Java/JDBC 2. What happens if I delete a row and the recid sequence is broken? I have read the documentation and can not see that it covers these questions. The reason for running an Oracle database is that we are migrating away from Progress, but are forced to run the Progress GUI in the migration phase. So, what lapse in judgment is responsible for that decision? The progress recid is not a sequential number. It is a binary combination of the block number and the record's offset within the block. If there are unused slots in the block, there will be unused recids. Also, if you delete records from your database, the free space will be reused, so your newly created record will not always have the "highest" recid. With the Oracle Dataserver, there is an extra field in each table that holds the recid. If you create a record without going through the Dataserver, this field won't get a value. The question I have to ask is: do you need it? If you don't use the recid function anywhere in your Progress ABL code, this field will never be used. One of our customers decided to create the tables directly in Oracle and didn't even create the recid fields and they work very happily through the Dataserver. Possible solutions to your problem: - you could write your Java code to access your existing database update logic through the Java Open Client functionality - if you use the recid function in your ABL code, you could change them for ROWID functions (NOTE: this returns a different data type) and forget about the recid columns Let us know if you have further questions. The progress_recid values should be grabbed from the sequence name _seq. The progress_recids do NOT need to be sequential - they are just there to ensure progress uniqueness, so do nothing when deleting a record. As one of the other replies points out you could also adjust your Progress code to no longer required progress recids - note that this can be a tricky task. For starters you should have at least one unique index on all your tables (any decent table should anyway) and then you will need to check for recid use, also be careful when using rowids (as replacement for recid) in can-find functions, these have sometimes broken down on us. Background: we removed progress_recid from our SQL Server database, on Oracle we've still got it sitting there. I will not be using the recid directly from any code, but I thought the GUI applications "generated" by Progress relied on these, to be able to navigate forward and back in e.g. table grids? All other code (except GUI) accessing the data will be Java directly towards the Oracle database, and will not use the recids... Message was edited by: Stein Rune Risa Anyone who know this? Does Progress rely on values in progress_recid to be able to scroll forward and backward?
OPCFW_CODE
CSS :nth-child custom equation When using CSS pseudo class, I could select every number of x elements and I don't have to target it manually....but when using 3n+1 equation the number will always start from 1, what if I want to start from 2 onwards? for example: .abc(3n+1) {background: red;} would select the 1,4,7,10 and so on but what if I want to skip the first and select the second onwards, like so, 2,4,7,10.... Is there a equation for this? 2,4,7,10 isn't any pattern i don't think there is a solution!!! would help me even if there is one :) If you meant '4,7,10 and so on', just write .abc(3n+4), If you meant '2,5,8,11' write .abc(3n+2) You could use not() selector to prevent selecting the :first-child .abc:nth-child(3n+1):not(:first-child), .abc:nth-child(2) { background: red; } Or you could reset the applied styles to the first div by overriding: .abc:first-child { /* default styles... */ } JSBin Demo. Just for clarification CSS3 nth-child() or nth-of-type() selectors don't work for combination of element.class, they look for the element itself. Considering that, you should make sure that all the .abc elements are siblings. Or wrap all the elements by a wrapper called .abc then select the element children as follows: .abc element:nth-child(3n+1):not(:first-child), .abc element:nth-child(2) { background: red; } pseudo-classes do not work on classes only elements. I did not downvote. @Paulie_D I know, I just added this snippet as a demo. Dear Downvoter any feedback is appreciated, if you could tell why? @Paulie_D In my Example, there's no hierarchy in the DOM. and this works. Check the online demo. I did...now change one of the classes from .abc to something else and the demo fails. nth-of-class is not possible in CSS although via JS/HQ it can be done. @Paulie_D the expected Markup structure should be explained by the OP himself. It's just a demo as I said couple of times before. The option(s) you want don't fit any specific pattern so you would have to use two selectors. Firstly, however, you cannot apply nth pseudo-classes to actual classes..only elements. If you have a menu with 10 li...say <ul> <li><a href="">1</a></li> <li><a href="">2</a></li> <li><a href="">3</a></li> <li><a href="">4</a></li> <li><a href="">5</a></li> <li><a href="">6</a></li> <li><a href="">7</a></li> <li><a href="">8</a></li> <li><a href="">9</a></li> <li><a href="">10</a></li> </ul> then you would need two selectors li:nth-child(3n+4), li:nth-child(2) { background-color: red; } :nth-child(3n+4) selects every third item starting with the 4th. :nth-child(2) slects just the second item Codepen Demo I dont see an equation to achieve this but this hack should work. .abc:nth-child(3n+1), .abc:nth-child(2) {background: red;} .abc:first-child {background: green} /* say the original background for the first element was green*/ Where did you come up with :second and :first? There aren't any such selectors in CSS. Simplest way what i think better is .abc li:nth-child(3n+4),.abc li:nth-child(2){//supposing abc to be class of ul and has li // what ever goes in here.. } working fiddle pseudo-classes do not work on classes only elements. @Paulie_D: They do work on classes, just not often as the author would expect.
STACK_EXCHANGE
Gnucleon version 0.1 The code for this game (the file "Gnucleon.java") is released into the Public Domain because I can't be botherered to run around answering questions about licensing for a little experiment I spent a couple of How to get it to work Gnucleon is made in Java, which I know I know, is not yet Free, but it is the language taught throughout Sheffield University so it is what I know. If the already compiled version doesn't work for you (running the file "Gnucleon" should do it) then you may have to compile it yourself. The ways of doing this vary, but the (not Free yet) Sun way of doing it is to have the Java Software Development Kit (SDK) and run "javac Gnucleon.java". Then you can try running the "Gnucleon" file again. If that doesn't work... Well, I am a complete n00b and that is why it is version 0.1. Feel Free to improve it if you want, it is in the Public Domain so therefore it is yours :) How to play The current setup has 2 human players and a board of 12x10 squares. These can be changed by editing the Gnucleon.java file. Playing is relatively simple: The squares on the board represent atoms. The atoms start off with no nucleons (the bits in the middle, surrounded by the spinny bits), hence the number zero on them. Players take turns adding nucleons to the atoms, which is done by clicking on a square (atom). Any atoms a player adds to become owned by them. Players can only add to their own atoms, or empty ones. Once atoms reach a critical mass they explode. The critical mass of an atom depends on the number of neighbours it has. Atoms in the corners only have 2 neighbours, so their critical mass is 2. Atoms around the edges have 3 neighbours, so their critical mass is 3. The rest of the atoms have 4 neighbours so their critical mass is 4. When an atom explodes it sends one nucleon to each of its neighbours. These neighbours become owned by player who caused the explosion. If the neighbours become critical due to receiving this new nucleon then they explode, causing chain reactions. If a player loses all of their atoms then they lose the game. (This is not yet detected, so you'll have to look for it yourself) If extremely huge chain reactions happen then some errors appear on the command line and the reaction quits early without switching to the next player. There is no visual feedback when large chain reactions are being A decent GUI is being worked on. Honest. I hope that this at least doesn't detract from your happiness at all, even if it doesn't add to it :) Thanks for reading, hope you have fun Clone this repository using: git clone http://chriswarbo.net/git/java-gnucleon.git - master: Initial commit Chris Warburton <email@example.com> Wed Oct 14 06:58:05 PM UTC 2015 Generated by git2html.
OPCFW_CODE
While doing more research on the “Impostor Syndrome,” I came across a super terrific blog post about this syndrome among college students. It was written by Richard Felder, a chemical engineering professor at North Carolina State University, who observed this situation in so many of his students. The syndrome is rampant in the engineering and technical educational world. I remember it well. In my days as a mechanical engineering student at University of Notre Dame, I was drowning in the fear of being uncovered as a fraud. I didn’t know it at the time that it was this Imposter Syndrome, but I was terrified that I didn’t measure up. This blog post that I discovered was especially intriguing because the author didn’t just explain the impostor syndrome. He actually gave advice to other professors for how to help their students deal with it. As I read through it, I thought “Wow…I wish my professors had done this for me.” But more than that, the advice isn’t limited to just professors. It’s advice that any leader, manager, boss, coach or person in authority can use to mitigate the paralyzing imposter syndrome that is a real problem for the people in their sphere of influence. His advice is summarized in 4 simple steps, which I am rephrasing to make more universal, beyond the collegiate environment. 1. Acknowledge that the Impostor Syndrome exists. Read about it. Learn about it. Talk about it. Sharing about it and verbalizing it will bring the fear from the dark into the light, and in the light, it is emptied of its paralyzing power. 2. Affirm their abilities got them to where they are now, and their abilities will not magically, suddenly or unexpectedly vanish in the next millisecond. 3. Confirm that one mistake will not wipe out an entire history of success. The security of the free world in not in the balance, so it’s ok to not be perfect. Don’t fear making a mistake. Relax. Breathe. Think. 4. Encourage them to forgive themselves. If they do make a mistake, tell them it’s ok. Rebounding from a mistake doesn’t mean lost of dignity, reputation or confidence. Learn from the mistake. Make changes if necessary. Teach others so they avoid the mistake. Turn the mistake into something that generates power, not something that robs you of it. These are my paraphrases, but the overall thoughts mirror this professor’s. As a leader or manager, it helps us to know that many people in our sphere of influence feel like impostors. Every time I mention this syndrome, I get a chorus of “Yeah! I feel that too!” Try it yourself. Mention this to someone, and see what reaction you get. Then, try to use these four points to help them overcome their fear. Better yet, use them on yourself.
OPCFW_CODE
Latest Version? I have version <IP_ADDRESS>3 according to Help-About But at https://github.com/fernandreu/office-ribbonx-editor "To download the latest release, go to the following link: https://github.com/fernandreu/office-ribbonx-editor/releases/latest it takes me to <IP_ADDRESS>8 (August 13) I'm GitHub challenged, so where is the latest release located? Thanks Hi phossler, There are two types of ways of getting the tool (both mentioned in the main project page): From GitHub's releases page as you mentioned From Azure Pipelines The latter is a development build, in the sense that one new version gets automatically generated as soon as I make some changes to the master branch of the project. You usually get more bugs fixed in those, which is why I normally suggest people to download one of those after I have completed an issue they created (in your case, see this). The downside of those development builds is that, if I have just made recent changes to the tool, there might also be new bugs introduced that I have not fixed yet. I create releases at a much slower pace than development builds mainly because of this (and also to avoid annoying users with too many Version x.y.z available! messages :sweat_smile:). The first three digits (1.5.1) are manually changed by myself before pushing the changes to the repository. The fourth version digit (463 in your case) will automatically refer to the build ID. You can see the most recent builds here. The build ID appears in the URL of each build; 482 is currently the latest. I am just preparing for another release in the next few weeks, so the development build should be quite stable. You can only see builds from the last 30 days (or associated to a Release Pipeline in Azure, which is not the same as a GitHub Release), so build 463 is no longer in that list. Thanks for taking the time to explain 1.5.1.xxx is the latest ‘official’ release, and I have that 😊 Download / Build status To download the latest release, go to the following link: https://github.com/fernandreu/office-ribbonx-editor/releases/latest https://github.com/fernandreu/office-ribbonx-editor/releases/latest To download the latest development build instead, go to the Artifacts section on Azure Pipelines: https://dev.azure.com/fernandreu-public/OfficeRibbonXEditor/_build/latest?definitionId=1&branchName=master https://dev.azure.com/fernandreu-public/OfficeRibbonXEditor/_build/latest?definitionId=1&branchName=master The first link takes me to 418 and [Assets] has the EXE and MSI 😊 😊 I did try Azure but I could not see any ‘Artifacts’ section. Is it labeled a something else? (I did say that I’m only learning GitHub) If I click on [Releases] I can see Release 11 (20190928) but I didn’t any way to download an EXE or MSI Sorry to be such a bother Paul H. Hossler Valley Forge, PA <EMAIL_ADDRESS><EMAIL_ADDRESS>From: Fernando Andreu<EMAIL_ADDRESS>Sent: Thursday, November 21, 2019 3:02 AM To: fernandreu/office-ribbonx-editor<EMAIL_ADDRESS>Cc: phossler<EMAIL_ADDRESS>Author<EMAIL_ADDRESS>Subject: Re: [fernandreu/office-ribbonx-editor] Latest Version? (#78) Hi phossler, There are two types of ways of getting the tool (both mentioned in the main project page): From GitHub's releases page https://github.com/fernandreu/office-ribbonx-editor/releases as you mentioned From Azure Pipelines https://dev.azure.com/fernandreu-public/OfficeRibbonXEditor/_build/latest?definitionId=1&branchName=master The latter is a development build, in the sense that one new version gets automatically generated as soon as I make some changes to the master branch of the project. You usually get more bugs fixed in those, which is why I normally suggest people to download one of those after I have completed an issue they created (in your case, see this https://github.com/fernandreu/office-ribbonx-editor/issues/59 ). The downside of those development builds is that, if I have just made recent changes to the tool, there might also be new bugs introduced that I have not fixed yet. I create releases at a much slower pace than development builds mainly because of this (and also to avoid annoying users with too many Version x.y.z available! messages 😅). The first three digits (1.5.1) are manually changed by myself before pushing the changes to the repository. The fourth version digit (463 in your case) will automatically refer to the build ID. You can see the most recent builds here https://dev.azure.com/fernandreu-public/OfficeRibbonXEditor/_build?definitionId=1&_a=summary . The build ID appears in the URL of each build; 482 https://dev.azure.com/fernandreu-public/OfficeRibbonXEditor/_build/results?buildId=482&view=results is currently the latest. I am just preparing for another release in the next few weeks, so the development build should be quite stable. You can only see builds from the last 30 days (or associated to a Release Pipeline in Azure, which is not the same as a GitHub Release), so build 463 is no longer in that list. — You are receiving this because you authored the thread. Reply to this email directly, view it on GitHub https://github.com/fernandreu/office-ribbonx-editor/issues/78?email_source=notifications&email_token=AH3OZNYWNTRJBWCWB5BMJR3QUY6ATA5CNFSM4JPZO4P2YY3PNVWWK3TUL52HS4DFVREXG43VMVBW63LNMVXHJKTDN5WW2ZLOORPWSZGOEEZKFXY#issuecomment-556966623 , or unsubscribe https://github.com/notifications/unsubscribe-auth/AH3OZN66ZDSAAJUUHVGP62TQUY6ATANCNFSM4JPZO4PQ . https://github.com/notifications/beacon/AH3OZNZ2RK7M64FDQ2SO4JTQUY6ATA5CNFSM4JPZO4P2YY3PNVWWK3TUL52HS4DFVREXG43VMVBW63LNMVXHJKTDN5WW2ZLOORPWSZGOEEZKFXY.gif No problem! The Artifacts section is kind of in the middle once you click on a build, not in the left side. See the bottom-right corner of this screenshot: The Releases section at the left is a different kind of pipeline that I will start using for the next official release on GitHub. It isn't exactly active at the moment (just pointing to a random build). Finally, since you mention you are learning GitHub, and just in case you did not notice: when you reply to an issue via email, everybody can still see the answer in here, including your signature, email, etc., and with everything formatted pretty badly usually. Sometimes even the entire email chain (other times, GitHub manages to filter that out). I have updated your comment to remove some bits just in case you did not want to share them. Ahh - thanks for both Paul
GITHUB_ARCHIVE
Order ambien online overnight reviews 5 stars based on 826 reviews cheap zolpidem singapore Postman distinguishes the Orwellian vision of the future, in which order ambien online overnight totalitarian governments seize individual rights, from that offered by Aldous Huxley in Brave New World, where people medicate themselves into bliss, thereby voluntarily sacrificing their rights. Poland A lithistid demosponge belonging to the family Pleromidae. He appointed Adam, though was quite sceptical about his ability to perform as the department's leader. Treatment by surgical intervention can obviously have the most immediate impact, again however, it is not a order ambien online overnight cure-all. Because stressors are the main cause of order ambien online overnight labile hypertension, common treatment may involve prescription medications such as anti-anxiety tablets to reduce emotional stressors, and otherwise, as well as decrease the risk of labile hypertension. One of the main functional systems of the body affected by opioid use is the endocrine function. Stanshall had Mumbai Buy Alprazolam given her from a vivid dream he'd had while living on the buy ambien otc Searchlight. Miramax for domestic distribution of the film. There are a group known as the Four Heavenly Kings, one for each cardinal direction. Dom Philippe who resided in Goa. Fluminorex is a centrally acting sympathomimetic which is related to other drugs such as aminorex and pemoline. Plasma levels of chloramphenicol must be monitored in neonates and patients with abnormal liver function. Brahms's personal life was also troubled. The patent law was revised in 1844 - patent cost was lowered and importation patents were abolished. Based order ambien online overnight on style analysis, it has been dated to the late 9th or early 10th century. The first three notes of the Alphorn theme create are presented in a swelling crescendo which resolves in a drawn out conclusion over pounding timpani followed by a quiet chord dying in the brass. The kidneys ambien uk buy online contribute to overall homeostasis in the body, including carnitine buying ambien online levels. Biamonti Catalogue, an attempt to catalogue everything that Beethoven wrote in chronological order, though there are buy generic zolpidem 10mg uk works that were not known at the time it was compiled. Best was in town buying drumsticks, so Starr, the Hurricanes' drummer, played drums. Milk and dairy products have the potential for causing serious infection in newborn infants. So a chain order ambien online overnight of anomalies is set up. proof against snowstorm or shutdown. Like the first movement, the third movement opens with an ascending, hesitant, three-note motif that conveys considerable rhythmic ambiguity. H2-antihistamines bind to histamine H2 receptors in the upper gastrointestinal tract, primarily in the stomach. People with narcolepsy may dream even buy zolpidem online usa when they only fall asleep for a few seconds. Clay tile pipe carried the sewage from the flush, sit toilet to the main sewer line running under the street. order ambien online overnight Both include cheap ambien 10mg mastercard four semi-domes, but the two lateral semi-domes are very shallow. Reported side effects related to the accumulation of this metabolite include convulsions, agitation, hallucinations, hyperalgesia, and coma. Oxycodone is also widely available across Western Canada, but methamphetamine and heroin buy ambien mastercard are more serious problems in the larger cities, while oxycodone is more common in rural towns. Perhaps the first most important, aside from the ratification of the 1987 Constitution, would be the administration's pushing for a more open political framework where the administration somehow gave in to the buy real ambien online interests of new economic actors. Possible effects in the central nervous system resemble those associated with delirium, and order ambien online overnight may include: The order ambien online overnight film initially faced no significant reprisals. This does not buy real zolpidem always happen. The order ambien online overnight type species is Talexirhynchia kadishi. Future research is needed to find ways of not only controlling frontal lobe seizures, but of also addressing the specific quality-of-life issues that plague those with frontal lobe epilepsy. Overdoses involving fentanyl have greatly contributed to the havoc caused by the opioid epidemic. ATM owners to either upgrade non-compliant machines or dispose them if they are not upgradable, and purchase new compliant equipment. The vast majority of Dussek's music involves the piano or harp in some way. Ginkgolides are biologically active terpenic lactones present in Ginkgo biloba. The binding between a drug and plasma protein is rarely specific and is usually labile and reversible. She was always a little vulnerable, courageous, but vulnerable. In the case of the phenyltropanes, although there are four chiral carbons, there are only eight possible buy cheap zolpidem 10mg mexico isomers Buy Cheap Xanax 1mg Canada to consider. Chairman Brown lost order ambien online overnight patience with Redknapp due to his demands for further transfer funds. Self-medication is a human behavior in which an individual uses buy generic ambien 10mg a substance or any exogenous influence to self-administer treatment for physical or psychological ailments. Since there is no other order ambien online overnight known clavier four-hand work dated to this time, this work, K. Artisan producers in Europe and New Zealand have offset their higher labour charges for order ambien online overnight saffron harvesting by targeting quality, only offering extremely high-grade saffron. After this, the music changes from A minor to A major as the clarinets take a calmer melody to the background of light order ambien online overnight triplets played by the violins. Blood benzodiazepine concentrations, however, order ambien online overnight do not appear to be related to Cheapest Valium 10mg Online Canada any toxicological effect or predictive of clinical outcome. buy cheap ambien 10mg uk Jackson's commercial appeal and public image declined in the buy ambien online ireland buy american zolpidem wake of the allegations. Owing to its westerly position and proximity to the Atlantic Ocean, Glasgow is one of Scotland's milder areas. TRPV1, which can also be stimulated with heat, protons and physical abrasion, permits cations to pass through the cell membrane when activated. Soon afterward, El Sapo is found dead, and Dexter is called in to investigate. Buprenorphine under the tongue is often used to manage opioid dependence. If the voltage increases past a certain threshold, the sodium current activates other voltage-gated sodium channels transmitting a current along the dendrite. Amy was looking very frail but she was determined to have order ambien online overnight a couple of drinks even if it was against doctors' orders. Producer Dave Cox used Tony Soprano from the television drama The Sopranos as an example Order Tramadol 50mg Online Mastercard in order ambien online overnight regards to Dracula's characterization. Richelle Cooper, an emergency room physician at UCLA Medical Center, testified fifth. Xanax 1mg Buy Online The College was not able to pay a full-time salary, so Stoehr was assisted by at least one refugee aid organization. The following day, David A. However, in later years, Lambesis showed an increasing philosophical skepticism towards Christianity and religion in general. Ives's work is regularly programmed in Europe. This show highlights the various idiosyncrasies of our characters. There is not a single drug that is a standard among treatment. After Jackie's second pregnancy ended in a miscarriage, order ambien online overnight she turned to alcohol. Hinduism is the most widely professed faith in India, Nepal and Mauritius. The acts define the penalties for unlawful production, possession and supply of drugs. Eccles could understand Little Jim's speech, even Little Jim himself having no idea. The outer casing of pacemakers is so designed that it will rarely be rejected by the body's immune system. Welch also announced that he had re-dubbed his album Save Me from Myself, after his autobiography of the same name. Everyone who lives on Wisteria Lane knows who Mitzi is and they often stay away from her due to her bad temper and selfish attitude. Much minimalist and totalist cheap ambien online europe music makes extensive use of polyrhythms. Kirk has repeatedly raised concerns of American businesses that China is not properly enforcing intellectual property rights of order ambien online overnight American companies doing business there. These 'dance-like' movements of chorea often occur with athetosis, which adds twisting and writhing movements. buy walmart zolpidem Attitudes towards hallucinogens other than cannabis have been slower to change. Women starting an estrogen-containing oral contraceptive may need to increase the dosage of lamotrigine to maintain its level of efficacy. I've made sure I've always played a order ambien online overnight part in their lives. Bach's dealing with ornamentation can also be seen in a keyboard arrangement he made of Marcello's Oboe Concerto: The zebrafish, like the axolotl, has played a order ambien online overnight key role as a bridge organism between invertebrates buy cheap zolpidem online usa and mammals. CX-546 is an ampakine drug developed by Cortex Pharmaceuticals. Long-term exposure to levels in this range can be linked to an increased risk of order ambien online overnight cancer and, at levels above this range, there can also be a risk of respiratory illness. As such, they are only generally associated with such effects in women and often only at Buy Xanax London high doses. It is made into a candy called shoga no sato zuke. His first target is order ambien online overnight Derek, who he kills by ramming Susan's curling iron through his mouth. The opponents and supporters of chloroform were buy cheap zolpidem 10mg visa mainly at odds with the question can you buy ambien online of whether order ambien online overnight the complications were solely due to respiratory disturbance or whether chloroform had a specific effect on the heart. The diagnosis depends on two factors, namely chronicity and order ambien 10mg visa reversibility. pulse oximetry, capnography, peripheral nerve stimulation, noninvasive blood pressure monitoring, etc. His powers were passed on to his daughter, and to Clara after he dies of natural causes. An Egyptian study. order ambien online overnight Buy Cheap Ambien Online Usa
OPCFW_CODE
Godot 4 has just been released! I I cant believe it even exists its so amazing. The scripting language is similar to python, but totally specialised for making games. Code can be edited LIVE and the running game updates. A beautiful fully featured IDE. The node system is very intuitive. 2D and 3D. A full build of a project and deploy to the Quest takes 20 seconds. I am in love with Godot. . Its the best thing since MSX Basic. It is the future of game development and will be used not only by indies but also in schools, colleges to teaching coding. It’s 100% free, open source, cross, platform, fully featured, versatile and light-weight game engine. If you are interested in checking out my Godot experiments this is my repo with lots of examples of Boids, transforms, timers etc etc. Recently, I posted a link to an article discussing the best way to give an ice bath to a baby on my Facebook page. Many people got upset and offended. People berated me for promoting the practice. People told me I should take the page down as a baby might get killed. The truth is – I have no idea if giving an ice bath to a new born baby is a good idea or not. I doubt it! That wasn’t the point of the article. The page in question was generated by AI. I used chatgpt3 to generate the articles from prompts like “Write an article for a women’s magazine about whether to lower babies into ice baths by the head or by the feet” and “Write testimonials from parents about the benefits of giving daily ice baths to babies”. The resulting articles were creepy, hilarious and brilliant. I used Stable Diffusion to generate various pictures of smiling babies in baths of ice and put the whole lot into a Facebook page. I shared the article and it was interesting to observe how many people accepted it as real without noticing that all the babies have extra fingers and limbs, and the text doesn’t make sense in some of the articles. I was inspired to create Ice Baths for Babies by these cool projects: The Leeds 13 In 1998 a group of art students faked a trip to Malaga and leaked to the press that they had spent their grant money on the holiday. The group kept the pretence up for a week and revealed the truth a week later on a radio program they were invited onto to answer accusations from a hostile panel of commentators. Here is a great Vice documentary about the project: A Modest Proposal “A Modest Proposal For preventing the Children of Poor People From being a Burthen to Their Parents or Country, and For making them Beneficial to the Publick is a satirical essay published anonymously by Jonathan Swift in 1729. The essay suggests that impoverished Irish families might ease their economic troubles by selling their children as food to the rich. The essay includes various suggestions for ways that children might be cooked. For example: “A young healthy child well nursed, is, at a year old, a most delicious nourishing and wholesome food, whether stewed, roasted, baked, or boiled; and I make no doubt that it will equally serve in a fricassee, or a ragout”. The fact is – there are amazing things happening in the field of Computer Science right now. chatgpt3 and Stable Diffusion are statistical models trained on billions of examples of text, question and answers, images and videos. The are the non-conscious intelligences, that Yuval Noah Harari wrote about in his book Homo Deus. The are not people or animals. The do not have a soul, nor are they alive. But they are not search engines either. They are something totally different and nobody – not even the computer scientists knows quite what they are and what to do with them yet. I am thinking or all the ways this technology will make tedious tasks like writing exam papers, grant applications and code examples less time consuming and will improve my productivity. They amazing tools for creative thinking, though of course they make creating and spreading fake news really easy too which is all the more reason for people to question. Hopefully this project got you thinking about Computer Science, Ethics, Fake News, Cognitive Biases Art, etc. etc. As Oscar Wilde wrote: “There is no such thing as a moral or an immoral book. Books are well written or badly written. That is all.” I recently read a fascinating thesis about nematodes. Nematodes are the most abundant multicellular organisms on the planet and there are around 40 quintillion of them. Inspired by the humble nematode, I wrote two programming lab tests. This one is for fourth year Games Engines students and it uses C# and Unity to create a simulation of a school of nematodes swimming and wriggling and avoiding each other: I recently was interviewed as part of an ITMA project in the Cobblestone pub by the renound Tom Mulligan. In the interview, I play a few sets of tunes, talk about Tunepal, Wim Hof breathing and various other things.
OPCFW_CODE
On-the-fly Calculations: Pros and Cons The term “on-the-fly calculation” refers to all the calculations executed directly at the layout level, meaning that the result is not physically stored in a Cube. This directly affects drill-down functionality and has advantages in terms of reducing the overall size. During the development phase the developer chooses to leverage some of the on-the-fly calculations or create an ad hoc structure in the data model and complex dataflows to achieve the same result. On-the-fly calculations are an alternative to dataflows with a fundamental difference in how data are calculated and stored in the application. On-the-fly calculations calculate “on-the-fly,” which means that the calculation is executed directly at the layout level. The result is visible in the application without being stored in a Cube. On the other side, dataflows work and store the result on a Cube, so data are physically stored in the data model. The direct implication is how the data, which can be the result of a data flow or an on-the-fly calculation, can be analyzed, sliced, drilled, and used in other parts of the application. On-the-fly calculations can be executed leveraging Aggregation Functions, Algorithms, and Rules. - The aggregation functions are predefined formulas that can be recalled from a layout and applied to a data block. - The algorithm (also called column algorithm) is a data block whose values are calculated with a formula based on other data blocks. - The rule is a set of formulas defined by members of the same Entity. A rule is always associated with a single Entity and can be used with (or applied to) all Cubes with this Entity as a dimension in their structure 3.1 Aggregation Functions Aggregation functions are predefined formulas that can be recalled from a layout and applied to a data block and allow a calculation result to show directly in a report. This avoids creating specific dataflow procedures and dedicated Cubes to achieve the same result. 3.1.1 Why and when to use an AGGREGATION FUNCTION instead of a dataflow? The aggregation functions are predefined formulas that can be recalled from a layout and applied to a data block and allow a calculation result to show directly in a report. This avoids creating specific procedure dataflows and dedicated Cubes to achieve the same result. The suggestion is always to choose the aggregated functions and use the dataflow only if the results need to be in a Cube or because it’s used as a source for other calculations. The aggregation functions are configured in the front-end layer, so they only require a Power User license. 3.1.2 Drill-down functionality for aggregation functions Aggregation functions also support drill-down functionality, as you’d expect. When you drill down using the Entity that is the aggregation function’s driver, the result retrieved is from the raw Cube data. See the example below: Layout set up: - ACT Quantity - Aggregation Function: Distinct count on Item - Aggregation Function: Distinct count on Customer Drill down by item: - The column Distinct count on Item returns the ACT quantity row data - The column Distinct count on Customer returns the number of distinct customer by Item The algorithm is a data block whose values are calculated with a formula based on other data blocks. They also don’t store data in the database and won’t increase the Cubes number and, consequently, the database size. This helps improve program efficiency 3.2.1 Why and when to use an ALGORITHM instead of a dataflow? The main difference between the algorithm and the dataflow is the way the calculation is executed, in particular: - The algorithm performs the calculation at the level of the dimensions configured in the dataview that could be more aggregated than the granular level of the Cube used. For example, a Cube’s dimensions by product, customer, and month can be displayed in the dataview aggregated by month and product category. Whatever formulas are configured in the column algorithm will be executed at the month and product category level. - The dataflow always executes the calculation at the granular level of the Cubes involved in the calculation. The result can then be displayed aggregated at any level that makes sense, based on data structure and hierarchy. For this reason, it becomes clear that some calculations do not make sense if executed at an aggregated level. See the example below: Example: To calculate the Total Sales as Price * Quantity (both detailed by Item). Case 1 (Red) : Sales amount calculated by the algorithm is correct only if the layout is configured with the item’s granular information (Figure 2). As soon as the item dimension is removed and the layout aggregated by item category, the price is aggregated, and the sales amount calculation is wrong (Figure 3). Case 2 (Green): Sales amount calculated by dataflow is executed at the lowest level of detail of the Cubes involved (i.e., the Item) and then aggregated in the dataview; hence the calculated sum is correct (Figure 3). This is the primary driver to decide. Then, if the calculation is adequate, and the result doesn’t need to be stored in a Cube, it’s always preferable to use column algorithms. They’re executed at the layout level every time the Screen is refreshed; they don’t require the user to run a procedure or the developer to create all the data model structures needed. As for what concerns the aggregation functions, the algorithm is managed in the front end, and it only requires a Power User license to be maintained. 3.2.2 Drill-down functionality for algorithms In the case of algorithms, the drill-down functionality works as expected, allowing analysis at a deeper level of detail than the one shown in the dataview. Example: Report with the variation between sales of current and previous months. This report is aggregated at Item Category. We need to analyze the details by item. By drilling down on the ITEM_CAT01 by Item, the algorithm correctly calculates the Variation Percentage (Figure 4) Rules are sets of formulas defined by members of the same Entity. A rule is always associated with a single Entity and can be applied to any data block with that entity as a dimension in its structure. 3.3.1 Why and when using a RULE instead of a dataflow? It’s always recommended that you use rules when the target and the factors used for the calculation are part of the same Entity. To execute the same through a dataflow, combining different functionality such as selection and referring to and applying them in a specific sequence creates multiple dataflow steps. Example: the Gross Profit is calculated as the Revenues minus the Cost of Goods Sold. These three measures are included inside the same entity P&L Account, and a rule is created based on this Entity. Two Cubes share the P&L Account dimension: the Actual and the Budget. The P&L rule can be applied to both the Cubes, and the Gross Profit is then automatically calculated without creating additional data-model structures and related dataflows steps. 3.3.2 Drill-down functionality for rules The drill down can’t be applied to cells calculated through a rule. The drill down works as expected for all the other lines that are not the result of a rule. To allow the drill down on these cases, it’s necessary to consolidate the rule in a dataflow procedure. In this way, the rule is applied during the execution of the dataflow, and the result is physically stored, and the drill down applicable. The consolidation through dataflow requires high resource consumption, which can lead to low performance. It’s advised to consider the rule consolidation only in the case described above or when the result is needed as a source for other calculations. In all the other cases, it is sufficient to apply the rule directly in the dataview. Except for the cases described above, where there is no alternative to the dataflow, it is always advisable to use on-the-fly calculations: - They are automatically calculated and refreshed at the layout level. - There is no need for the user to press any buttons to run procedures to refresh or trigger the calculation. - There is no need for the developer to create ad hoc structures and complex dataflow to achieve the same result. This is also affecting the data model size. - They are predefined formulas, so they can easily be configured, managed, and maintained. - All Categories - 1.8K Forums - 1.7K Platform - 99 Academy - 268 Resources - Board Manual - 49 Best Practices - 41 How-To Guides - 25 Board Advocacy Program - 153 Blog - Groups Hub - 4 About Groups - New Community Members - 2 Board Academy - 7 ILT/VILT Course Catalogue - 13 e-Learning Course Catalogue - 4 Academy Forum - 1.1K Idea Exchange - 270 Partner Hub - 1 Solution Sales Toolkits - 91 Support - 14 FAQ's - Customer Support Portal - 54 Support Articles
OPCFW_CODE
import logging from danger_zone.result_serialization.csv_reporter import CSVReporter from danger_zone.simulation import Simulation class Experiment: """Class representing an experiment, consisting of multiple simulation runs.""" def __init__(self, args): """ Constructs an instance of this class. :param args: The parsed commandline arguments passed to this program. """ self.args = args self.num_iterations = args.num_iterations self.csv_reporter = CSVReporter(args) def run(self): """Runs the experiment.""" for iteration in range(self.num_iterations): logging.info("Running simulation iteration {}/{}".format(iteration + 1, self.num_iterations)) simulation = Simulation(self.args, iteration, self.csv_reporter) simulation.run() self.csv_reporter.close()
STACK_EDU
SQL Server can be hosted entirely in Microsoft Azure, either in a hosted virtual machine (VM) or as a hosted service. Hosting a virtual machine in Azure is known as infrastructure as a service (IaaS), and hosting a service in Azure is known as platform as a service (PaaS). Microsoft’s hosted version of SQL Server is known as Azure SQL Database or just SQL Database that is optimized for software as a service (SaaS) app development. SQL Server also can be deployed in a hybrid cloud scenario, extending your on-premises SQL Server environment to utilize various features of the Azure platform: SQL Server backup to URL: You can back up your database directly to Azure blob storage or back it up to an on-premises file store and then copy it to Azure blob storage. Using this option can save precious storage space on expensive local storage. SQL Server data files in Azure: You can use Azure blob storage for database files for an on-premises instance of SQL Server. Although this option primarily is used with Azure virtual machines, it has its place when developing and testing functionality in some scenarios. Stretch SQL Server table to Azure SQL Database: You can stretch an on-premises table to store cold and warm data (older data) in Azure SQL Database while hot or current data (more recent data) remains in the on-premises table—with all of the data being available to query. This option became available in SQL Server 2016 and is a great way to archive infrequently accessed data off your local storage. This option can save money and increase performance for online transactional processing (OLTP) operations on hot data, while the data remains available for analytical queries. Transactional replication to Azure SQL Database: You can use transactional replication to replicate data from an on-premises or IaaS SQL Server database to Azure SQL Database. This option is useful to replicate data close to different groups of users to improve query performance and as a prelude to migrating to Azure, enabling you to minimize downtime during migration. AlwaysOn Availability Group replica in IaaS: You can configure SQL Server in IaaS as an asynchronous replica of an AlwaysOn Availability Group. This option provides you with a low-cost disaster recovery scenario and can be used as a prelude to migrating to Azure, allowing you to minimize downtime. Finally, Microsoft offers an additional PaaS service using SQL Server for data warehouse solutions, called Azure SQL Data Warehouse. Azure SQL Data Warehouse is an enterprise-class, distributed database capable of processing massive volumes of relational and non-relational data. Source of Information : Migrating SQL Server Databases to Azure One of the misconceptions about cloud storage is that it is only useful for storing files. This assumption comes from the popularity of file... On today’s Internet, IPv4 has the following disadvantages: • Limited address space. The most visible and urgent problem with using IPv4 on ... The following are the advantages of WAP: ● Implementation near to the Internet model; ● Most modern mobile telephone devices support WAP; ... Many of the virus, adware, security, and crash problems with Windows occu when someone installs a driver of dubious origin. The driver suppo...
OPCFW_CODE
Last modified: 2012-10-03 16:42:51 UTC Branching off from bug 35497, which is about setting up an easy-as-possible pull-request link to Gerrit once the mirror is set up (Cite bug 35497 comment #13) > So, some ideas: > * Don't use github.com/mediawiki/core as name > - means we duplicate user groups/rights with the org "Wikimania" at github > - means we're going to promote ourselves as "core". e.g. > - any advantages? > hashar mentioned we want to be able to allow volunteers to help out with the > handling of pull-request and that to grant them rights we'd want to have it > outside the @wikimedia organization. However this is not needed because: > * github allows collaboration without any rights at all. You can leave > inline comments, and stuff on any pull request anywhere > * as in comment 12, they are pull-able in many different formats including > plain "git pull" so anyone can pull it locally, and if they have a labs ldap > account they can also push straight to gerrit for review. > * github supports auto-closing of pull-requests when the commit hash is > pushed into the repo, so no maintenance there either. And if the commit has to > be amended, including "fixes GH-123" or "closes GH-123" will also close the > relevant PR as soon as it is merged. And if all else fails, a repo collab in > @wikimedia (of which there are many) can just push "Close" manually on > So volunteers have complete access without needing to be manually added to > anything, this is what made GitHub works. And if its only about closing some > exceptional ones, then I'm sure we'll manage that. > * Use github.com/wikimedia/mediawiki-core > - ideally we'd have some kind of auto-push from gerrit or jenkins, otherwise > just set up a 30 minute cron somewhere to `pull gerrit -f` and `push github > - admin settings: pull-request: true, issues: false, wiki: false > https://github.com/wikimedia How is this not a duplicate of bug 35429? I'll be doing all the repos at once when I setup replication, not one-by-one. (In reply to comment #2) > How is this not a duplicate of bug 35429? I'll be doing all the repos at once > when I setup replication, not one-by-one. I didn't say it wasn't a dupe. But since it appears there is no clarity yet on how the gerrit or jenkins replication is going, I figured we may be able to set up at least something on the short term for mediawiki-core so that its out there and we can start experimenting with how to solve other issues such as what and if we need a tool for pulling in pull-requests (or that git pull github.com...; git push gerrit HEAD:refs/for/master/gh-123/some-feature; is sufficient). I mean.. I could just ask for the repo and set a cron up to mirror and get it going. Seems low hanging fruit. But if bug 35429 can be done within say 2 weeks, then by all means dupe it. Github replication for core is now in place: https://github.com/mediawiki/core
OPCFW_CODE
Find orders placed on last day of any month using sargable query? I wrote this query to find orders placed on last day of any month. I know this approach is not recommended if orderdate is indexed. What approach should i use to make it sargable? select o.orderid, o.orderdate, o.custid, o.empid from sales.Orders o where day(o.orderdate) in (30, 31) or (month(o.orderdate) = 02 and day(o.orderdate)= 28) or (month(o.orderdate) = 02 and day(o.orderdate)= 29); You can do this with computed columns: alter table Orders add column nextdayofmonth as day(dateadd(day, 1, orderdate)); create index orders_nextdayofmonth on orders(orders_nextdayofmonth); The nextdayofmonth is for the next day, so leap years can easily be handled. After all, the day after the "last day" is the "first day" of the next month. Then phrase your query as: where next_dayofmonth = 1 This expression is sargable. DATEADD is sargable: WHERE DATEADD(day, DATEDIFF(day, 0, o.orderdate), 0) = DATEADD(day, -1, DATEADD(month, DATEDIFF(month, 0, o.orderdate) + 1, 0)) The first is just the old way to truncate the time from a datetime. The second adds one month, "truncates" the month and subtracts a day. Here's a fiddle that returns the last day of the current month with the same "trick". Can you explain this part DATEADD(d, -1, DATEADD(m, DATEDIFF(m, 0, o.orderdate) + 1, 0)) I'm not getting how +1 works here. This should work fine select o.orderid, o.orderdate, o.custid, o.empid from sales.Orders o where day(o.orderdate)=day(DATEADD(ms, -3, DATEADD(mm, DATEDIFF(m, 0,o.orderdate) + 1, 0))) The below query gives the last day of current month, replace getdate() with the date variable as shown above: SELECT day(DATEADD(ms, -3, DATEADD(mm, DATEDIFF(m, 0, GETDATE()) + 1, 0))) @t-clausen.dk Is "=" not SARGable? @Punter015 Putting columns in functions within your WHERE clause is not SARGable. This query would fail for leap years as it would give you 2 dates as the last day of the month in February. To make it SARGable, you would need to take out the functions on the date column. You can also do is: select TOP 1 orderid, orderdate, custid, empid from sales.Orders ORDER BY orderdate DESC Get all dates that DAY part of its tomorrow is 1: SELECT * FROM Sales.Orders WHERE DAY(DATEADD(dd, 1, orderdate)) = 1
STACK_EXCHANGE
How to wait for completion of all asynchronous tasks before running final tasks I must be misunderstanding dispatch_group because my dispatch_group_notify call is running before the end of the async calls made within individual dispatch_group_async blocks. Here's my code: dispatch_queue_t queue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0); dispatch_group_t dispatchGroup = dispatch_group_create(); // create operation for each HKTypeIdentifier for which we want to retrieve information for( NSString *hkType in typesToRetrieve){ dispatch_group_async(dispatchGroup, queue, ^{ // this method runs several HK queries each with a completion block as indicated below [self getDataForHKQuantity: hkType withCompletion:^(NSArray *results) { // this completion blocks runs asynchronously as HK query completion block // I want to runCompletionBlock only after // all these processResultsArray calls have finished [self processResultsArray:results]; }]; }); } dispatch_group_notify(dispatchGroup, dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{ [self runCompletionCheck]; }); The method getDataForHKQuantity in turn runs an asynchronous query to HealthKit with a completion block. I need to run runCompletionCheck after all these completion blocks for the HealthKit queries have run, but what is happening now is that runCompletionCheck is running before the code in the queries' completion blocks has run. To me that means that dispatch_group_notify along with dispatch_group_async don't work the way I need, so what am I doing wrong or what's the best way to handle this? Overall goal: make a bunch of concurrent queries to HealthKit, run their completion blocks, then when all those completion blocks run, run a final method. The problem is two fold. First, the health kit queries don't always run their completion blocks. I started by using a counter system, with a counter in the health kit queries' completion blocks. That's what told me that these completion blocks don't always run. Second, I don't know how many queries I am trying to run because it depends on what data sources the user has. So, question, how can I wait until all the completion blocks from a series of health kit queries have run before running a final method? Your -getDataForHKQuantity:withCompletion: method is asynchronous. So, through your dispatch groups you are syncing the calls to these methods, but not the work done in the methods themselves. In other words, you are nesting two asynchronous calls, but syncing only the first level through you dispatch groups. You'll need to come up with a different strategy for controlling your program flow. Two examples: 1. Using Semaphores (blocking) Some time ago, I used semaphores for a similar task, not sure it's the best strategy, but in your case it would go sth like: semaphore = dispatch_semaphore_create(0); for( NSString *hkType in typesToRetrieve) { [self getDataForHKQuantity: hkType withCompletion:^(NSArray *results) { // register running method here [self processResultsArray:results]; if (isLastMethod) // need to keep track of concurrent methods running { dispatch_semaphore_signal(semaphore); } }]; } // your program will wait here until all calls to getDataForHKQuantity complete // so you could run the whole thing in a background thread and wait for it to finish dispatch_semaphore_wait(semaphore, DISPATCH_TIME_FOREVER); 2. Using dispatch_group dispatch_group_t serviceGroup = dispatch_group_create(); for( NSString *hkType in typesToRetrieve) { dispatch_group_enter(serviceGroup); [self getDataForHKQuantity: hkType withCompletion:^(NSArray *results) { [self processResultsArray:results]; dispatch_group_leave(serviceGroup); }]; } dispatch_group_notify(serviceGroup,dispatch_get_main_queue(),^{ // Won't get here until everything has finished }); Also check this link for further info. I wouldn't use semaphores, as they block and you may be on the main thread. You still may use dispatch_group but you have to use dispatch_group_enter() and dispatch_group_leave(). You have to enter before the async call, then leave in the competion block.
STACK_EXCHANGE
What Is Localized Testing - Introduction Localization testing is the process of testing a product's localization quality for a specific target region. This testing involves installing the localized software on a localized operating system and testing the software's functionality, installation/uninstallation, and hardware/software compatibility specific to the local region. During localization testing, the testing work focuses on aspects affected by localization, such as UI and content, and region-specific, language-specific, and region-specific aspects. Basic functionality testing, installation and upgrade testing run in the localized environment, and application and hardware compatibility testing are also included. The items to be checked during the localization testing of the user interface and language include verifying all application resources, accuracy and resource attributes of the language, layout errors, consistency checks for written documentation, online help, messages, interface resources, command key sequences, etc., and checking compliance with system, input, and display environment standards. Purpose of Localization Testing The purpose of software localization testing is to ensure that the localized software has the same functionality and performance as the source language software. It also ensures that the localized software conforms to the language, culture, and traditional concepts of local users. Testing Strategy for Localization Testing Localized software must be installed and tested on various localized operating systems. The source language software is installed on another identical source language operating system as a comparison test. The focus is on testing software functionality and interface errors caused by localization. The translation quality of localized software is also tested using a combination of manual testing and automated testing. Main Contents of Software Localization Testing The testing content is determined by different testing phases. For example, in the first build, software interface testing is the main focus. In the middle builds, the focus is on functionality and interface, while in the final build, installation/uninstallation, software help and main functions are tested. Testing is done to determine if localized software can be correctly installed/uninstalled on a local language operating system, including whether it supports local language installation directory names. Changes in installation files, shortcuts, program icons, and registry entries before and after installation/uninstallation should be consistent with source language programs. Localized software functionality must be the same as that of source language software. It must also support local language input and output, such as support for double-byte characters and correct display. Support for local dates, times, currency symbols, file names, and directory names is also tested. The layout and aesthetics of buttons and menus in the software installation window should be reasonable and attractive. After the software is running, the layout and localization of interface elements, including menus, shortcuts, dialogs, screen prompts, buttons, and list boxes, as well as font and font size localization, should be correct. The translation of interface text should be consistent with the terminology table, and there should be no untranslated elements. The functionality of localized help files should be the same as that of source language software. The layout of localized help files should be reasonable and attractive. The translation of help file text should be accurate, and professional, and there should be no untranslated paragraphs. Tools Recommend: LQA by WeTest What is localized testing? If you're looking for a reliable solution for localization testing, we recommend trying out WeTest's LQA service. WeTest offers multilingual coverage in 32+ languages across Europe, America, Japan, South Korea, Southeast Asia, the Middle East, and other regions. Plus, they can quickly expand their language support based on your project needs. WeTest prioritizes cost reduction and resource management to improve your product's quality throughout the testing process. And it’s efficient and high-quality service includes quick setup of a support team, coordination and response to urgent needs, parallel labor division across multiple teams, and real-time feedback for transparent and controllable progress.
OPCFW_CODE
Light intensity control Control over the lighting of the models/scenes is currently limited to the --light-intensity option from the command line. It would be nice to be able to have additional ways to change the intensity more interactively and/or more automatically. I have made a proof-of-concept exploring 2 ways of achieving that: Manual increase and decrease intensity with L and shift+L keys. Pretty straight forward and self-explanatory. Guesstimation of an "optimal" intensity for the given view when pressing ctrl+L. This is basically an auto-exposure feature that will run a few render passes at different intensities to try and optimize the result. This could be exposed as an option to be performed automatically after the model is loaded (could be good for thumbnail generation?). This is implemented in the following branch: snoyer:light-intensity-control if anyone is willing to give it a try and discuss if/how it could be properly integrated. That is great ! I will take a look this weekend. Could you create a WIP PR ? That would make it easier. BTW did you consider adding a --light-intensity-optimal ? This would be a great feature indeed. However, I'd like to try modifying the exposure coefficient of the tone mapping pass instead of playing with the light intensity. Right now, the tone mapping pass is handled here: https://github.com/f3d-app/f3d/blob/52e51527e48d6d75d5e875f829368caf1f6316a4/library/VTKExtensions/Rendering/vtkF3DRenderer.cxx#L173 There is an exposure coefficient unmodified by f3d (default is 1). Can you try to create a new option render.effect.tone-mapping.exposure, use this option to call SetExposure, and modify this with your keybinds? I suspect you will get better image quality with this method. I'm also concerned about the iterative method doing a render and traversing all the pixels every iteration, it might be too expensive. Thanks for the feedback guys, definitely a lot of loose ends right now which is why I opened an issue rather than a PR. Adding --light-intensity-optimal definitelly make sense but I wonder if the results varies a lot with the default scene, what kind of results did you get ? The results do vary depending on the model, there's a lot of margin for improvement on the actual metric use to evaluate what is good lighting or not. I'll try and get a few examples of what I'm able to get with the few models I played with. There is an exposure coefficient unmodified by f3d (default is 1). Can you try to create a new option render.effect.tone-mapping.exposure, use this option to call SetExposure, and modify this with your keybinds? I suspect you will get better image quality with this method. I'll look into that. Also does this mean that the whole --light-intensity thing was misguided and we should have used tone mapping instead? I'm also concerned about the iterative method doing a render and traversing all the pixels every iteration, it might be too expensive. Is there any way around that? It's probably doable more efficiently if we can access rendering buffers more directly, or maybe render to a smaller offscreen buffer (which would statistically give the same results with way fewer pixels to go through) but how else can we evaluate the rendering without getting a good look at the pixels somehow? I'll look into that. Also does this mean that the whole --light-intensity thing was misguided and we should have used tone mapping instead? I think light-intensity is still needed, but configuring the exposure coefficient would be great too. Both are not incompatible. Is there any way around that? It's probably doable more efficiently if we can access rendering buffers more directly, or maybe render to a smaller offscreen buffer (which would statistically give the same results with way fewer pixels to go through) but how else can we evaluate the rendering without getting a good look at the pixels somehow? There are many suggestions here: https://knarkowicz.wordpress.com/2016/01/09/automatic-exposure/ However, I think it should be done directly on the GPU in VTK (in vtkToneMappingPass) That being said, I don't really like automatic things in F3D which is designed to be manually configured by the user. If we have key bindings to increase/decrease the exposure, I think it's good enough. That being said, I don't really like automatic things in F3D which is designed to be manually configured by the user. If we have key bindings to increase/decrease the exposure, I think it's good enough. Well that solves most of it then, doesn't it :) Implementing the code for these specific bindings shouldn't be a big deal; but should any bindings be added before a solution to #443 is decided on? Lets not wait for #443, I think this will be come aftern the next release as we are focused on other subject in F3D for now. Lets not wait for #443, I think this will be come aftern the next release as we are focused on other subject in F3D for now. Anyway, here's some examples of the lighting optimization I don't have to finish anymore now: Key takeaways: light intensity and exposure are pretty much the same except when stuff is shiny? finding a good metric would be hard and subjective @Meakk is right, let's not even go there, manual controls will do @snoyer this is fixed, right ? @snoyer this is fixed, right ? Yes, manual controls were added in #504 and automatic adjustment isn't really appropriate/necessary.
GITHUB_ARCHIVE
Linux has these nice little processes called LWP (Light Weight Process) or otherwise known as threads. Generally these are spawned by 1 master process that will show up in your normal ps output. # ps -elf | wc -l 145 So does this mean your system only has 145 processes running? No. If you run ps with a -T you will see all of the threads as well. # ps -elfT | wc -l 275 As you can see the process count jumped significantly due to threads. Normally this is never any type of problem, However sometimes a process (usually java based) will have a thread leak. This can cause your system to run into system limits. Specifically one limit I have seen systems hit is the # sysctl -a | grep kernel.pid_max kernel.pid_max = 32768 As you can see on my system the pid_max is 32768, this means the system can only give out 32768 PID's at one time (it will rollover if pid #'s are available). The reason threads come into play here is because each thread has a PID, and SPID number. The SPID's also take from the pid_max number. To see the number of threads one specific process is using you can do the following. # ps -p 2089 -lfT F S UID PID SPID PPID C PRI NI ADDR SZ WCHAN STIME TTY TIME CMD 0 S bcane 2089 2089 1 0 80 0 - 29457 poll_s 09:17 ? 00:00:09 gnome-terminal 1 S bcane 2089 2091 1 0 80 0 - 29457 poll_s 09:17 ? 00:00:00 gnome-terminal 1 S bcane 2089 2094 1 0 80 0 - 29457 pipe_w 09:17 ? 00:00:00 gnome-terminal The lesson for today, is watch your threads! Recently Benjamin published his first book; Red Hat Enterprise Linux Troubleshooting Guide. In addition to writing, he has several Open Source projects focused on making Ops easier. These projects include Automatron, a project enabling auto-healing infrastructure for the masses. Identify, capture and resolve common issues faced by Red Hat Enterprise Linux administrators using best practices and advanced troubleshooting techniques What people are saying: Excellent, excellent resource for practical guidance on how to troubleshoot a wide variety of problems on Red Hat Linux. I particularly enjoyed how the author made sure to provide solid background and practical examples. I have a lot of experience on Red Hat but still came away with some great practical tools to add to my toolkit. - Amazon Review
OPCFW_CODE
In the iOS examples there is small app called GameBuzzer. It’s a simple app that plays a different sound file depending on which of the two buttons you press. However, if you press either the buttons rapidly at least 250+ times the app will crash. There seems to be a major issue with the way Xojo is handling sounds with iOSsound. What other alternatives do we have to play sound files in rapid succession (think drum machine)? The demo project seems to be leaking memory, but very slowly. I clicked much more than 400 times on a button and activity monitor shows me about 80 MB memory usage. Long way until it will run out of memory. Do you have a crash log you could share? And if it proves to be a memory leak, this would be a case for a feedback report. So why does the forum indicate there have been two or more replies to this since the one Ulrich posted, yet they don’t appear? Thats for you too, Dave? Yes, something strange here. If Gavin had deleted his replies this should be shown at least HI Ulrich, thanks for responding. I do have a crash log for GameBuzzer (see below). I am working on a similar app and keep encountering crashes when a sound is played continuously. I tested GameBuzzer to see if would do the same, which it does. In my app I have put the sound playing function into a thread. The app no longer crashes, however, the sounds stop playing after a certain number of iterations. There is a counter to track the number of times the touch is invoked. It still increments after the sounds have stopped so I know the touch events are triggering. I have found that that stopping the sound prior to playing it will prevent the crash, however, that introduces an unwanted click sound. I had a bit of trouble posting that last message. Browser got caught up in a loop. Might explain the unseen massages. Funny: I write here and your reply appeared, although I received it a while ago as mail notification. Anyway: Did you close the view before the crash appeared? Looks to me like the delegate sends a touch notification to a view that doesnt exist anymore. Although that shouldn’t occur, the delegate should become Nil when the view is closed. But maybe Im wrong. Be it as it may: When you send the sound into endless repeats, the app crashes quite fast when its memory consumption goes over the device limits. I made an instruments leak check and will file a bug report. EDIT: Yes, from time to time the forum has hickups with certain threads. Heres the report: <https://xojo.com/issue/42106> Thanks Ulrich for looking into this! I appreciate your help. I’m guessing the sound issue (memory/buffer overrun) causes the View to close, and the delegate sending a touch notification to a closed View is causing the actual crash. Just a thought. In my app, although no crash occurs now, when the sounds stop working the background images in the app disappear too. I tried it with an AVAudioPlayer from iOSLib. Besides the minor problem that no sound can be heard (must have made a mistake somewhere), its the same: When I play the sound endlessly repeated, memory usage goes up fast and after some time I have a “silent crash” the program exits as if it were quit normally. Nothing in console. Im a bit clueless right now. I’ve run into this and found that if you have a sound file in your app and you repeatedly call .play directly from the sound file, that’s the cause of the memory leak. If you instead save an iOSSound object somewhere in memory and assign the sound file to it, and then call that sound’s .play, the crashing no longer occurs. The problem here is you can’t play the same sound multiple times simultaneously. So you’ll have to check if it’s playing, and then decide what to do. A secondary problem is that if you call iOSSound.Stop, it apparently takes a bit of processing power because it very briefly stalls the app. Wouldn’t matter much unless you’re making a game (which is usually why people use repeated sound effects like this…)
OPCFW_CODE
Code with Kristian • I make videos and write about software development and programming tools Our newest resource for learning CSS is here Last week, I shared what I've been working on recently — an up-to-date "codethrough" of freeCodeCamp's curriculum: walking through every exercise, lesson, and challenge, and adding my own spin on it where possible. I also shared the first video in the series, covering the basics of HTML, including tags, attributes, and basic input handling. Today, I'm releasing episode two: almost ninety minutes covering the basics of CSS, covering freeCodeCamp's "Basic CSS" module in their Responsive Web Design certification. There's a ton to dig into here, so much so that when I tried to paste the Table of Contents for the video into this newsletter, it broke my editor 💀 Instead, here's a quick glance at the important stuff you'll learn: - Using CSS classes and IDs to target HTML elements - Importing and working with custom Google Fonts - How to use padding and margin to space and position elements - The clockwise notation as a way to always remember how to correctly set padding and margin, in order — this was new to me! 🤯 - CSS colors, and how to set them using hex codes and the RGB format - Defining and using CSS variables for more re-usable CSS code - How to use media queries to style your HTML based on screen size, width, and more That's just a quick glance — there's about 45 exercises and lessons in total — and whether you're new to CSS, scared to jump in, or an expert who could use some quick exercises as a little morale boost, you should check out the video and let me know what you learn! Last week, I also announced Bytesized Resources, a new site where I'm collating all the resources, cheat sheets, and study guides for Bytesized's videos and courses. Since then, it's become the second most popular page on my website, which is a little surprising (I haven't promoted it much). So thanks for checking it out, if you have, and if you haven't yet, here's a great reason to: a fresh 1k word study guide, including practice questions and key concepts to dig into, to accompany today's Basic CSS video. As a member of the newsletter, you already have access—just visit the page below. I'm going to be writing more about CSS this month, so if you're interested in learning it, especially things like CSS Grid and Flexbox, keep an eye out 👀 If you're enjoying the freeCodeCamp curriculum videos and haven't subscribed to my YouTube, check out the channel here.
OPCFW_CODE
Sorting on the RMB column generates an error On the PDB page https://dev.pdb-dev.org/chaise/recordset/#99/PDB:entry@sort(RCT::desc::,RID) , when I click on the RMB column, a popup windows shows up with the following error: 409 Conflict The request conflicts with the state of the server. Requested column 2 does not exist in table entry. Can you try to run this with developer mode and see whether there is any useful hint in the console log? On Fri, Apr 1, 2022 at 5:37 PM svoinea @.***> wrote: On the PDB page @.***(RCT::desc::,RID) , when I click on the RMB column, a popup windows shows up with the following error: 409 Conflict The request conflicts with the state of the server. Requested column 2 does not exist in table entry. — Reply to this email directly, view it on GitHub https://github.com/informatics-isi-edu/chaise/issues/2167, or unsubscribe https://github.com/notifications/unsubscribe-auth/AADFIDB3CYV3PB3JZHPMETLVC6JELANCNFSM5SKDSFSA . You are receiving this because you are subscribed to this thread.Message ID: @.***> XHR HEAD https://dev.pdb-dev.org/ermrest/client_action [HTTP/2 400 Bad Request 438ms] XHR GET https://dev.pdb-dev.org/ermrest/catalog/99/attributegroup/M:=PDB:entry/F7:=left(Process_Status)=(Vocab:Process_Status:Name)/$M/F6:=left(RCB)=(public:ERMrest_Client:ID)/$M/F5:=left(RMB)=(public:ERMrest_Client:ID)/$M/F4:=left(Owner)=(public:Catalog_Group:ID)/$M/F3:=left(RMB)=(public:ERMrest_Client:ID)/$M/F2:=left(RCB)=(public:ERMrest_Client:ID)/$M/F1:=left(Workflow_Status)=(Vocab:Workflow_Status:Name)/$M/2,RID;M:=array_d(M:*),F7:=array_d(F7:*),F6:=array_d(F6:*),F5:=array_d(F5:*),F4:=array_d(F4:*),F3:=F3:Full_Name,F2:=F2:Full_Name,F1:=array_d(F1:*)@sort(2,RID)?limit=26 [HTTP/2 409 Conflict 357ms] This part of the URL is causing the error: ***@***.***(2,RID) this looks like a Chaise bug of some kind. It should be: @.***(RMB,RID) or something like: @.***(output1,RID) On 4/1/22 17:48, svoinea wrote: |XHR HEAD https://dev.pdb-dev.org/ermrest/client_action [HTTP/2 400 Bad Request 438ms] XHR GET @.***(2,RID)?limit=26 [HTTP/2 409 Conflict 357ms] | — Reply to this email directly, view it on GitHub https://github.com/informatics-isi-edu/chaise/issues/2167#issuecomment-1086433860, or unsubscribe https://github.com/notifications/unsubscribe-auth/AAIGPTC4XXDR4FPVNHF57C3VC6KNTANCNFSM5SKDSFSA. You are receiving this because you are subscribed to this thread.Message ID: @.***> This is happening because of the recent changes to optimize the scalar all-outbounds. The code is not correctly handling sort in these cases. While debugging it, I also found that I'm not correctly handling some of the other sort related scenarios for all-outbounds. I'll include more information and test cases for this in the PR that I'll open to fix this. I pushed the fixes for this bug as part of this PR in ermrestjs. @svoinea Can you please test it again and if it's fixed, please close the issue. Closing due to inactivity.
GITHUB_ARCHIVE
When SVGs starting looking weird on your website, it might be because their styles are being overwritten. For the past few months, I've been working on a fun project with a lot of animation and iconography. I used it as an excuse to dive into SVGs, and to attempt to use SVG elements as much as possible. It was going well, for awhile. Then, all of a sudden, some of the SVGs started behaving funny. Their outlines changed color. Part of them were missing. I figured it was clipping masks, and spent some time down that rabbit hole. By the way, if you can, you should expand you objects before exporting to SVG, and avoid attempting to export clipping masks. Its behavior doesn't seem to be consistent. Anyways, I had an AHA! moment when diving deeper into SVGs recently. By default, Adobe Illustrator tries to save space by abstracting the CSS directives and using classes to target them. If you look near the top of your SVG file, you might see something like this: <?xml version="1.0" encoding="utf-8"?> <!-- Generator: Adobe Illustrator 19.1.0, SVG Export Plug-In . SVG Version: 6.00 Build 0) --> <!DOCTYPE svg PUBLIC "-//W3C//DTD SVG 1.1//EN" "http://www.w3.org/Graphics/SVG/1.1/DTD/svg11.dtd"> <svg version="1.1" id="Layer_1" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" x="0px" y="0px" viewBox="1532 -160.8 3264.6 521.8" style="enable-background:new 1532 -160.8 3264.6 521.8;" xml:space="preserve"> <!-- Here's where the artwork goes ... --> And then you'd see .st0 used throughout the SVG. Well, Adobe Illustrator is so generic in its class names that if you build two separate SVG files in two separate Illustrator projects, it's likely you'll have a conflict. And because CSS cascades, whichever SVG is loaded last on the page will be the one that controls the styles for any classes it declares. That's going to lead to unintended results eventually. To get around this, Illustrator provides the option to put these styles inline. Yes, it may make the file a bit bigger, but it's worth it. When you are saving to SVG, just choose Style Attributes not Style Elements. Do that and then you'll see the styles spread throughout and avoid those nasty conflicts! Inlining critical CSS is a breeze for classic SSG sites built and deployed using Netlify. Here’s how it works.
OPCFW_CODE
MATLAB Parallel programming on GPUs, Cores and CPUs What you'll learn - Run Deep learning models in parallel on GPUs - Learn the difference between cores, CPUs and GPUs - Learn the concept of multi-threading in MATLAB with examples - Learn the concept of multi-workers in MATLAB with examples - measuring the performance of each parallel computing code - Learn how to convert your code to parallel computing to increase the performance - Run MATLAB files and functions in the background - Using GPUs to execute and Run MATLAB functions (Excellent performance) - MATLAB basics This course helps students, researchers, and anyone using the MATLAB decrease the execution time they take to execute a program All computers today and the laptops have multi-cores and GPUs. But not all users use the to run or execute the programs in parallel. The purpose of the course is to fill this gap. Is to teach you with practical examples how to use all resources on your computer and also how to monitor them. The course is divided into many sections: The first is an introduction to the hardware of the CPUs, cores, and GPUs. It is better to understand the basic components of these items to be able to get the best utilization when you use them. The second section is explaining two concepts. The multi-threading and the multi-workers. The first is a built-in mechanism to run some functions in parallel using many cores but we can't control the number of cores and the way that the functions execute. The second one (multi-workers) is used to run any function on multiple cores but here we can control the number of cores to optimize the program execution. Also, I explained some examples and measured the performance parameters to differentiate between the two concepts. The third section is the GPU section. In the section, I explained how to run any function on the GPUs to make use of the hundred or thousands of cores that the GPUs have. There are some notations to get the best results and I explained all of these notations with examples. Deep learning and neural networks: in this section, you will learn how to train any neural network in parallel on GPUs or multi-cores. And also how to run the training process in the background in order to be able to use MATLAB while it is running. Who this course is for: - Students, researchers and engineers I'm a Ph.D. student. I graduated from the computer and system engineering department in 2012 and I was ranked the second of my class.. Then, I worked as a teaching assistant for about 10 years in the same department. I also finished the master degree in 2019 in the field of aerospace and artificial intelligence. In the master, I built a model the detect the faults in the temperature control sub-system and then predicted a new values instead of the faulty ones. This model helping in discovering automatically some of the faults and the errors that are difficult to be discovered by human experts. I also worked as embedded system instructor for 5 years.
OPCFW_CODE
We also handle e-commerce to suit your needs! You don't have to have to worry about how parents pay out you and when you will get payment. Within the fifteenth of every month, we will pay you by way of PayPal for expert services rendered in the previous month. I'm wanting to include a delete button on Every row to ensure I'm able to delete a record in the event the button is pressed. I am new to PHP and MySQL and Stack Overflow. Giant bookselling enterprises can faucet into the industry and discover the best of the management process that decides responsibility and likewise makes certain right sales file. The Overlook modifier enables the UPDATE statement to carry on updating rows even though problems occurred. The rows that cause problems like duplicate-important conflicts are not current. Intolerance even results in discrimination, extreme misunderstanding and lack of perform-hours in an organisation. Thus cultural crash practically spoils an organisation's possess focus on. It damages the organisation's picture in public also to the authority. The apparent influence is visible through decreasing brand name loyalty, lower profits and even decreased inventory benefit. If you employ a many-table UPDATE assertion involving InnoDB tables for which you'll find foreign important constraints, the MySQL optimizer could system tables within an buy that differs from that of their guardian/little one relationship. six.Stored Procedure: supports deferred title resolution. Instance even though producing a stored treatment that uses desk named tabl1 and tabl2 and many others..but in fact not exists in database is permitted only in through development but runtime throws mistake Function wont support deferred identify resolution. In this instance, we're going to update the email of Mary Patterson to The brand new electronic mail email@example.com Later We'll talk about new strategies for defending and reacting to this danger. SQL injection is a problem for PLSQL triggers in addition to offers which can be exemplified in area 7.two A website that allows men and women arrive with each other to share stories, pics and video clips is a good java project plan to undertake. You can also make use of various plugins and impress your faculties. Enterprises have to have a technique by which they could control their chain places to eat. Use this System for taking care of your organization very well. This just one method usually takes while in the means undercount and employs them successfully for business enterprise administration. You may get a report with the QEP read more to get a Decide on query using the Demonstrate command in MySQL. This is an important Device to investigate your SQL queries and detect 9 yrs back The 2nd assignment in the subsequent statement sets col2 to The existing (updated) col1 price, not the initial col1 price. The end result is the fact col1 and col2 have the exact same value. This actions differs from typical SQL.
OPCFW_CODE
This code for applying a CBR filter to an ASF Writer doesn't work, why? I am trying to apply a CBR profile to an ASF Writer to reduce latency for a video/audio streaming. This is what I've done till now: Using Media Encoder, I generated a default CBR profile Saved the profile to a prx file Used this code to apply the profile to the ASF Writer: // Initialize a new Profile Manager IWMProfileManager* pIPM = 0; WMCreateProfileManager(&pIPM); FILE * file = fopen("lowprofile.prx", "rb"); fseek(file, 0, SEEK_END); long length = ftell(file); fseek(file, 0, SEEK_SET); wchar_t * buffer = new wchar_t[length]; size_t numRead = fread(buffer, sizeof(wchar_t), length, file); buffer[numRead] = NULL; fclose(file); IWMProfile* pProxProfile = 0; hr = pIPM->LoadProfileByData(buffer, &pProxProfile); // Set the profile for the writer CComQIPtr<IConfigAsfWriter2> pConfigWriter; pConfigWriter = m_pWMASFWritter; hr = pConfigWriter->SetParam(AM_CONFIGASFWRITER_PARAM_DONTCOMPRESS, TRUE, 0); hr = pConfigWriter->ConfigureFilterUsingProfile(pProxProfile); // THIS LINE FAILS, hr = E_FAIL hr=m_pGraph->AddFilter(m_pWMASFWritter,L"ASF Writter"); if(FAILED(hr)) return FALSE; //etc.. What's wrong with this code? I misunderstood something?? Unfortunately for me there's no code in the media format sdk nor available on the internet to help me applying such filter. I am trying to read carefully the documentation available on msdn, but it's surely not as clear as a good code sample. Can someone give me a hint please? That looks ok, I have code close to that working just fine - try and set the profile file to use standard audio/video codecs to see if the code works then just to pinpoint the problem and/or comment out the SetParam call. Also make sure you first add the AsfFileWriter to the graph, then configure it and finally connect the graph. You currently add it only after configuring it - again, that might work, it's just not the order I have running and definitely works. Thank you brokenglass, it was incredibly the adding first ASF Writer to the graph and THEN configuring with the right profile. Now works just fine. Here you are calculating the filesize in bytes: fseek(file, 0, SEEK_END); long length = ftell(file); fseek(file, 0, SEEK_SET); But then you treat it as the size in chacacters: wchar_t * buffer = new wchar_t[length]; size_t numRead = fread(buffer, sizeof(wchar_t), length, file); buffer[numRead] = NULL; You can use the following fix: long length = ftell(file) / sizeof(wchar_t);
STACK_EXCHANGE
Is it a vulnerability to display exception messages in an error page? Our web application has an error page that displays the absolute URL path and query of the page on which the error occurred, the date/time of the error, and the exception message. (We do not display the stack trace. That is an obvious vulnerability.) Is it a vulnerability to display exception messages in an error page? For maximum security, what should we display in an error page? What should we not display? EDIT: My hunch is that it is a vulnerability, but I want to hear an expert opinion. I'd think of it the other way 'round: what potential benefit would showing the error message to the user have? @Simon I think none. Depending on the concrete situation it may be a good idea to have two modes of the application: A development mode which shows all the details and a production mode which display "Ooops, we are sorry an error occurred. Our support team has been notified of this error and will take appropriate actions to fix it." page. Don't confuse users with error messages they cannot do anything about. @Hendrik I prefer using an internal log file, or email if the app is high priority, while displaying a generic error. At best, give them an ID they can correlate to your log. That way, developers get a lot to work on, but attackers get nothing. @Soumya92 Thanks. > "At best, give them an ID they can correlate to your log." -- we are changing our page to do that. It will provide a generic error message, an apology for the inconvenience, the current date, and a number for us to be able to correlate their report to the log entry. @Soumya92, @Matthew - makes sense. The web2py web framework implements that model automatically (you get it by default, without any extra work). It is a nice model. In some cases, error messages induce vulnerabilities. In the area of cryptography, the padding oracle attack can recover a secret key by sending altered encrypted messages, and using an "oracle" which distinguishes between two kinds of failure ("padding was wrong" vs "there was a valid padding, but the resulting data was gibberish"). Detailed error messages have a strong potential for being such an oracle. In a more general, conceptual view, attackers will try to uncover weaknesses by prodding at the system and observing what happens, and more data can only help them. The exception message is not qualitatively distinct from the stack trace, in that respect; so if the stack trace is an "obvious" vulnerability, so must be the exception message too. Path disclosure / path leakage can also be used to fingerprint apps and their underlying servers, in addition to providing enumeration of attack surface. Nearly every web application that utilizes dynamic pages will leak local web server paths, sometimes which contain usernames and obviously vulnerable directories. They can also be leveraged during file inclusion or script inclusion attacks. Special care must be made for PHP/CTP, ASP/ASPX, and JSP/JSPX files in order to prevent path disclosure messages from being easily turned into a file inclusion attack, which can (for example) read local files. Can also turn into XSS attacks... I wouldn't necessarily call it a vulnerability in itself. The exception alone probably isn't enough to breach the security of your server. But it is a weakness that might reveal information that helps attackers, or gives them a foot in the door. For instance, it might reveal information about the code or software version. It might make it easier for attackers to try attacks and learn about how to tweak them to make them work. Also, sometimes attackers are able to put together several weaknesses and combine them to build a full exploit. No one weakness on its own is a vulnerability, i.e., no one weakness on its own would allow attackers to breach the security of your system. But when attackers combine them in a clever way, sometimes they provide enough stepping stones to allow a full-fledged security breach. For these reasons, I'd be inclined to be cautious about these kinds of "weaknesses". It seems safer to try to avoid them, if you can.
STACK_EXCHANGE
How to Avoid the Unified Communications Pilot Trap The Pilot Trap – When Good Intentions Aren’t Enough Ask any enterprise customer, reseller, or system integrator what’s the biggest challenge when it comes to deploying a unified communications (UC) solution, most of them will say multivendor interoperability. UC is not a single product, but a solution comprised of many different elements and components from various vendors, which at times may not play nice with each other. Choosing the proper hardware and device elements, trying to get all the moving pieces to work together, while managing and maintaining the different vendor relationships, can be a nightmare – if not properly managed. It’s not uncommon to find very capable CIOs and IT Directors, leap into the UC pool somewhat unprepared. Often it starts as a simple trial of Microsoft Lync, deployed for Instant Messaging and Presence (IM&P) only on the existing network and with existing workstations. Productivity improves with IM&P and the pilot expands – more people and more features. The true value of UC comes with voice-enabling Microsoft Lync, bringing voice features to the workstation with headsets or IP Phones. The IT decision makers scour the certification lists on Microsoft’s Lync certification site TechNet – using word-of-mouth and social media to help them choose the voice-enablement devices. As the voice trial expands, issues start to arise with features, compatibility and voice quality – often causing many organizations to put their deployment plans on hold, sometimes indefinitely. The result is what AudioCodes calls the “Pilot Trap” – when companies get to the pilot stage of a UC deployment, but get stuck because they didn’t do their homework upfront to really understand what is needed to make a successful UC deployment. Multi-Vendor Environments – The Importance of Playing Nice With Others When presenting my UC Market Overview at conferences, I generally include a slide titled “No One Vendor Does It All,” which shows the various elements of a UC solution, and the many best-of-breed vendors and products in the market. There are some “all-in-one” packages for SMBs, but for mid-size and enterprise-level UC deployments, best-of-breed multivendor solutions are generally the norm. This is especially true in a Microsoft Lync environment, as Microsoft does not provide its own desk phones, contact center application, SBCs, gateways, etc., and works with partners to provide these capabilities. Microsoft Lync is the most widely deployed product for IM and presence, but the majority of Lync customers still use their existing IP PBX/PBX for call control and voice capabilities, which means that a majority of enterprises deploying UC will need to ensure interoperability with their existing switches and networks. While multi-vendor solutions will continue to be the norm, there are some key challenges for both enterprises and resellers: Vendor management – working with multiple vendors becomes much more difficult as the number of vendors grows. In addition, it becomes harder for channel partners to leverage their volume discounts and/or programs; Training – getting fully trained in the various types of equipment with different interfaces and management systems is very time consuming; Maintaining current certifications can also take a lot of time; Getting technical support from the appropriate vendor generally leads to “finger pointing” between vendors trying to resolve issues. Channel partners selling Microsoft Lync solutions face even more challenges, since Lync solutions by design require multiple vendors’ products. AudioCodes, which provides SBCs, gateways, and other products for a Lync environment, recognized that for its channel partners to be more successful, they need something to take away the complexity of multivendor environments, while offering a solution that is easier to deploy and manage. AudioCodes introduced One Voice for Microsoft Lync, which lets reseller partners build voice-enabled Lync-based solutions using AudioCodes gateways, SBAs, and AudioCodes IP phones. AudioCodes One Voice for Microsoft Lync The company describes AudioCodes One Voice for Microsoft Lync as “a comprehensive program that encompasses the major network elements and application solutions required to successfully implement voice communications with Microsoft Lync.” One Voice includes not just the hardware pieces, but also professional services and customer support packages offered through AudioCodes reseller partners, helping to reduce the time it takes to deploy a UC solution. The network elements of One Voice include a variety of AudioCodes products, including a new line of Microsoft Lync-certified IP phones, Mediant Enhanced Media Gateways, Mediant Enterprise Session Border Controllers (E-SBCs), SmartTAP Recording, and AudioCodes Session Experience Manager (SEM), which are all tied together with professional services, including a network assessment, planning, design, implementation and optimization services, as well as support services provided through AudioCodes partners. According to Alan Percy, AudioCodes Senior Director of Strategic Marketing, NA, by working with AudioCodes IP Phones, gateways, E-SBCs, SBAs, and application software for Lync implementations, end-customers can reduce the number of vendors that need to be managed, making Lync deployments easier for both end user customers and channel partners. Percy states that working with AudioCodes’ common and unified products reduces the time and effort to gain and maintain technical competence, and channel partners don’t have to learn the various user interfaces, technical details, terminology and diagnostic procedures for a number of different vendors. The AudioCodes phones, gateways, etc., all have common management interfaces and protocols, using essentially the same management system across all the products. By using a common software core of the various AudioCodes products, interoperability issues are reduced, especially with management and QoE systems that would otherwise have to integrate to numerous different vendors. Also, by working with one vendor for all the hardware components in a Lync deployment, third-party conflicts, incompatibility issues, and vendor finger pointing are reduced. Avoiding the Pilot Trap By providing access to not only a set of integrated network elements needed for a Lync deployment, but services such as network assessment, network planning, optimization services, etc., AudioCodes hopes that customers will be better prepared before doing Lync deployments, increasing their chances of success. Reducing the challenges of multi-vendor environments, particularly interoperability issues, is a huge step toward helping to grow the UC market. UC is still not simple nor plug-and-play, but by reducing the number of vendors and network elements, especially for a Lync solution, AudioCodes is helping partners and customers reduce the complexity, while increasing the likelihood of moving from UC pilots to full-blown successful enterprise-wide deployments.
OPCFW_CODE
My name is Peter Turányi and I was born in 1972. Then came beautiful childhood time but there was not any reference on Spectrum. I saw the computer on excursion when I was in last school form for the first time. It was the "famous" computer PMD-85 (made in Czechoslovakia). I played Penetrator, Frogs, Bombarder and I was lost. I joined a computer club and spent amazing time with friends in a leisure time center. After my basic school in 1987 I started grammar school in Povaská Bystrica. By chance my classmate Robert ustek had a Sinclair ZX Spectrum - Delta. Thank to him I could familiarize with it and played Ikari warriors, Commando and other games. From this moment I wanted Spectrum! I asked my father to get it - and best 128. Unfortunately he brought me the Commodore 64. After I had unpacked it I plugged it and tried to communicate with it. But when I got something like "Error 04" after I had typed and run primitive three line long program, I packed it and said father "Thanks no more". So I spent my free time in computer club, formed in building of "National front" (political organization). There were not only PMD computers but also three Atari 800XE. So by programming on PMD we had good craze with River raid and Karateka. Situation changed in second form when Robert decided to sell his Delta. I did not hesitate any more. I "smashed my pig" (saving box) and asked father for help again. So Delta was moved to me. I started learning secrets of Sinclair basic and looking at machine code. I had not any problems with programming at school so the lessons bored me. My teacher of informatics was very angry with it and finally she asked me what I would have liked to do. "Machine code" was the answer without hesitating. She gave me few hours after lessons and lent me a book about machine code of 8080 processor. Thanks to it I came to instruction set of Z80 processor. First time I did not understand anything. I began investigate programms of other programmers and did experiments. Step by step I got it. So in 1991 I can write my first demo Music supercode 1. Before that I had already made some programs in basic. The computer club moved and under its wings rose Sinclair-club and I was established as a boss. Since that time the dust has been falling on Atari in corner. That year I started studying at Slovak technical university, faculty of electrotechnics. But not informatics. I had to be satisfied with the material engineering. In spite of it I met many new people and Perfekt company which was later renamed to Perpetum. Thanks to some colleague I got cheap D40 (diskdrive made in Slovakia) at the end of second term. I remember my endeavor to buy some diskettes before I started returning home for holidays to try it. I spent holidays saving my software collection at diskettes. After that I could save anything on diskettes and I made money by converting multilevel games. I spent this money to buy another floppy drive D80. My hardware park was extended with music interface Melodik with AY-chip (made in Slovakia) and I run demos again and again with great amazement. I also longed to do it myself. First I trained myself with AY-driver and after I had known enough to manage Sound tracker I created my first AY demo. My programmer's capability rose with the help of a book "Assembler & ZX Spectrum" that extended my knowledge I had gained by self-study. D40/80 opened the door to world of other computer world data and university opened the door to the world of internet. I started visiting IRC channel #Z80 and downloaded various data from internet which I transferred them to my Spectrum. There were mainly texts so I created convertor from T602 (PC czechoslovak text editor) to Desktop (Spectrum text editor). After my friend Rado Benda (Atari 1040 ST user) had showed me quantum of graphics I started downloading pictures too. I was especially interested in animations on ST. I used some of them in my next demos. With a little of the graphics I improved game by T.S.S. - "Mind crosswar". But first of all I had to create new program I named it ST-linker to be able to work out these images. Later I improved this program to such an extension that it became fully new programm I named it Di-vision. Well when the Sample tracker came I needed some samples and because there were many samples for PC I made Sample convertor. I'm not gambler. I can say I am even fastidious. All the time I have been having Spectrum I have never had more than 20 cassettes which I converted to almost 40 diskettes. But after some time I lost view and I needed to make some list. I felt so ingenious that I wrote Neolite. After my gold mine ftp://nvg.unit.no finding I had no more problems of getting programms again. But many programms were "dirty" snaps and I had not PC. So I made snap launcher for D80 and I originally named it ZX emulator. With a respect to real emulators I first used Luntner's Z80 but today I use only RealSpectrum. My Spectrum career ended with conversion of games Minesweeper and Solitaire. I finished them before I went to my military service. I started writing National flags in assembler code but its realisation was "broken" because I got married. When I moved off from my parents I simultaneously lost TV set needed as a screen for my Spectrum so almost all my activities on Spectrum stopped. In 2002 I bought PC which I still have and at least I could return to Spectrum by help of emulator. I have great planes. Besides finishing National flags I have some ideas for original games inspirated by PC games (Alley cat), TV contests (Wheel of fortune - but I probably missed the train, I think) or board games (Turf and stake). I am short of time and the rest of it I spend for converting books on ZX into electronic format. We will see what will come true. I have also other hobbies. I like reading books (mostly SCIFI) and magazines, Jung psychology and tourism. Now you can contact me at conference, or e-mail address softhousesk(a)gmail(o)com.
OPCFW_CODE
What services are offered? Results reproduction before submission You created a computer-based workflow as part of your latest paper submission? Great! Now it is time to make sure that your computational steps can be understood and recreated by a third party before you submit the workflow for peer review or publication of a preprint. Such an independent confirmation increases trust in your work. During results reproduction, an R2S2 team member will evaluate the data and code provided by you (file organisation, documentation, code understandability, etc.) and follow the provided instructions to execute the workflow. We will then report on our results, compared to the ones provided by you, and give general feedback on what you may improve to further increase accessibility, understandability, and reusability for third parties. For more on the general feedback, see "Research compendium creation" below. - We will not replicate your study (collect new data, use same analysis/code), make it robust (same data, different code/analysis), or generalise your work (different data, different analysis) (see The Turing Way’s definitions [en]). - The reproduction does not cover the content of your paper, such as checking your research methodology, questioning your conclusions/assumptions, or making direct changes to the work. - To be able to reproduce your workflow, your workflow or scripts must be based on software/programming language that is available to us, ideally free and open source software. Exemptions may be made for software available to employees of Münster University or where you can provide access for the R2S2 team member./span> - No High-Performance-Computing (HPC) or Big Data: the R2S2 team does not have sufficient resources to reproduce very complex and extensive computations; please do get in touch if you are unsure if your data is "too big" or if you want to make a case why we should consider your workflow. - Duration of computations should be under 24 hours on a reasonably sized desktop computer – consider creating a synthetic dataset or subset if this is the case; please do get in touch if you are unsure if your computation takes too long or if you want to make a case why we should consider your workflow./span> Project set-up consultation and computing environment management You can make your life and the life of collaborators (e.g., future you or the next PhD student) much easier if you consider reproducibility from the start of a research project. If there are no experts on computational reproducibility or open science at your lab/institute/working group, we are happy to have a conversation with you about your ideas and plans. Let’s try to look ahead into the future and see how you can not only avoid to shoot yourself in the foot, but be very efficient in your day-to-day work habits and score extra points with reviewers and become a reproducible research leader in your community of practice. You already write R/Python packages and know about notebooks, virtual environments, Binder, version pinning, containers, and virtual machines? Get back to us for a results reproduction! Research compendium creation A research compendium accompanies, enhances, or is itself a scientific publication providing data, code, and documentation of a scientific workflow (cf. research-compendium.science [en] for more literature). It provides all materials for others to reproduce, re-use and extend a particular dataset or method. The term has been used in various disciplines to describe the desirable "package" of bits and pieces that make up the real scholarship, for which the article or papers is the "mere advertising" [en]. If you want to practice reproducible research and open science based on computers, a research compendium is a great approach to package computer-based methods for yourself and for sharing them with others. As a result of your consultation, you create an archivable package with all information in one place which is ready to be published in a repository and receive a persistent identifier (e.g., a DOI). The compendium is created based on your current material. During the consultation, R2S2 team members will provide suggested code edits to facilitate reproduction and to ensures the transparency and reproducibility of your research, a high ease of access to data and code for others, and independent understandability by others. A research compendium creation may include: - research data publication - research software publication - computing environment definition and publication - citable data - citable software If your research compendium includes data, software, a containerised computing environment, and the possibility for users to manipulate parts of your workflow, then we would like to explore with you the possibility to create an Executable Research Compendium (ERC). The ERC is o2r’s own concept of the "research article of the future". You can learn about it in this publication about ERC [en] and see it in action in this reader’s perspective video [en]). Even if you decide not to publish/submit the research compendium with the article (if journal policies permit) nor after publication of the article, you have the ability to promptly and confidently provide reproduction materials if reviewers or future readers ask for them. - We may take a look at the paper, but only in so far as how it connects to the other building blocks of your compendium – the same limitations of results reproductions (see above) apply. Of course, we will not share anything before your research is published or your explicit confirmation. - It may be more suitable to deposit research data and research software independently, or additionally, in different repositories – we try to find the best solution for your case together with you.
OPCFW_CODE
Does Laravel Lumen runs on embedded machines? I need to develop a web interface for an embedded system and like to know if Laravel´s Lumen runs on tight memory/disk/Cpus. I don´t want to build plain HTML and I´m looking for a "serious and modern" framework to do so. My current configuration is Vortex 800MHz processor/256Mb RAM and 1G disk. I´m running Sqlite3 as my database and some runtime C++ processes that takes about 20% of CPU. I[m running Ubuntu 12 on these units. Lumen is going to be used to build the management interface, with no more than a few connections a day (very low usage). Does anybody has experience running Lumen on that configuration that can be shared with me ? Thanks for helping. Is 256 KB right? I'm guessing you meant MB because I doubt you could run Ubuntu in 256KB, let alone a web server. How much free RAM do you have? Yes, 256Mb. 256Kb today not even if you want to... Thanks for the correction... I don´t know for now how much memory I have free as I haven´t installed the unit yet. I´m developing in a VM with 2Mb now. I will be installing a minimum Ubuntu version with no user interface. My stack will be minimal Ubuntu, Sqlite3, Apache2, Php and a framework (Lavarel Lumen?) If you need something small and super fast you may want to check out phalcon. Although it's not as feature-rich as laravel, and personally I don't like it as much, it sounds about right for your needs. Bad news, I don't think you can do it. RAM My basic LAMP stack with Laravel runs in ~200mb of RAM, so Lumen should run fine. However, updating or installing Laravel via composer can use up to 512mb of RAM. I know Lumen is Laravel's little brother, so you might not need as much, but you'll definitely need some. You could get around this by using a swap file, but your swap file would need to be a least 250mb, if not more. Unfortunately, you just don't have the disk space for a swap file any larger than that. Disk Space I'm going to assume your 1G of disk space is actually 953mb because of base-10 to base-2 conversion. According to the docs, Ubuntu 12 requires ~500mb for a bare minimum install plus 500mb for the rest of the normal packages. I'll assume you can get away with the bare minimum of 500mb, mostly because I don't know what the bare minimum includes. You might need more. Apache 2.2 requires 50mb during install, but only 10mb after that. My clean install of Lumen is 28mb. Sqlite is ~1mb. I couldn't find a reference, but PHP is probably another 10mb. So being extremely conservative, Ubuntu takes 500mb, Swap is 250mb, Apache is 10mb, Lumen is 28mb, Sqlite is 1mb, and PHP is 10mb for a total of 799mb. That leaves you with 154mb for extra packages required by those things, and various file downloads and expansions that occur during install. I'm sure I'm leaving stuff out, and I'm sure you'll have to clear the apt cache after every install by running sudo apt-get clean. You might also need to install the biggest stuff first and not create the swap file until you absolutely need it. Overall, I think your best option is to spin up a VM with your hardware specs and try it out. Good luck, and report back with results. Building the VM is on my plans.. Thanks for detailed info... I think even if I can install it all it will be too dense for such a small hardware... I need some performance also, so I have really to think about it all...
STACK_EXCHANGE
During this year’s EvoLang conference, a book was launched with perspectives on the last conference. The past, present and future of language evolution research (McCrohon, Thompson, Verhoef & Yamauchi, 2014) is a volume of student responses to EvoLang9 in Kyoto. It includes basic reviews and criticism, synthesis of current approaches, experiments and sociological perspectives. It makes for interesting reading. What comes across in all the papers is a drive for collaboration and integration of fields and ideas, as the diagram from the contribution by Barcceló-Coblijn and Martin shows. These are serious attempts to understand what has been learned so far and find new perspectives that incorporate empirical evidence. Many papers see neuroscientific evidence as a key to expanding many areas of research. An electronic version of the book can be downloaded at the Evolang Website (although at the time of writing, the website was down). Below, I review the chapters to give a flavour of the book. Junko Kanero reviews vocal and gestural theories of language origins, and asks whether a vocalisation-only theory is testable. They suggest that recent advances in neuroscience will be able to test whether gestures and spoken language share common neural substrates. Cory Cuthebertson compares claims of abrupt versus gradual evolution of the language faculty in the proceedings of the first EvoLang conference in 1996 and the 2012 conference. They observe a broadening of approaches between the two, but theories have become less unified and the discussion of sudden versus gradual evolution has become much less explicit. Only two papers assumed an abrupt evolution of language. Xiaoxia Sun and Uwe Seifert discuss the origin of language and music, and whether they evolved separately, in sequence or together. They suggest that neuroscientific approach should also be able to differentiate between whether there were shared cognitive resources that served both language and music, but then split. They review existing neuroscience evidence and point out a gap in the literature: there should be more experiments with non-Western languages and musical styles. Tessa Verhoef reviews laboratory experiments of cultural evolution. Literature from computational neuroscience is used to suggest an integrating framework for studying cultural evolution and the brain by discussing compression and efficient coding. Seán Roberts and Justin Quillinan challenge the keynote lecture by Matsuzawa on working memories in chimpanzees. In a visual memory game, the chimpanzee Ayumu outperformed a small sample of humans. Roberts and Quillinan replicated the human study online and found participants who performed as well as Ayumu. Saccade distance and the fidelity of the ordering of numerals across the screen were predictors of performance. While no support was found for Hurfrod’s suggestion of a trade-off between auditory working memory and visual working memory, they suggest this could be tested in more detail in the future. Rie Asano contrasts studies which compare human cognition to animal cognition with studies which compare different human cognitive systems (e.g. language and music). They argue that these address two separate domains –human uniqueness and language uniqueness, and this distinction should be incorporated into Hauser, Chomsky and Fitch’s distinction between FLN and FLB. Anne van der Kant reviews inter-species comparative work and suggests how non-invasive neuro-imaging can be applied to birds and primates to perform longitudinal studies of vocal learning. Caroline Green uses results reported at the 2012 EvoLang to hypothesise that FOXP2 mediates the relationship between finer auditory sensitivity and finer motor control of vocal production to allow vocal learning. They suggest an experiment that could elucidate this further. Michael Pleyer discusses views of language as a complex adaptive system, and notes that the ontogenetic, glossogenetic and phylogenetic elements are complex adaptive systems in their own right. They suggest that cognitive-functional and usage-based approaches can help synthesise these timescales by emphasising properties of language use and social factors. Marisa Delz and Johannes Wahle review studies of language evolution in networks from the conference. They synthesise the approaches under a single concept of embedded networks that they call Multiple-Network-Population. They suggest that it should be possible to compare and contrast studies that find social networks to be important for language evolution under a unified framework. Mauricio Martins, Archishman Raju and Andrea Ravignani review critical issues in quantitative modelling approaches to language evolution. They call for more rigorous scrutiny of assumptions in models and better validation against real data. Researchers in language evolution should learn from similar techniques in other fields. They suggest that quantitative modelling should be an intermediate step between theory and experiment. Lluís Barceló-Coblijn and Txuss Martin argue that approaches from evolutionary biology should be used to identify the unique aspects of language and study its evolution. Unlike the recent paper by Hauser et al., they see the best approach as involving the integration of many other disciplines such as anthropology, psychology, philosophy, neuroscience and modelling. Dillon Niederhut argues that neuroscience should be more central to the field. They review how methods in neuroscience can generate, constrain and test hypotheses of language evolution. Christian Bentz discusses how theories from the EvoLang conference could be unified into a cohesive picture. They present the concept of a ‘language helix’ which captures the idea of language as a complex adaptive system (similar to Pleyer) where regularities in language are constantly being inferred, ‘unfolded’ and inferred again at different levels and timescales. Richard Littauer, Seán Roberts, James Winters, Rachael Bailes, Michael Pleyer and Hannah Little provide a sociological review of the conference, and note the increasing role of technology. Participation of EvoLang 9 extended outside of Kyoto via the internet through twitter and blogging, and an increasing number of people were connected to the internet during talks, making it possible to fact-check claims immediately. They also discuss the role of blogging and academic publishing for the future of the field. McCrohon, L., Thompson, B., Verhoef, T. and Yamauchi, H. (2014) The Past, Present and Future of Language Evolution Research: Student volume following the 9th International Conference on the Evolution of Language. Tokyo: EvoLang9 Organising Committee.
OPCFW_CODE
Unable to find library t:\>vcpkg list | rg zlib zlib:x64-windows-static 1.2.11 A compression library zlib:x86-windows-static 1.2.11 A compression library t:\>vcpkg_cli probe zlib Failed: Could not find library in vcpkg tree Unable to get OUT_DIR t:\>vcpkg_cli -t x86-windows-static probe zlib Failed: this vcpkg build helper can only find libraries built for the MSVC ABI. t:\>vcpkg_cli -t x86-windows probe zlib Failed: this vcpkg build helper can only find libraries built for the MSVC ABI. (If you're here because of an nmake error in libz-sys on nightly, maybe this is interesting: https://github.com/alexcrichton/libz-sys/pull/18) If you are looking at the git version of libz-sys, the vcpkg changes have not been published on crates.io yet. Sorry, the messages are pretty unclear. If you are using the crates.io version of vcpkg_cli, I probably need to make a new release, but here is how it works : PS C:\Users\jim\src\rust\vcpkg-rs\vcpkg_cli> cargo run -- probe zlib -l static Finished dev [unoptimized + debuginfo] target(s) in 0.0 secs Running `C:\Users\jim\src\rust\vcpkg-rs\target\debug\vcpkg_cli.exe probe zlib -l static` Found library zlib Include paths: C:\Users\jim\src\vcpkg\installed\x64-windows-static\include Library paths: C:\Users\jim\src\vcpkg\installed\x64-windows-static\lib Runtime Library paths: Cargo metadata: cargo:rustc-link-search=native=C:\Users\jim\src\vcpkg\installed\x64-windows-static\lib cargo:rustc-link-lib=static=zlib Found libs: C:\Users\jim\src\vcpkg\installed\x64-windows-static\lib\zlib.lib PS C:\Users\jim\src\rust\vcpkg-rs\vcpkg_cli> cargo run -- -t i686-pc-windows-msvc probe zlib -l static Finished dev [unoptimized + debuginfo] target(s) in 0.0 secs Running `C:\Users\jim\src\rust\vcpkg-rs\target\debug\vcpkg_cli.exe -t i686-pc-windows-msvc probe zlib -l static` Found library zlib Include paths: C:\Users\jim\src\vcpkg\installed\x86-windows-static\include Library paths: C:\Users\jim\src\vcpkg\installed\x86-windows-static\lib Runtime Library paths: Cargo metadata: cargo:rustc-link-search=native=C:\Users\jim\src\vcpkg\installed\x86-windows-static\lib cargo:rustc-link-lib=static=zlib Found libs: C:\Users\jim\src\vcpkg\installed\x86-windows-static\lib\zlib.lib I just played with vcpkg_cli version from crates.io, yes. If you wouldn't mind, can you tell me a little about your configuration? Are you actually intending to pick up libraries from Vcpkg at this point or is it accidental? (It finds the tree if you have VCPKG_ROOT set or you have run vcpkg integrate install.) What version of the compiler are you using and do you have any RUSTFLAGS set? Do you have your vcpkg configuration available in a public repository that I could take a look at? Yes, I have VCPKG_ROOT and I do not use vcpkg integration. I have VS 2015 and 2017 (guess, Rust uses the first one by default). I have no RUSTFLAGS. As my vcpkg - it is a default "nightly" vcpkg with the only exception: I use my own curl port, decause the only I need - it is HTTP(s). You have to be on nightly Rust, right? Otherwise you shouldn't have seen the nmake breakage... Is the port called "libcurl" or did you give it a different name? Are building a mixed C/C++/Rust/something application, or is it just Rust? (If it's just Rust, rust-curl's built in curl is I think configured exactly as yours is, but I believe it will be statically linked so it would probably be easier just to use that.) If you are building a mix of Rust and something else and you want it to all use the same curl then using Vcpkg/vcpkg-rs should be a pretty good solution. Thanks a lot for the information on this. Otherwise you shouldn't have seen the nmake breakage... Why? cargo +stable build for curl-sys results the same error as +nightly. Is the port called "libcurl" No, it is the same as original curl, just stored in different git branch. or is it just Rust? It is essentially cargo install-update --all, it fails on cargo-update command (while building git2, which requires curl). By the way, back to the subject issue: $ cargo install --list | rg vcpkg vcpkg_cli v0.2.1: vcpkg_cli.exe $ vcpkg list | rg zlib zlib:x64-windows-static 1.2.11 A compression library zlib:x86-windows 1.2.11 A compression library zlib:x86-windows-static 1.2.11 A compression library $ vcpkg_cli probe zlib Failed: Could not find library in vcpkg tree VCPKG_ROOT\installed\x64-windows\lib\zlib.lib $ vcpkg_cli -t x86-windows-static probe zlib Failed: this vcpkg build helper can only find libraries built for the MSVC ABI. $ vcpkg_cli -t x86-windows probe zlib Failed: this vcpkg build helper can only find libraries built for the MSVC ABI. Oh yeah, sorry - I pushed that release for you but I forgot to mention it. The -t is a Rust target triple. Try: vcpkg_cli probe zlib -l static vcpkg_cli -t i686-pc-windows-msvc probe zlib -l static and it should find both of those. There is currently no way to make vcpkg_cli call .lib_names("zlib", "zlib1") on the Config so even though the libz-sys build script can find the dynamic variant there is no way to verify that from the command line. It claims it's working with package names, but it is actually using that as the stem and adding .dll or .lib to find the libraries right now. I'm reworking it to actually work with packages. If you have a moment, I'd be interested to see what you get from git clone https://github.com/mcgoo/vcpkg-rs cd vcpkg-rs\vcpkg_cli git checkout deps cargo run -- deps curl I get: [...] ============================= required port ("curl", Port { dlls: ["libcurl.dll"], libs: ["libcurl_imp.lib"], deps: ["zlib", "openssl", "libssh2"] }) required port ("libssh2", Port { dlls: ["libssh2.dll"], libs: ["libssh2.lib"], deps: ["zlib", "openssl"] }) required port ("openssl", Port { dlls: ["libeay32.dll", "ssleay32.dll"], libs: ["libeay32.lib", "ssleay32.lib"], deps: [] }) required port ("zlib", Port { dlls: ["zlib1.dll"], libs: ["zlib.lib"], deps: [] }) Failed: Could not find library in vcpkg tree C:\Users\jim\src\vcpkg\installed\x64-windows\lib\curl.lib The error at the end is because I'm not done. Hopefully it will parse your installation correctly and you will get something like: required port ("curl", Port { dlls: ["libcurl.dll"], libs: ["libcurl_imp.lib"], deps: [] }) Thanks for the walkthrough on what you are trying to do, by the way. I looked up the cargo-update tool - that looks pretty neat. The original intent of this vcpkg-rs work was to link statically by default, but that turned out not to be possible. Unfortunately, because you are trying to cargo install things, and using vcpkg-rs favors dynamic linking, even if the build completes, the resulting binary will not run because it won't be able to find curl.dll. It is possible to do static builds if you set RUSTFLAGS=-Ctarget-feature=+crt-static and compile with nightly (the initial intent of this work was to make it easier to install diesel_cli). It seems like it would be easier to just disable vcpkg-rs with NO_VCPKG=1 in your environment. Then, assuming nmake/jobserver issue is now fixed you should be good to go. This is a disapointing result for me - I was hoping to have it enabled by default, but I think it will have to require a variable to be set to enable it. Sorry again for the breakage. Thanks for detailed explanation. Yes, now it finds a static zlib (but not zlib1.dll, as you warned above). Also it is a bit confusing to have "static", but "dll" instead of "dynamic": error: 'dynamic' isn't a valid value for '--linkage ' [values: dll, static] As for lib_names — I think, it is possible to have it in command line tool as well. Since vcpkg_cli is intended for vcpkg feature exploration and testing, it would be nice to have all features covered by command line. Or may be no. And the last thing: I am worrying about NO_VCPKG, would not it be confusing with official VCPKG_* variables? May be it is better to use RUST_VCPKG_* format? Anyway, thanks for support. cargo install -f cargo-update now results in a working binary for me. You should see it pick up vcpkg 0.2.2. Thanks for the comments about vcpkg_cli - I'll try to remove some of those surprises shortly. I changed the build help specific environment variables to all have a VCPKGRS_ prefix to try and keep out of everyone elses' namespace. Could you let me know if this unbreaks your setup?
GITHUB_ARCHIVE
Alternative to NGINX That Makes Your Life Easier: Apache APISIX API is an essential part of the digital world, and the API gateway bears the heavy responsibility of protecting its security and stability as the first gate of API. Many software engineers and teams used NGINX before but were annoyed by the bottlenecks and restrictions of NGINX. Is there a better alternative? A great alternative to NGINX is Apache APISIX, then what is Apache APISIX? What's Apache APISIX Apache APISIX is a high-performance, dynamic, full-traffic API gateway. The four notable feature of Apache APISIX: - The Apache property: APISIX is open source and is also a top-level project of the Apache Software Foundation. It is impossible to modify the open source license midway, as ElasticSearch and MongoDB did. Because APISIX belongs to the Apache Software Foundation (ASF), it is no longer a company or an individual's project. - High performance: APISIX is developed based on OpenResty (an NGINX distribution), so APISIX also inherits the power of NGINX itself. - Dynamic: If NGINX provides a robust underlying architecture, OpenResty adds more possibilities to NGINX by allowing the use of Lua to control the behavior of NGINX. And APISIX, with its flexibility and strong connection to other systems, becomes a fully dynamic API gateway. - Real-time: APISIX stores the configuration in etcd. This advantage is that configuration changes can be monitored and obtained in real-time through the etcd RESTful API, because etcd itself is a distributed KV database, which is also used by Kubernetes to store the configuration. NGINX uses static files to store the configuration, and if the configuration is updated, the time to reload NGINX will be very long. Apache APISIX vs NGINX Since we mentioned that APISIX is developed based on NGINX, you may ask: what is the difference between APISIX and NGINX? The first thing to note is that the comparison between APISIX and NGINX is not an apple-to-apple comparison. After all, NGINX is a lightweight proxy, and APISIX focuses on making the product's functions more mature. Also, it has more features because APISIX is redeveloped based on NGINX. If you are using NGINX as a gateway, the following two advantages of APISIX will give you a more profound feeling. More flexible configuration Compared with NGINX's configuration files, APISIX provides various ways to configure. For example: - You can configure APISIX via HTTP API. The configuration will be written to etcd, and then synchronized to each node by etcd; - You can do it with APISIX Dashboard. In APISIX Dashboard, the graphical visualization will help you configure more clearly; - If you don't want to use stateful storage methods such as etcd, you can use static files like K8s. APISIX also supports getting individual configurations from local YAML files; - If you deploy APISIX in K8s, you can use the APISIX Ingress Controller to obtain the configuration issued by CRD; - If you deploy APISIX as the data plane of Istio, you can also get the configuration issued by Istio by identifying xDS. Although NGINX also introduced NJS to achieve dynamic control, it is not as ideal as APISIX's extensibility. As APISIX can be extended with LuaJIT, it also supports the out-process Plugin Runner to run external plugins written in languages such as Go, Java, Python, Node.js, etc. In addition, starting from APISIX 2.11, you can run the Wasm plugin. With this function, you can write plugins in APISIX in Rust, TinyGo, and other languages and then compile them into Wasm code to run on APISIX. Configuring Wasm plugins and Lua plugins in APISIX shows almost no difference in functionality. As a result, it can achieve performance similar to Lua's native implementation and achieve the development efficiency of high-level languages. What are the key benefits of Apache APISIX The advantages described above are pretty good, but they are not the most critical advantages of APISIX. The most significant advantage of APISIX is its ecosystem network interwoven with many projects. - At the authentication level, APISIX supports protocols such as OIDC and LDAP. At the same time, it can be integrated with multiple authentication services or frameworks, such as Keycloak, Casdoor, Casbin, OPA, etc. - At the observability level, APISIX supports the connection with multiple log tools, such as Clickhouse, Datadog, Splunk, Apache Kafka, Apache RocketMQ, etc. It can also expose rich metrics through Prometheus to support multiple tracing systems, such as OpenTracing, OpenTelemetry, and Apache Skywalking. - At the service discovery level, APISIX not only supports obtaining upstream addresses from Nacos, Eureka, Consul, and Zookeeper, but also from DNS (whether through A/AAAA records or SRV records). Furthermore, if you use APISIX as a K8s Ingress Controller, you can get the corresponding configuration from the Ingress resource (APISIX supports the K8s Gateway API specification). The current APISIX is still in the stage of rapid development. Over time, APISIX will be integrated with more and more projects, opening up more possibilities for cooperation and greatly simplifying the workload of integrating with existing systems. Suppose the service you want to connect to is not in the APISIX plugin ecosystem. In that case, you can directly use the existing plugins for custom development, achieving functions more specific to your business. Which API Gateway should you go for Of course, to choose a suitable gateway, you also need to consider your actual business situation. If you are already using NGINX as a proxy in front of business applications, and some logic is placed on NGINX, then APISIX will be your best choice. As APISIX is developed based on NGINX, you can smoothly migrate NGINX to APISIX based on your needs. If you have never used a gateway and want to choose a suitable open-source API gateway project based on your team's situation, then you need to focus on the following aspects: - Whether the frequency of updates is good enough. You can choose a well-maintained API Gateway project by observing the activity of each project because no one wants to select a project that goes downhill. You can get a sense of the project activity through the Contributor Over Time graph. - Whether the functions of the project are complete. If the selected gateway cannot meet the team's current and future business needs, and because the project adds to the development work (such as configuration management, docking with internal services), please consider carefully. - Whether some of the project's complex technical metrics perform well, say, whether QPS, latency, and memory usage meet business requirements. Generally speaking, it is challenging to make subversive optimization of a gateway. Therefore, if a gateway cannot meet these complex metrics, it will be difficult to make breakthroughs no matter how it is iterated later. - Whether there is enough workforce and time in the team to learn and maintain the API Gateway. After all, technical decision-making is not a purely technical activity. Of course, if you have used other API gateways, but the gateway cannot meet the current business scenario, then you can use Apache APISIX as one of your options. How to migrate from Nginx to Apache APISIX You are wise if you see this and decide to replace your existing NGINX with APISIX! But before your migrating, you need to brush up on some of the product features you have or are using. Usually these features can be divided into three categories: - Directly replaceable. APISIX allows users to use NGINX configuration directly, so most of NGINX's global configuration can be reused by APISIX. As for application-level configuration, it can be replaced by APISIX Routes; - Requires adjustments, such as changes in metrics; - Requires additional development After completing the required development, you would gradually replace NGINX with APISIX in actual business scenarios. During the smooth migration process, you need to consider the following three questions: - How to proxy client requests to APISIX? - How to put the equivalent configuration into APISIX and NGINX? - How to handle both APISIX and NGINX exposed metrics? You must consider the above three questions and your actual application environment. Finally, don't forget to prepare the "rollback plan for bugs" in advance. Through this article, I believe that you have understood the power of Apache APISIX. Let's try using Apache APISIX as your API gateway!
OPCFW_CODE
I want to locate a shortcut links that works without captcha during the passage of members and also pays Bitcoin and accepts multiple visits from the same address As for the duration of the payment is better than one day to 7 days I would like a website written. Firstly the user must be able to register supplying First name Last name Email Password Also on registration page include a captcha. Once they register I would like a confirmation page and on this page I would like a unique 7 number identifier, I would also like a confirmation of their registration and the unique number ...workflows/approvals based on form fields (approx 6 rules) 3. Publish Form to customer portal 4. Restrict form to only individuals with @[Logga in för att visa URL] user accounts. No public access 5. Captcha Customer On boarding 1. Create Jira form with 30 fields 2. Create workflows based on form field values (approx 6 rules) 3. Send subsequent forms based on department/stakehol... Hi i need help to creat a wordpress site with totally 4 pages. Ours services 1 Our services 2 Faq Contact form with CAPTCHA I got a link to a site that are similar to the layout that i want Please save both yours and mine time and only bid if you can do it quick and to the fixed amount that you gave in the bid. Thanks in advance. The web site is a direcory of professionals (lawyer and notaries) in France, i need their email adresses (in csv or mysl). No logon, no captcha are necessary, but a research by name (each request has only 300 results on several pages), research should be 'begin by aa', then by 'ab', 'ac',etc...'zz' to have all contacts (30K). please provide source cod... We have a client which we manage, and are seeking expert web developers to make changes to a website through PyroCms. We have the cpanel system we would like to edit some of the footer information on all pages AND add a CAPTCHA image verification tool to on 2 x contact forms ...first proxy and continue checking. 2 - Attempt to login to the site with the list of Email:Passwords that are given. Captchas sometimes come up so it will need to bypass the captcha. If the account is invalid, add it to a text file called "invalid" and make the text file update after every invalid account is encountered. 3 - If the account could not login We have an eCommerce site built with Joomla 1.5.23 and VirtueMart 1.1.9. We have not been able to upgrade to newer versions due...VirtueMart 1.1.9. We have not been able to upgrade to newer versions due to numerous reasons. Because of spamming we had to disable customer self-registration. We need now a captcha in our registration form as a quick fix. someone to add captcha USING WEBPLUS x6 website plus delete and add a few words to [Logga in för att visa URL] ...should display “e-Payment Authorization Failed.” 6. Security: Account Order page (AND the Submit Order action) must be secured so that a login is required. Validation using CAPTCHA for first registration. 7. Order Process: Creates a purchase order including shipping, taxes, total amount due based on shopping cart info Creates an account with name I am chinecherem....expert who can help generate minimum of 10 sign up through my referral link. my referral link is just about a faucet site for claiming free bitcoin every hour by solving captcha and clicking on "roll" button please if you cannot provide active leads please dont show interest on my project. duration for delivery:2 to 5days interva ...on a calendar. copy the Image Captcha And Confirme the pop-up that shows up This part must be very very very fast (a lot of people and maybe other programs try the same job). This operation must be done at a specific time, only at that time calendar is available. Re-login to the accounts automatically if the website disconnected us .The script must Simple online tool to cut image and merge with templates. Created with bootstrap v3, PHP 7.x, fully file based with no db dependencies Fix minor design issues: - Fix to use proper current bootstrap 3 (this version uses beta) - Fix so layout on mobile looks good (currently bootstrap design looks ok only on pc) - Change logo to attached file Feature changes: - Make image selection show attached al... ...this week (calendar week 12). 1) Someone is sending spam via this form: [Logga in för att visa URL] (probably) so you'd have to activate a captcha for it or do something else to stop the spam. 2) Fix a bug that every time we make a change in Configuration and press "save", the whole shop breaks and is showing the following ...log record number: [some number] 2) Someone is sending spam via this form: [Logga in för att visa URL] (probably) so you'd have to activate a captcha for it or do something else to stop the spam. This needs to be fixed ASAP, even before 1) if possible. 3) Install all security patches. We have the following challenges: - HI I NEED TAMPERMONKEY USER SCRIPT THAT SOLVES CAPTCHA AUTOMATICALLY FOR THIS SITE [Logga in för att visa URL] ...an Andriod and iOS app. The first view will contain a config driven two dropdowns and a Reddit like post message with photo or video upload. Before posting to verify the captcha and google analytics to see traffic. The second view contains viewing on what others posted. The admin page is to delete and to see the activity. I am open to any suggestions ...appear. I need the information of every compagnies when you click on them. Mostly: -Name -Adress -Products -'Chiffre d'affaires' -Responsables (All of them). -Phone Number -Website If you can give me a approximation of delivery time and fees would be great. Can provide a screenshot with a highlight of every companies. There should be around 1400 total create tampermonkey userscript that fills captcha automatically for this site [Logga in för att visa URL] ...need joomla expert to complete these 3 tasks for my website sultan.co.uk. 1: my website is built with joomla 2.5, few years old. its working 100% fine on chrome/firefox but on other browsers like IE, safari, opera, submit button are not working and redirect to homex so fix it. 2: i want my website mobile responsive for every browser and every device Hi, I'm looking to accomplish the listed tasks: Project Objective: To condition the Amrop Panama website [Logga in för att visa URL] , in order to improve SEO positioning. Implementing modules, sections and improvements with the following tasks: Implementation of modules for improvement of SEO (Search Engine Optimization): • Pathauto: Configure paths ...place captcha google. We have a landing page that does not combine the colors and it is necessary to order the whois I would like this higher, the colors of the website to standardize, that the google speed test optimizer passes, that looks good in responsive, the most important thing is that here it works all right. You have to put captcha to the We had a wordpress site built last year, following a website hack and restoring the website there were some changes to the formatting which we are unable to fix. We need someone who is competent in Wordpress to help fixing the formatting ([Logga in för att visa URL]) and making sure the secure version of the site (https) shows up without errors I need a program that will login to this website [Logga in för att visa URL] and return 3 values that are clearly visible as text on the page. If the account does not work, it will go to the next account and try again, it also needs to support proxies, if a captcha is detected it will need to bypass or try a new proxy. ...these pages. On these pages I will again be able to add audio,video,clickable images in different sections. There will be a contact page also that will have a form with captcha. On all pages I should be able to also add text areas (above, below, or on the sides of the audio,video,images) where I can add text with links, bold, etc. I will need to ...security reason, if the main site is blocked by authorities or some hacker, the check page continues to work independently. Some security system needs to be available. Like captcha if more that 2-3 unsuccessful code attempts are made. Additional feature that we would like to incorporate if its possible and easy. We would like to know from which country ...to get quote to build an entire live/chat webcam website and system from scratch like chaturbate(.)com (check it out). No cloning but similar functionality and business model. Can be written in PHP/ASP/JSP or what ever is best recommended. From server configuration to back end and front end website. System must be built using the Open Broadcasting Despite every effort I've made, including captcha and email verification I keep getting so many fake user registrations. Often these users have YEAR at end of username. Is there anyway to stop this for good. I'm sickk of it. ...to current website: Secure E-Signature/Remote Signature Secure Mobile Signature Secure Mobile Document Upload Secure account creation with verification Secure site setup/Encryption Secure Filing Sharing/Transfer Plans & Pricing with Trial setup/discount code Messaging feature Online Payment System/recurring payment setup Secure Captcha Site vistors I have a old wordpress website, in which captcha has locked us out, as we didnt take care of keys. Pls help remove captcha and give me a new admin password We also need an export of DB in human readable format(article view only). Need an resource who can download data from the source by passing captcha note this is an urgent Project and required prompt action ...tickets - Support multiple captcha system - Support shortlinks system - Admin dashboard to monitor ads placement, users' sessions, revenue, payouts, etc - User dashboard to monitor payment, wallets, payouts, etc The faucet website design must be clean, minimal, and strategic ads placement, similar to a digital magazine website, which allows for AdSense hello! we are in process of launching a new website for [Logga in för att visa URL] in the process I have lost my captcha on my contact us form and repairs service form. I need someone to go in and add this security measure. ...MySQL Joomla site fix email for forms, captcha, mobile etc. - once fixed copy files to secondary domain. - make sure all functions work Need an EXPERIENCED web programmer to fix and update existing websites - email forms - change captcha - mobile usability - etc. wordpresss, Joomla, PHP, MySql etc, email forms captcha Please read document and requirements ...create an Autoit or python/JS automation project that will allow me to copy new messages from [Logga in för att visa URL] and paste that message to another website that I have to log into and bypass PICTURE CAPTCHA and check for new messages in this web site and paste messages from that webapage back to a google voice number [Logga in för att visa URL] right now we have
OPCFW_CODE
Out of sort memory when using OrderBy (using Prisma) I am getting the following error ResourceExhausted desc = Out of sort memory, consider increasing server sort buffer size (errno 1038) (sqlstate HY001) When running a query with sorting in it, such as. const documentId = 0x12345 await prismaClient.document.findMany({ where: { id: documentId }, orderBy: { version: "desc"}, take: 1 }) // the Raw SQL select * from Document where id = 0x12345 order by DocumentSnapshot.version desc limit Add a matching Index Add an index within the schema.prisma file for the database table that is throwing the error, such as: model Document { id Bytes @id @db.VarBinary(16) version Int ... @@index(id) @@index([id, version(sort: Desc)]) // Add this index that matches your Prisma query } The index should match your where clause and your order by clause. Doing this should bypass the memory limit and instead rely on the index. For example, if selecting where id is equal to something and ordering by createdDate, then your index should be @@index([id, createdDate]) Increase your sort buffer size Alternatively you can try increasing the sort_buffer_size in your sql engine via calling SET sort_buffer_size = {put a number here}; Using Prisma you could do something like: import { PrismaClient } from "@prisma/client"; const client = new PrismaClient(); // Run the SQL command to set buffer size (must be before calling your query) await client.$executeRaw`SET sort_buffer_size = 1000000;`; // Optionally check the newly set buffer size const bufferSizeCheck = await client.$queryRaw`SELECT @@global.sort_buffer_size;`; console.log(`bufferSizeCheck = `, bufferSizeCheck); // Run your query const someData = await client.documentSnapshot.findMany({ where: { docId, }, orderBy: { version: "desc" } }) At some point you might reach the memory limit again, so this option might not be as solid as the index option. According to PlanetScale support, this issue is know to happen with tables that have fields of type JSON or TEXT. For just the mySQL side (sans Prisma), you can also check out: https://stackoverflow.com/questions/29575835/error-1038-out-of-sort-memory-consider-increasing-sort-buffer-size
STACK_EXCHANGE
I often hear that we should do something “because it’s best practice”. This raises alarm bells with me, as it sounds a lot like “because other people say so”. It’s more productive to avoid the phrase “best practice” and instead be explicit about why the practice is valuable. I was reminded about this today when my colleague David Carboni pointed to article about the flawed popularity of waterfall development, followed by an offline conversation about people blindly following precedent with the claim that “it’s best practice”. Those familiar with the Cynefin framework will be aware that Dave Snowden says best practice is appropriate in the “obvious” domain—an environment where cause and effect are apparent to anyone. If understanding cause and effect requires analysis and expertise then we should be using “good practice” instead, because there are likely to be exceptions and edge cases which mean no practice can always be best. Things become even more challenging when we move into a third domain where cause and effect are not at all clear, even to experts—examples here are the economy and human relationships. Whether or not we choose to use the Cynefin framework we can see that what is claimed to be best practice it isn’t always. Under Cynefin it may be because we’re working in a domain that is not obvious. Even by intuition we can see that what is claimed to be “best practice” might often more realistically be described as “good practice” or even “a useful idea worth exploring”. Sometimes I fear that when someone says “best practice” it’s an unconscious way to short circuit objections. There is an easy route from “this is popular” to “I like it” to “I want us to do it”, and ending with “it will be more readily accepted if I call it best practice”. If those are the dangers, how can we be more constructive? I think it’s far more productive to bypass the talk of best practice and remind ourselves instead why we’re doing the thing. An example would be “we expect our managers to meet face to face with each of their team members at least once a fortnight…”. It would be easy to end that sentence with “…because it’s best practice”, but it would be more useful to say something like “…because it ensures they can respond earlier to any potential problems that people have”. This gives everyone more context about why the practice is being suggested and allows them to use their judgement in how to implement it—for example, where to have the meeting, whether it should be more or less frequent for certain team members or managers, the focus of the conversations, and so on. Stating the reason also gives us the opportunity to talk about whether a supposed best practice really is ideal for this particular situation. It does take effort to think about why we favour a certain practice and then articulate it clearly. But it does provide everyone with better understanding and potentially better outcomes.
OPCFW_CODE
In the last installment of this article I said I would explain how the Weird Solutions DHCP server recognizes which vendor’s vendor-specific options are present, how you can make decisions using these options, and how to define vendor-specific options that will be sent to a device. To recap, vendor-specific options are DHCP options that are not internationally standardized, are specific to a particular vendor, and are all carried inside an internationally standardized option (43 or 125 for DHCPv4, 17 for DHCPv6). Let’s start with option 43, the oldest DHCP vendor-specific option. When attempting to decode the options found inside option 43, the DHCP server must figure out what kind of device it’s communicating with. The server will do this by analyzing the packet for some type of signature that tells it who manufactured the device. Luckily for us, option 60 (Class Identifier) is specifically for this purpose. Before decoding the vendor-specific options, the DHCP server looks for option 60, pulls the text out of that option, then compares that text to vendor-class entries in its database. If a vendor-class entry matches, the DHCP server takes the device identifier from that entry and extracts the IANA enterprise ID. This is the vendor id. Using the vendor id, the server can then decide which options to expect when decoding option 43. The DHCP server requires that option 60 be present in order to be able to decode option 43. By extension, you must define option 60 in any DHCP policy for which you expect to add vendor-specific options. You do not have to define option 60 in order for the server to be able to decode option 43 – only if you want to put vendor-specific options in a policy and have those options transmitted back to a device. When defining option 60 in a policy, you can set any text you want, but it must be something that matches a device manufactured by that vendor. (In other words, the text must match something found in the vendor-class records). DHCPv4 option 125 and DHCPv6 option 17 are also vendor-specific options. These options are referred to as Vendor-Identifying Vendor Specific Options (VI-VSO), which simply means that they carry enough information inside them to encode or decode the options they hold. When defining a VS-VSO, you are presented with two options that can be placed inside: the vendor identifier and the “Options”. You should first define the IANA enterprise ID (vendor id) for the options you wish to encode, after which you can define vendor-specific options inside the “Options” option. One important thing to note is that the Weird Solutions DHCP server does not automatically pick which vendor-specific options should be sent to a device. Instead, since all policies participate in access control, you should ensure that policies with vendor-specific options are only made available to devices that can understand and use those options.
OPCFW_CODE
Babel 6.7 warning: You or one of the Babel plugins you are using are using Flow declarations as bindings On the newest Babel 6.7 (with babel-traverse 6.7.2) the below warning shows up. It's caused by this PR: https://github.com/babel/babel/pull/3414 You or one of the Babel plugins you are using are using Flow declarations as bindings. Support for this will be removed in version 6.8. To find out the caller, grep for this message and change it to a `console.trace()`. at Scope.warnOnFlowBinding (node_modules/babel-traverse/lib/scope/index.js:976:15) at Scope.getOwnBinding (node_modules/babel-traverse/lib/scope/index.js:991:17) at Scope.getBinding (node_modules/babel-traverse/lib/scope/index.js:985:27) at isTypeChecker (node_modules/babel-plugin-typecheck/lib/index.js:52:568) at staticCheckAnnotation (node_modules/babel-plugin-typecheck/lib/index.js:15:8390) at Object.AssignmentExpression (node_modules/babel-plugin-typecheck/lib/index.js:6:7919) at NodePath._call (node_modules/babel-traverse/lib/path/context.js:63:18) at NodePath.call (node_modules/babel-traverse/lib/path/context.js:47:17) at NodePath.visit (node_modules/babel-traverse/lib/path/context.js:93:12) at TraversalContext.visitQueue (node_modules/babel-traverse/lib/context.js:146:16) at TraversalContext.visitSingle (node_modules/babel-traverse/lib/context.js:115:19) at TraversalContext.visit (node_modules/babel-traverse/lib/context.js:178:19) ugh! I feel like this will bite us soon, have you had any chance to look at it? Agree this is a problem, unfortunately not yet, I'm tied up with other projects at the moment. PR would be gratefully accepted. What does it mean? In babel 6.7 this plugin just won't work? Support for this will be removed in version 6.8. Rewire fix it in this way: https://github.com/speedskater/babel-plugin-rewire/commit/80d717a3014cac97e3fa6b8e60f991e30d977b76 Hope could be helpful... I really like this plugin and I don't wanna miss it! I have traced the error message. All of these errors come from this line: https://github.com/codemix/babel-plugin-typecheck/blob/c5554cb0fc7bc09cc3a585ed5883e7ca08090e7f/src/index.js#L2899 function getTypeChecker (id: Identifier|QualifiedTypeIdentifier, scope: Scope): NodePath|false { const binding = scope.getBinding(id.name); if (binding === undefined) { return false; } Unfortunately, knowing that is not enough to solve the issue. I have looked into how https://github.com/speedskater/babel-plugin-rewire/commit/80d717a3014cac97e3fa6b8e60f991e30d977b76 fixed the issue and came up with: function getTypeChecker (id: Identifier|QualifiedTypeIdentifier, scope: Scope): NodePath|false { const binding = (!t.isFlow || (!t.isFlow(id) && !t.isFlow(id.parent))) ? scope.getBinding(id.name) : undefined; However, in this case, t.isFlow(id) && !t.isFlow(id.parent)) is true, therefore the function ends up calling scope.getBinding(id.name) anyway. I know too little about type bindings and this particular issue to solve it. However, it could be a stepping stone for whoever tries after me. Here is the PR that introduced the change. https://phabricator.babeljs.io/rBC9b229f1f089277a928ec6786b5c4791a7d8c1b96 The change has been introduced by @amasad. I guess we will need to wait until he follows up with an explanation. A solution would be to look for the variable that is defined for every type instead of the type itself. @phpnode can this be assigned a priority? @gajus it's high priority, i've started work on v4 which avoids this problem but it's a way from being releasable. I tried hacking around this issue in the existing plugin and ran into problems, but I will try again because it will be faster than waiting for v4. Fixed in 3.9.0
GITHUB_ARCHIVE
Your download link is at the very bottom of the page... always. Processed through Paypal No account required. Donate Bitcoin to this wallet: Donate Ethereum to this wallet: Donate Litecoin to this wallet: | CLCL v2.1.1 CLCL v2.1.1 CLCL is clipboard caching utility. All clipboard formats are supported. Template can be registered. Pop-up menu is displayed by "Alt+C." Menu can be customized. Item is paste automatically. Picture is displayed on a menu. Tool tip is displayed on a menu. The format to leave and the format to save can be set up. The ignored window can be set up. The paste key for every window can be set up. Function is extensible with plug-in. Unicode Freeware Click here to visit the author's website. |2,080||Dec 17, 2019 | Dust Racing 2D v2.1.1 Dust Racing 2D v2.1.1 Dust Racing (Dustrac) is a tile-based, cross-platform 2D racing game written in Qt (C++) and OpenGL. Dust Racing comes with a Qt-based level editor for easy level creation. A separate engine, MiniCore, is used for physics modeling. Features 1-2 human players againts 11 challenging computer players 3 difficulty settings: Easy, Medium, Hard Split-screen two player game (vertical or horizontal) Game modes: Race, Time Trial, Duel 2D graphics with some 3D objects Smooth game play and physics Multiple race tracks Finishing in TOP-6 will unlock the next race track Star ratings based on the best positions on each race track Easy to create new race tracks with the level editor Engine and collision sounds Pit stops Runs windowed or fullscreen Portable source code using CMake as the build system Will be forever completely free Playing Controls The key configuration and game mode can be changed in the settings menu. ESC or Q exits the race and also the current menu. P pauses the game. Races In the race modes there are always 12 cars. By finishing in TOP-6 a new track will be unlocked. The record times and best positions are stored separately for each lap count. Pit stops Your tires will wear out as the race progresses. This causes more and more sliding. Fortunately there's a pit (the yellow rectangle). By stopping on the pit your tires will be repaired. Custom track files Dust Racing searches for race tracks in ~/DustRacingTracks/ where you can place your own race tracks. Changelog: Version 2.1.1 02-22-21 New features: Add Turkish translations Bug fixes: Fix GitHub Issue #117: Spelling ... |4,974||Feb 22, 2021 Dust Racing 2D | hostsmgr v2.1.1 hostsmgr v2.1.1 Console tool for sysadmins and other people who need to autoupdate "hosts" file. Command line -path - output file location (def: ".\hosts") -ip - ip address to be set as resolver (def: "0.0.0.0") -os - new line format; "win", "linux" or "mac" (def: "win") -nobackup - do not create backup for output file (opt.) -noresolve - do not set resolver, just generate hosts list (opt.) -nocache - do not use cache files, load directly from internet (opt.) GPG Signature Binaries have GPG signature hostsmgr.sig in application folder. Public key: pubkey.asc (ha.pool.sks-keyservers.net) Key ID: 0x5635B5FD Fingerprint: D985 2361 1524 AB29 BE73 30AC 2881 20A7 5635 B5FD v2.1.1 (8 June 2021) fixed excluded hosts hashing updated project sdk Other assets: hostsmgr-2.1.1-pdb.zip hostsmgr-2.1.1.sha256 Click here to visit the author's website. |2,913||Jul 16, 2021 | Input Director v2.1.1 Input Director v2.1.1 Enables the control of multiple Windows systems using the keyboard/mouse attached to one computer. Switch control between systems either by hotkey or by moving the cursor to the screen edge on one computer for it to appear on the next one Input Director supports a shared clipboard - copy on one computer, switch control, and paste . Input Director is compatible with: Windows 11, Windows 10, Windows 8/8.1 and Windows 7. The systems must be networked. Features Easy to Use Easy to follow installation and usage guides - setup only takes a few minutes Input Director's flexible monitor layout system makes it easy to accurately reflect a monitor's physical location and size: Multi-monitor support Shared Clipboard - copy and paste between computers (including files!) Compatible with Windows 11, Windows 10, Windows 8/8.1 and Windows 7 Only Input Director ensures the cursor remains visible and correct if the mouse is disconnected on a Windows 10 or Window 11 system Transitioning control to another computer is as simple as moving your cursor off the screen on one computer for it to jump to the other Able to control a computer without needing to login to it first Supports Windows fast user switching Compatible with Windows User Account Control pop-ups Manage all your computers at once Simultaneously lock all computers Import/Export Input Director configuration and apply configuration updates from the command line Synchronise screensavers across your computers Synchronise shutdown of your system (or individually configure whether a computer goes to standby, hibernate or shuts down) Wake systems over the network Security Encrypt network data between Input Director controlled computers Lock down the Input Director configuration so that only System Administrators may make changes Systems can limit which systems can control them by host name, ip address or network subnet Transition Features Ripples surround the cursor for a few seconds after transitioning to help the eye follow the cursor from one computer to another: Transitioning using the mouse can ... |1,939||Jan 08, 2022 |Showing rows 1 to 4 of 4||Showing Page 1 of 1||1| OlderGeeks.com Copyright (c) 2022
OPCFW_CODE
Looking for chat project win application Freelancers or Jobs? Need help with chat project win application? Hire a freelancer today! Do you specialise in chat project win application? Use your chat project win application skills and start making money online today! Freelancer is the largest marketplace for jobs in the world. There are currently 17,764 jobs waiting for you to start work on! I have an application installed on an IIS server that also includes a SQL database. I need some configuration...other stuff to do. We will chat via MSN messenger to accomplish this project. Windows IIS. The application is .asp ...60% OF THE COMPENSATION WILL BE GIVEN AT THE PROJECT COMPLETION, AND 40% IN CASE THE COMPLETION IS DONE... THIS MEANS THAT, EVEN IF YOU COMPLETE THE PROJECT, IF YOU DO NOT RESPECT THE DELIVERY TIME YOU PROVIDED...PROVIDED, YOU WILL GET ONLY 60% OF TOTAL PROJECT FEES. We need a new icon for winapplication. Your can download the application here <http://18.104.22.168/website/download...4/website/download.asp>? Also we need a new design for the application. Rent A Coder requirements...complete details. Should a dispute arise and this project go into arbitration "as is", the contract's va... Need a simple chat client/server without registration login or user name. For now this chat will be used...provider who will win this project will have possibility to continue to develop this project to next steps ...pay by Escrow 100% upfromt , release just when project completed and tested. - we prefer to deal with...with one company for all projects . - all project must be finish within 30 days , max 45 days. - we...all projects ASAP and want each team work on one project to finish all at frame time . - Our budget is System OS: Windows Application Language: C# Application Database :...scanner, receipt printer The application will be deployed in a kiosk environment. The...have a receipt printer. We need a spin and winapplication for the kiosk. Functions & requirements: In this project we would like to find out whether the same executable is running on some other computer...computer which is connected to a network. If the application finds a replica of it running then they must...communicate with each other. If the application does not find the same application running on the network then Looking to get a chatting application built from scratch on the iOS platform Previous similar...confident, how well can you replicate and make an application that is more advanced(in features, functions...design, etc.) than "Skout" and/or "Kakao Talk"(Both chat applications are free on iOS and android) from scratch ...will not be considered!!! The objective of this project is to build an industry specific chatting systems...A client application where the server will broadcasts messages to; 4. Another application (or service)...functionality in order to be able to win the whole project. This demo application itself is useless, it will only i have a project running on vworker right now, so you can be sure that this will be made... i want somewhat of a copy of this mac application for windows (i need a windowss exe): <http://www...a rough estimation on time and money for this project, as well as which language you want to code in Looking for help to create a ChatApplication Website. Objective: Users should be able to create their...their own groups, invite friends and chat[voice/text] -This website is intended to help users interact...relayed on TV. Prefer to use existing open source chat applications/framework. Please apply only if you We need Chat a Application that should work with Skype, Yahoo, AIM, Gtalk, ICQ For following phones:...only 2 days for this project, if you have already worked on same type of application and if you have code ...mobile application in android. I'd like "wechat" an application that will allow users to chat, exchange...functionality of the application wechat except video chat. The person who wins this project will have a 90%
OPCFW_CODE
Presenter: Darren Brunett Firstly, you might ask where Part I was – as part II came first on the list of options in the schedule. Joking apart this was a good session – not least because my website was mentioned in the PowerPoint slides! I met Darren at the London User Group briefly where have gave presentations on VMFS and other troubleshooting issues. He’s an easy going guy with a great self-depreciating presentation style. Darren ran through some of the top support issues he deals with in his role as a Senior Technical Support Engineer at VMware. These sessions always interest me – because frequently students bring the very same problem to me either during courses, or afterwards informally via email. Darren covered a lot of ground – but a couple of points he made me reach for my pen and pad to scribble them down. Firstly, he covered recovering lost VMFS partitions caused people having a “Homer Simpson” moment. Generally, if some removes a VMFS volumes, and then re-partitions that LUN for another purposes your chances of recovery are slim. If however, someone removes a VMFS, and then has left the LUN untouched there is a good chance of recovering the VMFS. Very simply its possible to put the partition table information back in place using fdisk. You do need to use esxcfg-vmhbadevs to find out the Linux /dev/sdN value. But after that it is a case of putting the primary partition back on the disks. Expert mode is used to make sure the disk is offset for the disk alignment automatically implemented by the Vi Client. Anyway, I was very much taken by the process – so I plan to be Homer Simpson soon and give Darren’s steps a road test. Darren went on to mention some troubleshooting on the Service Console networking side of things – familiar territory for me, which was when RTFM was given a name check. Later Darren went on to outline some issues with snapshots. Firstly, he explained how the ability to extend virtual disk sizes with vmkfstools -X is incompatible with the snapshot feature – and currently corrupts the snapshot feature. He showed how you can find out the original size of the vmdk by viewing the metadata.vmdk of one of the snapshot delta files. This information together with vmkfstools could be used to reduce the vmdk back to its original size. After that the snapshot can be safely committed to the vmdk. He also mentioned how the snapshot management file (.vmsd) gets destroyed when a snapshot is allowed to fill a LUN. Darren pointed out a method of renaming the damaged vmsd, and then deleting the last snapshot delta file – to free up space to add another snapshot. This builds a vmsd file to a useable state. You can then edit the vmx file to tell the virtual machine to us the last good snapshot. Clearly, some data loss he is inevitable (because of the deletion of the last snapshot in the parent/child series) but it does return the VM to useable state. Lastly, Darren outlined some interesting networking problems – such as when two NICs in NIC Team are plugged into different V/LANs. He showed how you can use esxcfg-info | grep -i -B hint to display useful IP data that can tell you if NICs are on the same or different subnets. Additionally, he pointed out how some Spanning Tree Protocol systems cause unwanted “split-brain” situations in HA. Some STP data takes 50 seconds or more to be propagated around the network. This causes an ESX host to believe it has had a network failure as HA checks for network connectivity more frequently. The only work around appears to be modifying STP settings to make its data proliferated at faster rate.
OPCFW_CODE
|Re: Create database -> rats' nest > Why it shouldn't be a privilege?Because that's necessity :)) (sorry, could not resist). > What's wrong with creating a backup operators group/role instead ofNothing wrong, really correct approach, but wrong conclusion that this > giving these persons my dbowner password? also needs new priviledge. > I'm not 100% sure in that either ;-) But we need a number ofSee below. > concurrent suggestions to understand our needs better. > Honestly, I don't like the Borland's way. And they don't supportINSERT INTO SYS$BACKUPS(SYS$DESTINATION) > INSERT/DELETE against monitoring tables, only UPDATE of the > RDB$STATE column is allowed. This is not obvious behaviour at all. > And this doesn't address the backup/restore facility in any way. Entry is automatically removed as soon as backup is completed. But that is an ill approach. What in fact we have are 4 DML operations and a number of database objects on which one can execute those operations. So we have 4xN possible combinations (N - number of database objects). GRANT is quite Backup, statistics gathering, etc. work only with one object - database. One can say that then we need priviledges for all operations that can be executed on this object, but we define 2xM entites, where in fact we could happily live with only 1xM. Also I am afraid that this will make system monolythic. What in fact we need, is a priviledge to perform that operation. And we already have this priviledge - GRANT EXECUTE PROCEDURE ON ... So what is left, is to export all those database level operations as procedures and then simply grant EXECUTE PROCEDURE on the corresponding object to corresponding roles with GRANT OPTION. Also I think that it would be wise to consider this as part of External SP concept. In few words, there is a module (for example, backup.dll) that is loaded by the engine at startup and is initialized. Module checks the procedures defined for the database, if necessary defines its own procedures with entry points pointing to its own exports. Something like: CREATE ROLE BACKUP; CREATE PROCEDURE BACKUP_DATABASE( ) RETURNS ( MODULE NAME 'backup.dll' ENTRY POINT 'isc_backup_database'; GRANT EXECUTE PROCEDURE BACKUP_DATABASE TO BACKUP WITH GRANT OPTION; Now backup supervisor (I do not use word "manager" not to confuse person with backup.dll) can do GRANT EXECUTE PROCEDURE BACKUP_DATABASE TO SAM; and Sam can do EXECUTE PROCEDURE BACKUP_DATABASE('/home/sam/backup/my.fbk'); We (Firebird project) can provide simple backup managers with almost no parameters, but people can create their own backup managers that can, for example, encrypt the database, run backups in timely fashion (remember, backup.dll is a plugin, it runs as long as server runs) and
OPCFW_CODE
Will not open any Yahoo site: Disabled Accelarator, complete reboot, cleared all temp files and cookies. This happened a few months ago and a supervisor replaced the modem and then it worked fine. Now it's doing the same. It will just time out while trying to load. I have no response from anyone? Much like when we are told your support people will call back? This is an ongoing problem that you folks have responded to with no results. Can't get into Yahoo for anything. Had this problem about two months ago and a supervisor sent a new modem and then Yahoo was easily accessed. Now it's doing the same again and no one wants to respond?????????????? It can take upto 48 hours for a moderator to initially respond, barring weekends and holidays. There's no need to start a second thread about the same issue. I don't know for sure that this will help, but you could try changing your adapter's DNS server settings from Hughesnet's to Google's. It couldn't hurt to try. Google's IPv4 server settings: Google's IPv6 server settings: I see your initial post shows 6 hours ago, which would have been around 3 PM eastern time. Moderators are only here til around 5 PM eastern. So being posted so late in the day they were probably dealing with other things and didn't get to your post yet... like @C0RR0SIVE said it could be up to 48 hours for them to respond. Sorry about the delay! I see that you've already gotten in contact with the corporate representative, William, regarding this issue and a callback was scheduled for yesterday after 5PM. Did this call occur? again and no one wants to respond?????????????? I'm not a moderator and I'll respond. Have you tried doing a traceroute to see if maybe it's not HughesNet causing the problem? Nine times out of ten when you have a problem going to a specific domain it's a busted routing table. Sometimes this is from going to sketchy web sites. Sometimes it's from HughesNet's provider, one hop away from the gateway. A traceroute would let you know if it's local or not. If it is local, simply rebooting your modem, computer, and associated routers would solve the problem. Why all three? They all cache the routing tables. Amanda, Yes Amanda the call was made, but the problem is I'm at work and can't be in front of my computer. I don't get home until after 5:00. Apparently I'm suppose take time off work? Is there any supervisors that work after 5:00? William, the supervisor I worked with in the past really tries hard to get things working right. The last time this happened he sent a new modem and everything worked just fine until now. My lap top will go into Yahoo sites via other internet carriers such as my school, local library, and neighbor's internet which happens to be Hughes.
OPCFW_CODE
In this article What is Windows Sandbox? Windows Sandbox is a sandboxing environment built into Microsoft Windows version 1903 and higher, which lets you safely run your applications in isolated, lightweight desktop environments. When you install software inside Windows Sandbox, Windows runs applications in an isolated virtual machine, preventing threats from impacting the rest of the environment. This ensures software components run separately from the host, and any software installed on the host is not available to the sandbox environment. Any software needed in the sandbox should be directly installed in the environment. Because the sandbox is temporary, once it is closed all software, files, and the state are deleted. When you open the application, a new sandbox instance is created. Here are key features of Windows Sandbox: - Secure—Windows Sandbox leverages the Hyper-V hypervisor to run a separate operating system kernel, isolating the sandboxed environment from the physical host. - Windows native—Windows Sandbox components are included in Windows 10 Pro and Enterprise. - Clean environment—Windows Sandbox initiates a clean installation for each sandboxed application - Disposable—the device is wiped clean after a user closes the application. - Efficient—Windows Sandbox uses advanced capabilities, including an integrated kernel scheduler, virtual graphics processing unit (GPU), and smart memory management. - No file system duplication—files in the sandbox are pointers to the same file system, so the storage overhead of the sandbox is minimal. How Windows Sandbox Works Windows Sandbox leverages several technologies when creating isolated environments: A dynamic base image—Windows Sandbox uses virtual machines (VMs) to generate a sandbox. A VM requires an operating system (OS) to work. To consistently create new and clean OS-installed VMs, Windows Sandbox generates a dynamic base image, and each sandbox is a clean copy of the original host operating system, with a clean registry and file system, just like a fresh OS installation. - Snapshots—makes the boot process faster than booting up a full operating system.. Windows Sandbox boots an individual sandbox only once, then uses snapshots to save memory and device state for subsequent use. This helps the environment to restore memory without initiating another boot process. - Kernel-based memory management—enables the host to reclaim memory from Windows Sandbox, as needed. A direct memory map that lets the sandbox use the same memory pages accessed by the host. - Integrated scheduler—the host OS treats the visual processors of the sandbox like process threads. This means that the host OS manages Windows Sandbox like a process and not like a traditional VM. The integrate scheduler ensures that the base OS prioritizes the operations of the host over other processes. This makes resource allocation more efficient compared to a traditional VM, where the host doesn’t have visibility to the guest. - Graphics—Windows Sandbox uses hardware-accelerated rendering, for GPUs with WDDM version 2.6 and higher, to improve the performance and responsiveness of applications. In addition, Sandbox dynamically allocates graphic resources across the host and environments. Windows Sandbox Architecture Dynamically Generated Image Instead of using separate copies of Windows when booting the sandbox, Windows Sandbox dynamically generates pointers to different operating system images. The majority of OS files are immutable. This means that files can be shared with the sandbox environment. However, several OS files cannot be shared, and in this case the sandbox image creates clean copies of these files. Together—the shared immutable files and the copies of the mutable files—create a complete image, used to boot a sandbox environment. Before the installation of the environment, the image is packaged and stored as a compressed file. Once installed, the image takes up approximately 500 MB of disk space. VMs usually use static allocation to apportion host memory. This means that traditional VMs are limited—once resource needs change, there are few mechanisms that enable you to scale. A Windows Sandbox, on the other hand, offers more flexibility. Windows Sandbox leverages containers to enable collaboration with the host, which can then dynamically determine how to allocate host resources. The goal is to supply hosts with resources when it is under memory pressure. In this case, the host can reclaim memory from a container. A “direct map” technology enables the image and the host to share the same physical memory pages. This technology ensures that the image and host use less memory without compromising host secrets. Integrated Kernel Scheduler Traditionally, the Microsoft hypervisor controls the scheduling of any virtual processor running in the VM. Windows Sandbox leverages an integrated scheduler that lets the host scheduler specify when the sandbox environment gets central processing unit (CPU) cycles. This process lets the Sandbox schedule virtual processors like host threads, and prioritize the most important jobs regardless of where they are performed. WDDM GPU Virtualization To ensure optimal performance and responsiveness, Windows Sandbox leverages hardware-accelerated rendering. This is especially useful for graphic-intensive workloads. Sandbox uses DirectX and Windows Display Driver Model (WDDM), which lets sandbox-based programs compete for GPU resources with any application running on the host. To use this feature you need a GPU, and graphics drivers supporting WDDM 2.5+. Otherwise, applications will be rendered based on the CPU using Windows Advanced Rasterization Platform (WARP), without leveraging GPU resources. Windows Sandbox is always aware of the battery state of the host. This enables Sandbox to continuously optimize power consumption. Battery pass-through processes are critical for laptops, which heavily rely on battery life. Windows Sandbox Configuration Windows Sandbox provides simple configuration files that let you customize ten parameters per sandbox environment. This feature supports Windows 10 build 18342 or newer versions. A Windows Sandbox configuration file can only be formatted as XML. The .wsb file extension associates configuration files with Sandbox. Here are the ten customizations you can achieve with a Windows Sandbox configuration file: - Virtualized GPU (vGPU)—lets you enable or disable the vGPU. Note that when you disable vGPU, the sandbox starts using WARP. - Networking—lets you enable or disable the sandbox’s network access. - Mapped folders—lets you share host folders with write or read permissions. However, do this with caution because exposing host directories might let malware perform unauthorized actions on the data and applications. - Logon command—executed when Sandbox starts. - Audio input—lets you share the microphone input of the host with the sandbox. - Video input—lets you share the webcam input of the host with the sandbox. - Protected client—adds extended security measures on the remote desktop protocol (RDP) session. - Printer redirection—lets you share host printers with the sandbox. - Clipboard redirection—lets you share the host clipboard with a sandbox environment. This configuration enables you to paste text and files between host and sandbox. - Memory in MB—lets you define the amount of required memory per sandbox, in megabytes.
OPCFW_CODE
Python packages (numpy/pandas/etc) in Visual Studio 2017 on Windows I've just installed Visual Studio Community with the workloads for Python and Data Science. I create a new Regression project from the Python\Machine Learning template. The first few lines are: from pandas import read_table import numpy as np import matplotlib.pyplot as plt First I get the errors: No module named xxx or Missing required dependencies [xxx], for pandas or numpy, or scikitlearn or scipy. I would have expected these to be installed as part of the Visual Studio workloads, and indeed they seem to be in the Anaconda3\Lib\sitpackages folder, if that's where they should be. But I tried installing them anyway from the Python Environments window in VS. If I'm lucky, then I get past the above error to this one: Importing the multiarray numpy extension module failed.. Anyone got any pointers for setting this up? Possible duplicate of Using NumPy in Visual Studio This is how I got it to work: Right click on "Python Environments" on the solution explorer window. Select Add/Remove python environments and then pick an environment that has the right packages selected or add packages as needed. Anaconda needs to add for pandas,numpy, and pyplot. Add Anaconda on right click on Python Environment and add the Anaconda. I just went through this pain the other day, on 64-bit Windows 7 with VS 2017 Community. To get the regression example working I had to upgrade Python to version 3.6.1, as the pip-installed version of numpy (1.13.1) doesn't work with 3.6.0. In short, I downloaded and ran the Windows 64-bit installer for Python 3.6.1 direct from python.org, then (as you described above) from the VS Python Environments window installed matplotlib (2.0.2) numpy (1.13.1) and pandas (0.20.3). After that, all the imports worked. (NB it takes a while for the VS intellisense feature to get up to speed with the imports.) On my machine pandas, numpy and matplotlib sit in C:\Program Files\Python36\Lib\site-packages Hope this may help. Thanks. I made some progress with 3.6.1, but then I needed to install scipy, and when I tried installing that (via VS), I got the error no lapack/blas resources found. I've tried pip installing the wheel (http://www.lfd.uci.edu/~gohlke/pythonlibs/#scipy), but still no luck. The journey continues... This seems to have got it working: I installed the latest version of Anaconda (after uninstalling the version that came with the Python VS workload). Then I installed the numpy, scipy and scikit wheels from this link. Importantly, I installed them using the Anaconda console (Start -> Anaconda Prompt [Run as Administrator]), as I had another system installation of Python, that I had been installing the wheels/packages into previously. That seemed to do the trick, after restarting Visual Studio. The latest version of Anaconda is not necessary, just make sure you're installing the wheels into the right place.
STACK_EXCHANGE
Things change. You may need to move an event to another date or another sub-calendar. There are multiple ways to do so easily on Teamup. Move events to another date and time Option 1: Drag and drop in a grid view In any of the grid views (Day, Multi-Day, Scheduler, Week, Multi-Week, Month, Timeline, Year), simply drag the event you want to move and drop it to the date you want. If possible, choose a view with the date range covering both the old date/time and the new date/time of the event you want to move. For example, use the Day or Scheduler or Week view to move an event from 9AM Monday to 3PM Wednesday. Or use the year view if you need to move an event from February to November. Option 2: Right-click to open the context menu Right-click an event and select Move to from the context menu, then select the date you want to move the event to. Option 3: Edit the event Click the event to open it in the event editor where you can change the date and time as needed. For an hour-specific event, sometimes it’s quick to just start typing the time then select from the smart drop-down list of options. Move events to another sub-calendar Option 1: Drag and drop in scheduler view Scheduler view displays events on one day with one column for each visible sub-calendar. - Drag an event from one column to another: dragging an event from Column A to Column C would change the sub-calendar assigned to the event from sub-calendar A to sub-calendar C. While you can’t change the date, you can change the time by dropping it to a different time slot. Option 2: Drag and drop in timeline view Timeline view displays events on a date range up to 30 weeks with one row for each visible sub-calendar. - Drag an event from one row to another: dragging an event from Row A to Row C would change the sub-calendar assigned to the event from sub-calendar A to sub-calendar C. You can move an event to any of the visible date and time. Make sure to adjust the date range and zoom level as well as scroll vertically or horizontally to cover your start and end date/time. Scroll vertically or toggle sub-calendars to make sure the sub-calendars you need are visible. Option 3: Edit the event Open the event and select a different sub-calendar or multiple sub-calendars. If there are many sub-calendars on the drop-down list, simply start typing a word you know that is in the name of the sub-calendar. The smart filtering function shortens the list as you type so you can select the one you need quickly. - How to move or copy events - How to duplicate events - How to use fewer keystrokes when entering calendar data
OPCFW_CODE
Unable to start the docker image in google cloud run I just used the gotenberg/gotenberg:8-cloudrun image and started it: _____ __ __ / ___/__ / /____ ___ / / ___ _______ _ / (_ / _ \/ __/ -_) _ \/ _ \/ -_) __/ _ '/ \___/\___/\__/\__/_//_/_.__/\__/_/ \_, / /___/ A Docker-powered stateless API for PDF files. Version: 8.9.2 ------------------------------------------------------- [SYSTEM] modules: api chromium exiftool libreoffice libreoffice-api libreoffice-pdfengine logging pdfcpu pdfengines pdftk prometheus qpdf webhook [SYSTEM] chromium: Chromium ready to start [SYSTEM] libreoffice-api: LibreOffice ready to start [SYSTEM] prometheus: collecting metrics [SYSTEM] pdfengines: exiftool libreoffice-pdfengine pdfcpu pdftk qpdf [SYSTEM] api: server listening on port 3000 Default STARTUP TCP probe failed 1 time consecutively for container "gotenberg-1" on port 8080. The instance was not started. Listening for requests on the correct port (services): By default, requests are sent to 8080, but you can configure Cloud Run to send requests to the port of your choice. Cloud Run injects the PORT environment variable into the ingress container. I would recommend to listen to the PORT environment variable by default. You may setup this behavior with the flag --api-port-from-env=PORT. But you're right, it's not clear in the documentation and I may setup default flags. Thanks for the update. Using following command i was able to boot up the instance: gcloud run deploy gotenberg-prod --image=gotenberg/gotenberg:8-cloudrun --args=gotenberg --args="--api-port-from-env=PORT" If --args=gotenberg is not provided i will run into [FATAL tini (2)] exec --api-port-from-env=PORT failed: No such file or directory @gulien If you update the documentation I would suggest to use --memory=1Gi for a better experience, because the default is 512Mi, which caused some issues. Here the metric of our service running in production: I was running with default first, then I ran into memory issues. I increased it to 1Gi since then the memory usage is 55% on average, sometimes peaks to 60%. And thank you for this smooth plug and play experience. We replaced a chrome headless using CDP with gotenberg, and it runs so smooth Thanks for the heads up @Fank! The 512 Mo requirement seems to really depend on the type of infrastructure. I'll add it to the documentation 👍 And thank you for this smooth plug and play experience. We replaced a chrome headless using CDP with gotenberg, and it runs so smooth Glad you appreciate, and thanks for the feedback! Most often than not, I have no idea how users are experiencing with Gotenberg 😄 @Fank I have improved the documentation regarding Cloud Run based on your feedbacks 😉 I think its best to let the users add the flags themselves instead of putting default flags in the image. It's less magic, but it eases the process of adding more flags if needed (e.g., not overriding default ones with the risk that their containers do not work anymore).
GITHUB_ARCHIVE
What is the sorting order of the return value by with_fileglob? I have read the Ansible document but I can't find out the sorting order of the result of the with_fileglob. For example: I have a task to print out a list of items that under the folder chapters. - name: fileglob test debug: msg: - "Hello {{ item }}" with_fileglob: - ./books/chapters/* My test data: $ ls books/chapters/ -rw-r--r-- Jan 25 20:37 0.json -rw-r--r-- Jan 25 20:37 1.json -rw-r--r-- Jan 25 20:37 2.json -rw-r--r-- Jan 25 20:37 A.json -rw-r--r-- Jan 25 20:38 B.json -rw-r--r-- Jan 25 20:38 Z.json -rw-r--r-- Jan 25 20:38 a.json -rw-r--r-- Jan 25 20:38 b.json -rw-r--r-- Jan 25 20:37 chapter.json -rw-r--r-- Jan 24 16:24 chapter1.json -rw-r--r-- Jan 24 16:24 chapter2.json -rw-r--r-- Jan 25 20:38 z.json I got the same order every time I run my playbook. Do you have any idea how this order is made up? TASK [test : fileglob test] ok: [<IP_ADDRESS>] => (item=/ansible/./books/chapters/B.json) => { "msg": ["Hello /ansible/./books/chapters/B.json"] } ok: [<IP_ADDRESS>] => (item=/ansible/./books/chapters/0.json) => { "msg": ["Hello /ansible/./books/chapters/0.json"] } ok: [<IP_ADDRESS>] => (item=/ansible/./books/chapters/chapter2.json) => { "msg": ["Hello /ansible/./books/chapters/chapter2.json"] } ok: [<IP_ADDRESS>] => (item=/ansible/./books/chapters/chapter1.json) => { "msg": ["Hello /ansible/./books/chapters/chapter1.json"] } ok: [<IP_ADDRESS>] => (item=/ansible/./books/chapters/1.json) => { "msg": ["Hello /ansible/./books/chapters/1.json"] } ok: [<IP_ADDRESS>] => (item=/ansible/./books/chapters/b.json) => { "msg": ["Hello /ansible/./books/chapters/b.json"] } ok: [<IP_ADDRESS>] => (item=/ansible/./books/chapters/Z.json) => { "msg": ["Hello /ansible/./books/chapters/Z.json"] } ok: [<IP_ADDRESS>] => (item=/ansible/./books/chapters/A.json) => { "msg": ["Hello /ansible/./books/chapters/A.json"] } ok: [<IP_ADDRESS>] => (item=/ansible/./books/chapters/z.json) => { "msg": ["Hello /ansible/./books/chapters/z.json"] } ok: [<IP_ADDRESS>] => (item=/ansible/./books/chapters/2.json) => { "msg": ["Hello /ansible/./books/chapters/2.json"] } ok: [<IP_ADDRESS>] => (item=/ansible/./books/chapters/a.json) => { "msg": ["Hello /ansible/./books/chapters/a.json"] } ok: [<IP_ADDRESS>] => (item=/ansible/./books/chapters/chapter.json) => { "msg": ["Hello /ansible/./books/chapters/chapter.json"] } Since Ansible is a Python application, this is probably related: https://stackoverflow.com/questions/6773584/how-is-pythons-glob-glob-ordered So, as pointed by the accepted answer, you could compare Ansible's sorting to the result of ls -U In case a sorted file list is required, get sorted list of folders with Ansible. Thanks a lot for your comments, it helps a lots. According to the man page of ls, option -U means => do not sort; list entries in directory order. So it depends on the file system implementation to decide how to list entries in a directory.
STACK_EXCHANGE
Initially, I was skeptical about the Kotlin Multiplatform Mobile talk because it sounded like I would hear only about the basics. But the reality turned out to be a whole different story. This blog post is by Łukasz Kasprzyk, Android Developer from Tooploox, Kotlin enthusiast, and die-hard football fan. The 18th of May, 2022, will remain in my memory for a long time. Why is that, you ask? There are two reasons: the first and main reason is that on that day I went with my Android Teammates for our first offline conference – Kotlin Dev Day in Amsterdam. The second reason is that this conference took place at Ajax Amsterdam – Stadium Johan Cruijff Arena – for me, as a huge football fan, it was an awesome experience. After we landed in Amsterdam we went from the Airport to our hotel. And, as we saw, it was around 100 meters from the stadium, a very nice spot. The Conference doors opened at 8 am and the first talk was scheduled for 8:30, so we arrived punctually. We needed to go to the 4th floor via escalator, on each floor we could see some crucial moments of Ajax’s history on many banners: On the 4th floor, we went to queue for our badges and to get some details about the conference rooms, where they were, etc. When I realized that this conference was taking place in the Stadium, I was expecting that we would be inside the Stadium building and in some inner conference rooms, but when we started exploring I saw that the Pitch was open and we could see it, but when I entered the tribune I was stunned. The whole Pitch was open for us to see, but the best part was that The Pitch was an actual stage (one of three) called “The Pitch” and we were sitting on leather seats. In the main hall, there were booths for companies who were sponsoring the conference. “What should I use” instead of “why should I use it” and Kotlin Multiplatform Mobile (KMM) The intro and first keynote talk were on the Amsterdam Stage – “The Silver Bullet Syndrome Part 2 – Complexity Strikes Back!” By Hadi Hariri. Mister Hadi was very charismatic and made a lot of situational jokes during his talk. The talk was, as he described, the second part of two but I didn’t feel it was necessary to have seen the first to get the benefits of the second. He started with Web Development history through Gradle and KMM, starting from the development itself through build, deployment, and security. The main question and thesis of his talk were: What should I use instead of why should I use it. The main takeaway was: Complexity comes at a cost. The next talk took place on the third stage, called the Vienna Stage, and was about the topic I personally came to this conference to discuss: KMM. Piotr Prus – our countryman – hosted a talk called “Meet Kotlin Multiplatform Mobile (further referred to as KMM.)” Initially, I was skeptical about this talk because it sounded like I would hear only about the basics. But Piotr held a talk that I wasn’t expecting: about how they adopted KMM in their company and made it production available. After the talk, I approached him and we had a very interesting chat about our experience with KMM. Unfortunately, his talk was only 25 min long and he had materials for over 40 minutes but we exchanged our opinions and we told ourselves we’d catch each other later. Composing an API with Kotlin As the schedule was tight and I was already late I needed to run to the next and the best stage – The Pitch, where Marton Braun started giving the talk “Composing an API with Kotlin.” He started from a Compose introduction, going through Extension Functions, Modifiers, Naming Conventions for VALs (Read-only local variable) in Compose, Scopes inside Compose, Inline classes, and finally Coroutines in Compose. It was nice to go through Compose from an inside perspective. After this talk, we had a coffee break where organizers provided some hot beverages and sandwiches. With new energy, I went back on the Pitch to hear “Kotcha!” by Jeroen Rosenberg. This talk was actually something different, as after the introduction Jeroen held a Quiz. It was about Kotlin and some very catchy cases of usage. I took part and made the top 10, which pleased me. For first place, there was a price of a hoodie for speakers with the logo of the conference. I had some spare time before my next planned session began, so I stayed for a while on the Pitch to listen to “Hexagonal architecture with Kotlin,” by Jan Verhoeckx. I was there only for 10 minutes but the talk looked interesting, as Jan was talking about information leaking between layers and how to avoid it. Concurrency made easy with Kotlin Coroutines From there I went to hear more about Coroutines in “Concurrency made easy with Kotlin Coroutines” with Ricardo Lippolis. He started with what Coroutines are in general, and later went through how to suspend working functions. He went through Context and how to propagate errors, continuing with Dispatchers and Global scope usage, and finished off with runBlocking and how to use it. After this talk, there was a lunch break, where we could grab some hot meals and spend some time exploring the sponsors’ booths. Shoulders of Giants – Languages Kotlin learned from After the break we went back to the best place ever – yes the Pitch – where Andrey Breslav talked about the “Shoulders of Giant – Languages Kotlin learned from.” It was actually awesome to hear about how the language I’m using right now was designed directly by the creator! I heard there how the whole process looked, which languages were the inspiration for Kotlin with examples, and why given decisions were made (one of the best talks there). Later on, I moved on to the Vienna stage for the next two talks about the main topic of concern: KMM. The first one was “Introducing Kotlin Multiplatform in an existing project” by Marco Gomiero – where he explained how to make a KMM library by yourself, distribute it to Maven on Android or XFramework on iOS, and use it in an existing project. The second was called “Multiplatform success stories (and fuck ups)” with Liliia Abdulina (Lead QA on the KMM Team) – Liliia went through cases of 5 companies who used KMM from the beginning and showed all the pros and cons they provided, this was very useful for me personally as I’m introducing KMM in our company and what better way to learn than by the mistakes of others. After this, there was the third and final afternoon break with some snacks. Then, on the Vienna stage, Khaleel Freeman held a talk on “Adopting Jetpack Compose” all about the challenges he and his team faced when introducing Compose with the assumption of launching fast and iterating quickly. Plugin and Play Two final talks took place on The Pitch. The first one was “Plugin and play” with Simone de Gijt, in which she covered some of the libraries which could help us develop, like Kover, Ktlint, and Detekt – with live demos. An interesting takeaway for me was to use the Android Studio plugin with Ktlint for live checks. The final closing talk was from James Ward (Kotlin Product Manager at Google) with “James’ Top 5 Kotlin Things to Get Excited About in 2022.” James also made a small online live questionnaire about us – the participants – on what technologies we use, etc. He presented his top 5 for 2022: - Inner Loop Dev: Server-side - Incremental Compiler - Inner Dev Loop: Live Preview - Kotlin Multiplatform Unfortunately, there was no hot news or spoilers but he gave us some sneak peeks. The whole conference wrapped up with some networking. Kudos to the organizers for how it all was arranged, the place, the talks, the speakers, and everything around. If there will be the next edition I will definitely take part! Maybe next time, I will have something to tell about KMM.
OPCFW_CODE