text
stringlengths
454
608k
url
stringlengths
17
896
dump
stringclasses
91 values
source
stringclasses
1 value
word_count
int64
101
114k
flesch_reading_ease
float64
50
104
Today, we bring you a new report on the Maui Project progress. software from the KDE Community developed by the Nitrux team. This post contains some code snippets to give you an idea of how to use MauiKit. For more detailed documentation, get in touch with us or subscribe to the news feed to keep up to date with the upcoming tutorial. We are present on Twitter and Mastodon: This year we will have two talks on QtCon Brasil and Akademy discussing and sharing the experiences with the Maui Project; the discussions will be virtual, so anyone interested in knowing a bit more about this project can join us and ask any questions. QtCon Brasil – Convergence Akademy 2020 – Maui Project Updates Maui Release 1.1.0 and 1.1.1 Since the last weekly posts reporting on the stable releases of the Maui Project, the Nitrux team has been organizing, planning, and working on the next stable release of the MauiKit framework and the set of applications. This post covers some of the updates and bug fixes coming to the 1.2.0 stable release due in October. The Maui team is aiming for a six months release cycle to bring the new features, updates, and fixes more quickly. MauiKit and Maui apps 1.1.1 For getting the latest stable release packages to check the official Maui Project webpage RoadMap to 1.2 and Invent The next stable release is planned to be out by October, the new features and issues are being organized and tracked under Invent, so if you want to report an issue or make a feature request, you can follow the links to open a ticket. The Maui Project is now organized into the /maui/ namespace under KDE Invent. - Mauikit— - Index—. - Pix—. - VVave—. - Buho—. - Nota—. - Station—. Maui @ KDE Review For the upcoming release, we are going through the KDE review process for Index, VVave, Pix, and MauiKit. The next batch of applications to follow is Buho, Nota, and Station. For more information about this process, you can check. Documentation The documentation of the MauiKit framework is a work-in-progress; right now, the progress is around 70% done, missing inline documentation to QML files. For the next stable release, we expect it to be ready and up on the webpage. For this labor, we are making use of the Doxygen tool, and once the documentation is done, the next step is to generate the HTML files to be added to the MauiKit documentation webpage at Gitbook: And then update the page with more up-to-date information on the Human Interface Guidelines, the references, and examples. If you are interested in contributing to the Maui Project, this would be an excellent place to start; you would help to document while getting to know the framework components. Translations The project now makes usage of KDE Frameworks i18n instead of the Qt built-in system is used across all the supported platforms. What’s new We have been working on polishing the look and feel of our applications and making it easier to achieve a cohesive look from the MauiKit framework built-in components without too much hassle from the application side, with new and updated controls. We are also working on bringing convergent elements that can adapt UI and UX wise to different screens sizes, forms factors, inputs, and platforms, putting convergence to the front line. We have bumped all the projects to make usage of Qt 5.15 on all supported platforms: Linux, Windows, Android, and macOS, updating the source code by fixing warnings and issues with deprecated lines. MauiKit - MauiKit now makes usage of the KDE formatting style and has an autoformatting tool to keep the code clean. Fixes - The accounts dialog for handling online accounts has been fixed and now shows up correctly. - AppViews issue making the viewport buttons non-auto exclusive has been fixed. Tagging The Tagging utility has been cleaned up, and it is getting ready for a big performance boost. This utility allows us to tag files and share the tags across all the applications. Right now, Nota, Index, Pix, and VVave make use of it, so any tag added on one of then can be browsed in another application. - Besides adding tags to files, it had support for tagging abstract items, this support will be dropped, and it will focus only on tagging files. Dialogs The dialog popups have been clean up and now have a much more neat look. The overlay close button has been moved inside the popup, and the default action buttons style is used more widely. Some of the API changes are: - Different actions can be added using the actions property. Such actions are styled as the default buttons. - By default, the dialog is managed by a column layout, so any item placed on it should make use of the Layout attached properties for positioning its children. The contents are scrollable by default, so when adding items to the dialog, they should have an implicitHeight or a Layout.prefferredHeight, to preserve the scrollable behavior. - To override the default dialog scrollable layout, there is a stack property that places the content on top of the layout implementation. The stack fills the whole available space of the dialog and is also handled by a column layout. - A dialog can be marked as persistent or not; if it is persistent, then the dialog only gets dismissed using an ESC shortcut, or by manually closing it; otherwise, the dialog can be closed by pressing outside of it. - Settings Dialog - Warning Dialog - About Dialog SettingsSection, SettingTemplate and SettingsDialog The settings dialog has received some UI fixes, and the sections represented by SettingsSection now make use of a more uniform layout. - The SettingsSection now makes usage of the LisItemTemplate for displaying the information, so besides adding a title and a description to a section, one can manipulate the display information using the template alias property, by adding, for example, an icon or image, or another column of labels. - Index Settings - Vvave Settings - Nota Settings ListItemTemplate This template represents a row layout containing an icon or image and two columns of labels. This template can be expanded by appending more items to the row. This template is widely used in many other components, like the SettingsSection, the AboutDialog, the TabButton, and as delegates in list views, etc. It has been cleaned up and now is more efficient to be used as delegates for models. ItemTemplate Drag and drop support for touch devices has been added; before, it only worked for desktops with mouse input. Drag and drop behavior has this workflow on touch inputs: - long press + release = right click. - long press + drag = drag. An animation is triggered once the drag is ready. The following video shows the workflow on a touch screen under Linux. GridView and ListBrowser Lasso selection now also works for touch inputs and has been added to the MauiKit GridView and ListBrowser components, so any app previously using such components now can also have lasso selection ready for touch screens. The workflow is similar to drag and drop; a long press and drag on an empty area will activate the lasso selection. FilePreviewer - Now for audio and video previews, the progress bar is displayed correctly. - The information list is now scrollable and makes consistent usage of the ListItemTemplate for the delegates. - The previewer allows to swipe to the next and previous items; this has been made more discoverable by adding navigation arrows to the header bar. alt="" width="1024" height="808" sizes="(max-width: 1024px) 100vw, 1024px" data-> TagsDialog - Now the dialog has a filter text field for filtering long lists of tags. - The Tagging backend now makes usage fo the Maui templated models, which allows quick filtering and sorting. - The Tagging component will drop support for tagging abstract items and will instead focus on tagging local files. alt="" width="754" height="725" sizes="(max-width: 754px) 100vw, 754px" data-> Doodle - The doodle component now has support for changing the brush type, shape, opacity, size, and color. The doodle is being tested to taking notes on PDF files for the Maui Library app, which is still under development. alt="" width="1024" height="733" sizes="(max-width: 1024px) 100vw, 1024px" data-> Share and OpenWith Dialog - Now both make usage of the MauiKit ListItemTemplate and have a more consistent look. alt="" width="2072" height="987" sizes="(max-width: 2072px) 100vw, 2072px" data-> SelectionBar - Now displays by default the action’s text beside the icon when the width available is wide enough. alt="" width="1024" height="600" sizes="(max-width: 1024px) 100vw, 1024px" data-> AboutDialog - Now is styled more consistently and takes full advantage of the new MauiKit scrollable Dialog layout. alt="" width="1024" height="495" sizes="(max-width: 1024px) 100vw, 1024px" data-> Maui Apps For the upcoming release, the stable apps continue to improve, and few new ones will enter a beta state. - Stable: Index, VVave, Pix, Nota, Buho, Station - Beta: Communicator, Library, Cinema, NX Software Center - Alpha: Sol (Web browser) - Beta - Stable Index - Under process for KDE Review. - Does not longer lists all the tags in the places sidebar. Instead, tags are accessible through the tags place-item. - The grid view now has better spacing between items for more accessible lasso selection and readability of file names. - The Index file manager now allows for installing APKs on Android. - Added shortcuts for quickly creating a new file or folder: Ctrl+N = new folder, Ctrl+Shift+N = new file. - Fix and unified the removing files dialog. - Optimized the destruction of splits and tabs. alt="" width="1024" height="616" sizes="(max-width: 1024px) 100vw, 1024px" data-> Vvave - Under process for KDE Review. - Quicker delegates. - Playlists now make use of the MauiKit Tagging component, so playlists are tagged audio files, so they also can be browsed in the file manager. - The Playlist view has been reworked to follow the style of the albums and artists’ views. - The albums and artists’ view now is more responsive: on constrained spaces, the grid becomes a list. - The artist artwork fetching is now back and uses the Spotify API, instead of LastFM. The Pulpo backend, which allows us to retrieve contextual information about the music files, has been cleanup and improved, updating the existing services like LastFM, Spotify, MusicBrainz, and genius. - Adding and removing sources now works as expected. - Cleaned up the database model, so if you experience issues, try removing the previous database file at ~/.local/share/vvave/collection.db. alt="" width="1024" height="652" sizes="(max-width: 1024px) 100vw, 1024px" data-> Pix - Under process for KDE Review. - Better grid view spacing when the image titles are visible. - The tags and folders view delegates have been improved and now display the tag or folder contents as a collage. - Add icon for macOS. - Added initial support for image metadata reading with exiv2. - New information dialog. - Initial support for basic image editing thanks to the work from - Fixed wrong links to the project web page and issues report page. - Folders view - Tags view - Info dialog Buho - The links feature to save web pages as bookmarks offline have been removed, and it is planned to move to the upcoming Maui Web browser. - Settings for managing the syncing of the notes and books have been added. - The UI has been cleanup. And a substantial visual design is to be unveiled. Nota - Added more actions to the selection bar, and the recent and document views now have a contextual menu to trigger relevant actions, like sharing, exporting, removing, and tagging. - Unified the unsaved files dialog warning and now correctly gets triggered. - We optimized the destruction of splits and tabs. - Correctly install the Nota text editor icon on Linux. - Add the option to open a blank file by default on launch. - Split views can now be closed on demand. Contacts/Communicator - Renamed to Communicator. Cinema - Added views to browse the file system video files to the app. Sol - Initial work on the web browser. Initial UI layout Maui Style - Improve styling of the buttons and menus and menu items and scrollbars. - Better Menus padding - Better styling support for Windows - A fix is on the way to use this style by default on all the platforms and keep a consistent look, no matter the platform. Whats’ Next Enable syncing of data and images on Pix, files in the Index file manager, and of contacts in Communicator. Make Cinema and Surf initial beta release. Our plans for 2021 are: - Move fully to CMake. - Add IOS support. - Data syncing using NextCloud. - Performance boost. - Cohesive look and feel across all platforms. - Move beta apps to stable. - More documentation.
https://nxos.org/weekly-summaries/maui-weekly-report-5/
CC-MAIN-2020-50
refinedweb
2,111
61.67
. TLDR: Hopsworks is the Data-Intensive AI platform with a Feature Store for building complete end-to-end machine learning pipelines. This tutorial will show an overview of how to work with Jupyter on the platform and train a state-of-the-art ML model using the fastai python library.. Jupyter provides an integrated development environment (IDE) allowing users to seamlessly mix code with visualization and comments, making it not just handy for prototyping, but also for visualization and educational purposes. In recent years, it has become a favourite tool employed by the many, used for data wrangling, data mining, statistical modeling, visualization, machine learning and the notebooks themselves scheduled to run in production at tech leaders such as Netflix. The Hopsworks platform ships with Jupyter as one of the integrated components. Having already been pre installed and configured to work with Spark and PySpark kernels, in addition to the Python kernel, getting started with writing your notebooks and scheduling them in production on a Hopsworks cluster is straightforward. The Hopsworks installation also includes a Miniconda environment with the most popular libraries you can find in a data scientists toolkit, such as TensorFlow, PyTorch and scikit-learn. In this tutorial we will describe how to work with a Jupyter notebook in the Hopsworks platform. As an example, we will demonstrate how to install fastai, a library that provides high-level components for building and training ML models to get state-of-the-art deep learning results, clone a set of notebooks from a git repository and train a model on a P100 GPU. To follow this tutorial you should have a Hopsworks instance running on. You can register for free, without providing credit card information, and receive USD 300 worth of free credits to get started. The only thing you need to do is to connect your cloud account. The tutorial requires that you have configured the Hopsworks cluster with AKS or EKS depending on your cloud provider, and that the Kubernetes nodes are equipped with GPUs. The first step in the tutorial is to install the fastai library. To get started, navigate to the Python service to install the fastai library from PyPi as shown in the example below. There are many different approaches to installing the library but in this instance we install the latest version of the fastai and nbdev package from PyPi, required to run the first notebook in the fastai course. In the Jupyter service page there are three different modes that can be configured. Firstly, there is a Python tab, in which configuration for the Python kernel, such as Memory/Cores/GPUs is set and optionally a git repository can also be configured that should be cloned when Jupyter starts up. This is the kernel that we are going to use in this tutorial. Secondly, in the Experiments tab the PySpark kernel is configured. If you want to enable all the features in the plattform regarding, experiment tracking, hyperparameter optimization, distributed training. See HopsML for more information on the Machine Learning pipeline. Thirdly, for general purpose notebooks, select the Spark tab and run with Static or Dynamic Spark Executors on Spark or PySpark. The image below shows the configuration options set for the Python kernel. As working with larger ML models can be memory intensive make sure you are configuring the Memory for the kernel to be at least 8GB, then set GPUs to 1 to allocate a GPU that should be accessible for the kernel and set the git configuration to clone the fastai git repository to get access to the notebooks. Once the configuration has been entered for the Python kernel, press the button on the top that says JupyterLab to start the Notebook Server. Keep in mind that it may take some time as resources need to be allocated for the Notebook Server and to clone the git repository. The image below demonstrates the process of starting Jupyter. The Jupyter Notebook Server will now have been allocated a GPU which you can use in the Python kernel. To check the type and specifications of the GPU, open a new terminal inside Jupyter and run nvidia-smi. We can see that in this instance we have access to a P100 NVIDIA GPU. Now you’re all set to start following the course material that fastai provides. To make sure the GPU is being utilized you can leave a terminal window open and run nvidia-smi -l 1, which will print out the GPU utilization every second while you are running the training in the notebook. In the example below, the first notebook lesson1-pets.ipynb in the fastai course is executed. Hopsworks is available both on AWS and Azure as a managed platform. Visit hopsworks.ai to try it out. TLDR; A new class of hierarchical distributed file system with scaleout metadata has taken over at Google, Facebook, and Microsoft that provides a single centralized file system that manages the data for an entire data center, scaling to Exabytes in size. The common architectural feature of these systems is scaleout metadata, so we call them scaleout metadata file systems. Scaleout metadata file systems belie the myth that hierarchical distributed file systems do not scale, so you have to redesign your applications to work with object stores, and their weaker semantics. We have built a scaleout metadata file system, HopsFS, that is open-source, but its primary use case is not Exabyte storage, rather customizable consistent metadata for the Hopsworks Feature Store. Scaleout metadata is also the key technology behind Snowflake, but here we stick to file systems. Google, Microsoft, and Facebook have been pushing out the state-of-the-art in scalable systems research in the last 15 years. Google has presented systems like MapReduce, GFS, Borg, and Spanner. Microsoft introduced CosmosDB, Azure Blob Storage and federated YARN. Facebook has provided Hive, Haystack, and F4 systems. All of these companies have huge amounts of data (Exabytes) under management, and need to efficiently, securely, and durably store that data in data centers. So, why not unify all storage systems within a single data center to more efficiently manage all of its data? That’s what Google and Facebook have done with Colossus and Tectonic, respectively. The other two scaleout metadata file systems covered here, ADLSv2 and HopsFS, were motivated by similar scalability challenges but, although they could be, they are typically not deployed as data center scale file systems, just as scalable file systems for analytics and machine learning. First generation hierarchical distributed file systems (like HDFS) were not scalable enough in the cloud, motivating the move to object stores (like S3) as the cloud-native storage service of choice. However, the move to object stores is not without costs. Many applications need to be rewritten as the stronger POSIX-like behaviour of hierarchical file systems (atomic move/rename, consistent read-after-writes) has been replaced by weakened guarantees in object stores. In particular, data analytics frameworks traditionally rely on atomic rename to provide atomic update guarantees for updating columnar data stores. The lack of atomic rename in S3 has been one of the motivations for the introduction of new columnar store frameworks for analytics and ML, such as Delta Lake, Apache Hudi, and Apache Iceberg that provide ACID guarantees for updating tables over object stores. These frameworks add metadata to files in the object store to provide the ACID guarantees, but their performance lags behind systems built on mutable scaleout metadata, underpinning columnar data stores such as Snowflake and BigQuery. Hierarchical file systems typically provide well-defined behaviour (a POSIX API) for how a client can securely create, read, write, modify, delete, organize, and find files. The data in such file systems is stored in files as blocks or extents. A file is divided up into blocks, and distributed file systems spread and replicate these blocks over many servers for improved performance (you can read many blocks in parallel from different block servers) and high availability (failure of a block server does not cause the file system to go down, as replicas of that block are still available on other block servers). However, the data about what files, directories, blocks, and file system permissions are in the system have historically been stored in a single server called the metaserver or namenode. We call this data about the file system objects metadata. In file systems like HDFS, the namenode stores its metadata in-memory to improve both latency and throughput in the number of metadata operations it can support per second. Example metadata operations are: create a directory, move or rename a file or directory, change file permissions or ownership. Operations on files and some operations on directories (such as `rm -rf`) require both updates to metadata and to the blocks stored on the block servers. As the size of data under management by distributed file systems increased, it was quickly discovered that metadata servers became a bottleneck. For example, HDFS could scale to, at a push, a Petabyte, but not handle more than 100K reads/sec and only a few thousand writes/sec. It has long been desired to re-architect distributed file systems to shard their metadata across many servers to enable them to support (1) larger volumes of metadata and (2) more operations/second. But it is a very hard problem. Read here about the contortions Uber applies to get its HDFS’ namenode to scale instead of re-designing a scaleout metadata layer from scratch. When sharding the state of the metadata server over many servers, you need to make decisions about how to do it. Google used its existing BigTable key-value store to store Colossus’ metadata. Facebook, similarly, chose the ZippyDB key-value store for Tectonic. Microsoft built their own Replicated State Library - Hekaton Ring Service (RSL-HK) to scale-out ADLS’ metadata. The RSL-HK ring architecture combines Paxos-based metadata with Hekaton (in-memory engine from SQL Server). HopsFS used NDBCluster (now RonDB) to scale out its metadata. The capabilities of these underlying storage engines are reflected in the semantics provided by the higher level file systems. For example, Tectonic and (probably) Colossus do not support atomic move of files from any directory to any other directory. Their key-value stores do not support agreement protocols across shards (only within a shard). So, at the file system level, you introduce an abstraction like a file system volume (Tectonic calls them tenants), and users then know they can perform atomic rename/move within that volume, but not across volumes. Google solves this problem at a higher layer for structured data with Spanner by implementing two-phase commit transactions to ensure consistency across shards. In contrast, RSL-HK Ring by Microsoft and RonDB by Logical Clocks support cross-shard transactions that enable both ADLSv2 and HopsFS to support atomic rename/move between any two paths in the file system. To put this in database terms, the consistency models provided by the scaleout metadata file systems are tightly coupled to the capabilities provided by the underlying metadata store. If the store does not support cross-partition transactions - consistent operations across multiple shards, you will not get strongly consistent cross-partition file system operations. For example, if the metadata store is a key-value store, where each shard typically maintains strongly consistent key-value data using Paxos. But Paxos do not compose - you cannot run Paxos between two shards that themselves maintain consistency using Paxos. In contrast, RonDB supports 2-phase commit (2PC) across shards, enabling strongly consistent metadata operations both within shards and across shards. Once a scaleout metadata storage layer is in place, stateless services can be used to provide access control and implement background maintenance tasks like maintaining the durability and availability of data, disk space balancing, and repairing blocks. We can see that Hadoop File System APIs are still popular, as they model the contents of a filesystem as a set of paths that are either directories, symbolic links, or files, but address the challenge of scalability by restricting the POSIX-like semantics with append-only writers (there is no support for writing at random offsets in files). With a scaleout metadata file system, you can have many more concurrent clients, leading to the well-known problem of hotspots - overloaded read/writes that are handled by a single shard. For example, Tectonic, ADLS, and HopsFS all ensure that objects (files/directories) in a directory are co-located in the same shard for efficient low latency directory listing operations. However, if the directory contains millions of files, such an operation can overload the threads responsible for handling operations on that shard. HopsFS and Tectonic randomly spread independent directories across shards to prevent hotspots, while ADLS supports range partitioning. Another well-known technique from object-stores, like S3, is used by ADLS - paged enumeration of directories. This requires clients to perform many iterative operations to list all objects in a large directory, but enables client quotas to kick in throttle and clients before they overload a shard. Blocks are a logical unit of storage that hides the complexity of raw data storage and durability from the upper layers of the filesystem. In earlier generations of distributed file systems, such as HDFS, full replicas of blocks were stored at different data nodes to ensure high availability of file blocks. However, object stores and scaleout metadata file systems have eschewed full replicas and instead ensure high availability of file blocks using Reed-Solomon (RS) Coding. RS-encoded blocks provide higher availability guarantees and lower storage overhead, but with the disadvantage of more CPU and network bandwidth required to recover lost blocks. Given the continued growth in network bandwidth and available CPU cycles, this tradeoff is favorable. There is a general trend towards smaller blocks, enabling faster recovery of failed blocks and faster availability of blocks to readers, but the cost is the need for more metadata storage capacity and higher available throughput in ops/sec at the metadata service. Both Colossus and Tectonic provide rich clients that can customize the types of blocks and RS coding needs, depending on the workload needed by the client. For example, blob storage requires frequent appends and is handled differently from writing tabular data. Although neither Tectonic or Colossus discussed the block sizes they support, it is safe to assume that they support blocks all the way down to a few MBs in size. ADLSv2 stores its block data in Azure Blob Storage (ABS). HopsFS, the managed service on Hopsworks, also stores its blocks as objects in object storage (S3 on AWS and ABS on Azure). On premises, HopsFS stores its blocks replicated as fixed-size files replicated across data nodes. When your ambition is to store data for the entire data center, you need to support many different storage technologies with different cost/storage trade-offs. As such, Colossus, HopsFS, ADLSv2, and Tectonic all support storing data in tiers: magnetic disks, SSDs, NVMe, in-memory. Among these systems, HopsFS has unique support for storing small files in the scaleout metadata layer for higher performance operations on small files. HopsFS takes a different approach to using scaleout metadata. Instead of using it, primarily, to build exascale file systems, it, and even enables polyglot storage and querying of metadata in both RonDB (SQL) and Elasticsearch (free-text search). This simplifies operations and provides new free-text search capabilities compared to existing ML metastores (TFX, MLFlow). The same approach enabled us to be the first to release an open-source Feature Store for ML based on Hopsworks. When building our Feature Store, instead of needing to build a separate artifact store (file system) and metastore (database) and write complex protocols to ensure the consistency of both stores, we had a single consistent storage system, where artifacts can be easily extended with consistent metadata that can be queried using free-text search. Features can be annotated with statistics and tags in metadata, training data Some important requirements for extensible file system metadata are that it: Even though we first heard about Colossus’ architecture in 2009 and its name in 2012, Google has been surprisingly secretive about the lowest layer of their scalable storage and compute architecture. However, after the release of Tectonic (coincidence?) in early 2021, Google released more details on Colossus in May 2021. Colossus’ metadata storage service is BigTable, which does not support cross-shard transactions. We assume this means that Colossus lacks atomic rename, a hole that is filled for tabular data (at least) by Spanner, which supports cross-shard transactions. In Colossus, file system clients connect to curators to perform metadata operations, who, in turn, talk to BigTable. Custodians perform file system maintenance operations, and “D” services provide block storage services, where clients read/write blocks directly from/to “D” servers. Different clients of Colossus can store their data on different volumes (metadata shards). Atomic rename is possible within a volume, but not across volumes. Tectonic was first announced as a file system at USENIX Fast 2021, and it unifies Facebook’s previous storage services (federated HDFS, Haystack, and others) to provide a data-center scale file system. Similar to Colossus, Tectonic stores its metadata in a key-value store, but in this case in ZippyDB. As ZippyDB lacks cross-partition transactions, cross-namespace file system operations are not supported. That is, you cannot atomically move a file from one volume (metadata shard) to another. Often, such operations are not needed, as all the data for a given service can fit in a single namespace, and there are no file system operations between different applications. There are separate stateless services to manage the name space, blocks, files, and file system maintenance operations. Azure Data Lake Storage (ADLS) was first announced at Sigmod 2017 and it supports Hadoop distributed file system (HDFS) and Cosmos APIs. It has since been redesigned as Azure Data Lake Gen 2 (ADLSv2) that provides multi-protocol support to the same data using the Hadoop File System API, the Azure Data Lake Storage API and the Azure Blob storage API. Unlike Colossus and Tectonic, it is available for use as a service - but only on Azure. The most recent information about ADLS’ architecture is the original paper describing ADLS from 2017 - no architecture has been published yet for ADLSv2. However, ADLS used RSL-HK to store metadata and it has a key-value store (ring) with shards using state machine replication (Paxos) and with transactions across shards, al in an in-memory engine (“It implements a novel combination of Paxos and a new transactional in-memory block data management design.”). HopsFS was first announced at USENIX Fast 2017 and provides a HDFS API. HopsFS is a rewrite of HDFS and it supports multiple stateless namenode (metadata servers), where the leader performs file system maintenance operations, and a pluggable metadata storage layer. HopsFS provides a DAL API to support different metadata storage engines. Currently the default engine for HopsFS is RonDB (a fork of NDB Cluster, the storage engine for MySQL Cluster), a scalable key-value store with SQL capabilities. RonDB can scale to handle hundreds of millions of transactional reads per second and 10s of millions of transactional writes per second and it provides both a native key-value API and a SQL API via a MySQL Server. RonDB also provides a CDC (change-data-capture) API to allow us to automatically replicate changes in metadata to Elasticsearch, providing a free-text search API to HopsFS’ metadata (including its extended metadata). Metadata can be queried using any of the 3 APIs: the native key-value API for RoNDB, the SQL API, or using free-text search in Elasticsearch. HopsFS scales the Namespace Layer with RonDB and Stateless Namenodes, while the block layer is cloud object storage. The journey from a stronger POSIX-like file system to a weaker object storage paradigm and back again has parallels in the journey that databases have made in recent years. Databases made the transition from strongly consistent single-host systems (relational databases) to highly available (HA), eventually consistent distributed systems (NoSQL systems) to handle the massive increases in data managed by databases. However, NoSQL is just too hard for developers, and databases are returning to strongly consistent (but now scalable) NewSQL systems, with databases such as Spanner, CockroachDB, SingleSQL, and NDB Cluster. The scaleout metadata file systems, introduce here, show that distributed hierarchical file systems are completing a similar journey, going from strongly consistent POSIX-compliant file systems to object stores (with their weaker consistency models), and back to distributed hierarchical file systems that are have solved the scalability problem by redesigning the file system around a mutable, scaleout metadata service. T. TLDR: Hopsworks is the Data-Intensive AI platform with a Feature Store for building complete end-to-end machine learning pipelines. This tutorial will show an overview of how to install and manage Python libraries in the platform. Hopsworks provides a Python environment per project that is shared among all the users in the project. All common installation alternatives are supported such as using the pip and conda package managers, in addition to libraries packaged in a .whl or .egg file and those that reside on a git repository. Furthermore, the service includes automatic conflict/dependency detection, upgrade notifications for platform libraries and easily accessible installations logs. The Python ecosystem is huge and seemingly ever growing. In a similar fashion, the number of ways to install your favorite package also seems to increase. As such, a data analysis platform needs to support a great variety of these options. The Hopsworks installation ships with a Miniconda environment that comes preinstalled with the most popular libraries you can find in a data scientists toolkit, including TensorFlow, PyTorch and scikit-learn. The environment may be managed using the Hopsworks Python service to install or libraries which may then be used in Jupyter or the Jobs service in the platform. In this blog post we will describe how to install a python library, wherever it may reside, in the Hopsworks platform. As an example, this tutorial will demonstrate how to install the Hopsworks Feature Store client library, called hsfs. The library is Apache V2 licensed, available on github and published on PyPi. To follow this tutorial you should have an Hopsworks instance running on. You can register for free, without providing credit card information, and receive USD 4000 worth of free credits to get started. The only thing you need to do is to connect your cloud account. The first step to get started with the platform and install libraries in the python environment is to create a project and then navigate to the Python service. When a project is created, the python environment is also initialized for the project. An overview of all the libraries and their versions along with the package manager that was used to install them are listed under the Manage Environment tab. The simplest alternative to install the hsfs library is to enter the name as is done below and click Install as is shown in the example. The installation itself can then be tracked under the Ongoing Operations tab and when the installation is finished the library appears under the Manage Environment tab. If hsfs would also have been available on an Anaconda repo, which is currently not the case, we would need to specify the channel where it resides. Searching for libraries on Anaconda repos is accessed by setting Conda as the package location. If a versioned installation is desired to get the hsfs version compatible with a certain Hopsworks installation, the search functionality shows all the versions that have been published to PyPi in a dropdown. Simply pick a version and press Install. The example found below demonstrates this. Many popular Python libraries have a great variety of builds for different platforms, architectures and with different build flags enabled. As such it is also important to support directly installing a distribution. To install hsfs as a wheel requires that the .whl file was previously uploaded to a Dataset in the project. After that we need to select the Upload tab, which means that the library we want to install is contained in an uploaded file. Then click the Browse button to navigate in the file selector to the distribution file and click Install as the following example demonstrates. Installing libraries one by one can be tedious and error prone, in that case it may be easier to use a requirements.txt file to define all the dependencies. This makes it easier to move your existing Python environment to Hopsworks, instead of having to install each library one by one. The file may for example look like this. This dependency list defines that version 2.2.7 of hsfs should be installed, along with version 2.9.0 of imageio and the latest available release for mahotas. The file needs to be uploaded to a dataset as in the previous example and then selected in the UI. A great deal of python libraries are hosted on git repositories, this makes it especially handy to install a library during the development phase from a git repository. The source code for the hsfs package is as previously mentioned, hosted on a public github repository. Which means we only need to supply the URL to the repository and some optional query parameters. The subdirectory=python query parameter indicates that the setup.py file, which is needed to install the package, is in a subdirectory in the repository called python. After each installation or uninstall of a library, the environment is analyzed to detect libraries that may not work properly. As hsfs depends on several libraries, it is important to analyze the environment and see if there are any dependency issues. For example, pandas is a dependency of hsfs and if uninstalled, an alert will appear that informs the user that the library is missing. In the following example pandas is uninstalled to demonstrate that. When new releases are available for the hsfs library, a notification is shown to make it simple to upgrade to the latest version. Currently, only the hops and hsfs libraries are monitored for new releases as they are utility libraries used to interact with the platform services. By clicking the upgrade text, the library is upgraded to the recommended version automatically. Speaking from experience, Python library installations can fail for a seemingly endless number of reasons. As such, to find the cause, it is crucial to be able to access the installation logs in a simple way to find a meaningful error. In the following example, an incorrect version of hsfs is being installed, and the cause of failure can be found a few seconds later in the logs. Hopsworks is available both on AWS and Azure as a managed platform. Visit hopsworks.ai to try it out.. This blog introduces how RonDB handles automatic thread configuration. It is more technical and dives deeper under the surface of how RonDB operates. RonDB provides a configuration option, ThreadConfig, whereby the user can have full control over the assignment of threads to CPUs, how the CPU locking is to be performed and how the thread should be scheduled. However, for the absolute majority of users this is too advanced, thus the managed version of RonDB ensures that this thread configuration is based on best practices found over decades of testing. This means that every user of the managed version of RonDB will get access to a thread configuration that is optimised for their particular VM size. In addition RonDB makes use of adaptive CPU spinning in a way that limits the power usage, but still provides very low latency in all database operations. Adaptive CPU spinning improves latency by up to 50% and in most cases more than 10% improvement. RonDB 21.04 uses automatic thread configuration by default. This means that as a user you don’t have to care about the configuration of threads. What RonDB does is that it retrieves the number of CPUs available to the RonDB data node process. In the managed version of RonDB, this is the full VM or bare metal server available to the data node. In the open source version of RonDB, one can also limit the amount of CPUs available to RonDB data nodes process by using taskset or numactl when starting the data node. RonDB retrieves information about CPU cores, CPU sockets, and connections to the L3 caches of the CPUs. All of this information is used to set up the optimal thread configuration. LDM threads house the data, query threads handle read committed queries, tc threads handle transaction coordination, receive threads handle incoming network messages, send thread handle the sending of network messages, and main threads handle metadata operations, asynchronous replication and a number of other things. LDM thread is a key thread type. The LDM thread is responsible for reading and writing data. It manages the hash indexes, the ordered indexes, the actual data, and a set of triggers performing actions for indexes, foreign keys, full replication, asynchronous replication. This thread type is where most of the CPU processing is done. RonDB has an extremely high number of instructions per cycle compared to any other DBMS engine. The LDM thread often executes 1.25 instructions per cycle where many other DBMS engines have reported numbers around 0.25 instructions per cycle. This is a key reason why RonDB has such a great performance both in terms of throughput and latency. This is the result of the design of data structures in RonDB that are CPU cache aware and also due to the functional separation of thread types. Query thread is a new addition that was introduced in NDB Cluster 8.0.23. In NDB this is not used by default, RonDB enables the use of query threads by default in the automatic thread configuration. The query threads run the same code as the LDM threads and handles a subset of the operations that the LDM can handle. A normal SELECT query will use read committed queries that can be executed by the query threads. A table partition (sometimes referred to as a table fragment or shard) belongs to a certain LDM thread, thus only this LDM thread can be used for writes and locked reads on rows in this table partition. However for read committed queries, the query threads can be used. To achieve the best performance RonDB uses CPU locking. In Linux, it is quite common that a thread migrates from one CPU to another CPU. If the thread migrates to a CPU belonging to a different CPU core, the thread will suffer a lot of CPU cache misses immediately after being migrated. To avoid this, RonDB locks threads to specific CPU cores. Thus, it is possible to migrate the thread, but only to another CPU in a CPU core that shares the same CPU caches. Query threads and LDM threads are organised into Round Robin groups. Each Round Robin group consists of between 4 and 8 LDM threads and the same amount of query threads. All threads within one Round Robin group share the same CPU L3 cache. This ensures that we retain the CPU efficiency even with the introduction of these new query threads. This is important since query threads introduce new mutexes and the performance of these are greatly improved when threads sharing mutexes also share CPU caches. The query thread chosen to execute a query must be in the same Round Robin group as the data owning LDM thread is. Query threads make it possible to decrease the amount of partitions in a table. As an example, we are able to process more than 3 times as many transactions per second using a single partition in Sysbench OLTP RW compared to when we only use LDM threads. Most key-value stores have data divided into table partitions for the primary key of the table. Many key-value stores also contain additional indexes on columns that are not used for partitioning. Since the table is partitioned, this means that each table partition will contain each of those additional indexes. When performing a range scan on such an index, each table partition must be scanned. Thus the cost of performing range scans increases as the number of table partitions increases. RonDB can scale the reads in a single partition to many query threads, this makes it possible to decrease the number of table partitions in RonDB. In Sysbench OLTP RW this improves performance by around 20% even in a fairly small 2-node setup of RonDB. In addition query threads ensure that hotspots in the tables can be handled by many threads, thus avoiding the need to partition even more to handle hotspots. At the same time a modest amount of table partitions increases the amount of writes that we can perform on a table and it makes it possible to parallelise range scans which will speed up complex query execution significantly. Thus in RonDB we have attempted to find a balance between overhead and improved parallelism and improved write scalability. The cost of key lookups is not greatly affected by the number of partitions since those use a hash lookup and thus always go directly to the thread that can execute the key lookup. RonDB locks LDM threads and query threads in pairs. There is one LDM thread and one query thread in each such LDM group, we attempt to lock this LDM Group to one CPU core. LDM Groups are organised into Round Robin Groups. A common choice for a scheduling algorithm in an architecture like this would be to use a simple round robin scheduler. However such an algorithm is too simple for this model. We have two problems to overcome. The first is that the load on LDM threads is not balanced since we have decreased the number of table partitions in a table. Second writes and locked reads can only be scheduled in an LDM thread. Thus it is important to use the Read Committed queries to achieve a balanced load. Since LDM threads and query threads are locked onto the same CPU core it is ok for an LDM thread to be almost idle and we will still be efficient since the query thread on this CPU core will be very efficient. When a query can be scheduled to both an LDM thread and the query threads in the same Round Robin group the following two-level scheduling algorithm is used. We gather statistics about CPU usage of threads and we also gather queue lengths in the scheduling queues. Based on this information we prioritise selecting the LDM thread and the query thread in the same LDM group. However, if required to achieve a balanced use of the CPU resources in the Round Robin group we will also schedule read committed queries to any query thread in the Round Robin group of the LDM thread. The gathered CPU usage information affects the load balancer with a delay of around 100 milliseconds. The queue length information makes it possible to adapt to changing load in less than a millisecond. Given that we use less table partitions in RonDB compared to other solutions, there is a risk of imbalanced load on the CPUs. This problem is solved by two things. First, we use a two-level load balancer on LDM and Query threads. This ensures that we will move away work from overloaded LDM threads towards unused query threads. Second, since the LDM and Query threads share the same CPU core we will have access to an unused CPU core in query threads that execute on the same CPU core as an LDM thread that is currently underutilized. Thus, we expect that this architecture will achieve a balanced load on the CPU cores in the data node architecture. LDM and query threads use around 50-60% of the available CPU resources in a data node. The tc threads receive all database operations sent from the NDB API. They take care of coordinating transactions and decide which node should take care of the queries. They use around 20-25% of the CPU resources. The NDB API selects tc threads in a node using a simple round robin scheme. The receive threads take care of a subset of the communication links. Thus, the receive thread load is usually fairly balanced but can be a bit more unbalanced if certain API nodes are more used in querying RonDB. The communication links between data nodes in the same node group are heavily used when performing updates. To ensure that RonDB can scale in this situation these node links use multiple communication links. Receive threads use around 10-15% of the CPU resources. The send threads assist in sending networking messages to other nodes. The sending of messages can be done by any thread and there is an adaptive algorithm that assigns more load for sending to threads that are not so busy. The send threads assists in sending to ensure that we have enough capacity to handle all the load. It is not necessary to have send threads, the threads can handle sending even without a send thread. Send threads use around 0-10% of the CPUs available. The total cost of sending can be quite substantial in a distributed database engine, thus the adaptive algorithm is important to balance out this load on the various threads in the data node. The number of main threads supported can be 0, 1 or 2. These threads handle a lot of the interactions around creating tables, indexes and any other metadata operation. They also handle a lot of the code around recovery and heartbeats. They are handling any subscriptions to asynchronous replication events used by replication channels to other RonDB clusters. RonDB is based on NDB Cluster. NDB was focused on being a high-availability key-value store from its origin in database research in the 1990s. The thread model in NDB is inherited from a telecom system developed in Ericsson called AXE. Interestingly in one of my first jobs at Philips I worked on a banking system developed in the 1970s, this system had a very similar model compared to the original thread model in NDB and in AXE. In the operating system development time-sharing has been the dominant model since a long time back. However the model used in NDB where the execution thread is programmed as an asynchronous engine where the application handles a state machine has a huge performance advantage when handling many very small tasks. A normal task in RonDB is a key lookup, or a small range scan. Each of those small tasks is actually divided even further when performing updates and parallel range scans. This means that the length of a task in RonDB is on the order of 500 ns up to around 10 microseconds. Time-sharing operating systems are not designed to handle context switches of this magnitude. NDB was designed with this understanding from the very beginning. Early competitors of NDB used normal operating system threads for each transaction and even in a real-time operating system this had no chance to compete with the effectiveness of NDB. None of these competitors are still around competing in the key-value store market. The first thread model in NDB used a single thread to handle everything, send, receive, database handling and transaction handling. This is version 1 of the thread architecture, that is also implemented in the open source version of Redis. With the development of multi-core CPUs it became obvious that more threads were needed. What NDB did here was introduce both a functional separation of threads and partitioning the data to achieve a more multi-threaded execution environment. This is version 2 of the thread architecture. Modern competitors of RonDB have now understood the need to use asynchronous programming to achieve the required performance in a key-value store. We see this in AeroSpike, Redis, ScyllaDB and many other key-value stores. Thus the industry has followed the RonDB road to achieving an efficient key-value store implementation. Most competitors have opted for only partitioning the data and thus each thread still has to execute all the code for meta data handling, replication handling, send, receive and database operations. Thus RonDB has actually advanced version 2 of the thread architecture further than its competitors. All modern CPUs use both a data cache and an instruction cache. By combining all functions inside one thread, the instruction cache will have to execute more code. In RonDB the LDM thread only executes the operation to change the data structures, the tc thread only executes code to handle transactions and the receive thread can focus on the code to execute network receive operations. This makes each thread more efficient. The same is true for the CPU data cache, the LDM thread need not bother with the data structures used for transaction handling and network receive. It can focus the CPU caches on the requirements for database operations which is challenging enough in a database engine. A simple splitting of data into different table partitions makes sense if all operations towards the key-value store are primary key lookups or unique key lookups. However most key-value stores also require performing general search operations as part of the application. These search operations are implemented as range scans with search conditions, these scale not so well with a simple splitting of data. To handle this, RonDB introduces version 3 of the thread architecture that uses a compromise where we still split the data, but we introduce query threads to assist the LDM threads in reading the data. Thus RonDB can handle hotspots of data and require fewer number of table partitions to achieve the required scalability of the key-value store. Thoughts on a v4 of the thread architecture have already emerged, so expect this development to continue for a while more. This includes even better handling of the higher latency to persistent memory data structures. Finally, even if a competitor managed to replicate all of those features of RonDB, RonDB has another ace in the 3-level distributed hashing algorithm that makes use of a CPU cache aware data structure. All of those things combined makes us comfortable that RonDB will continue to lead the key-value store market in terms of LATS: lowest Latency, highest Availability, the highest Throughput and the most Scalable data storage. Thus, being the best LATS database in the industry. Online feature stores are the data layer for operational machine learning models - the models that make online shopping recommendations for you and help identify financial fraud. When you train a machine learning model, you feed it with high signal-to-noise data called features. When the model is used in operation, it needs the same types of features that it was trained on (e.g., how many times you used your credit card during the previous week), but the online feature store should have low latency to keep the end-to-end latency of using a model low. Using a model requires both retrieving the features from the online feature store and then sending them to the model for prediction. Hopsworks has been using NDB Cluster as our online feature store from its first release. It has the unique combination of low latency, high availability, high throughput, and scalable storage that we call LATS. However, we knew we could make it even better as an online feature store in the cloud, so we asked one of the world’s leading database developers to do it - the person who invented NDB, Mikael Ronström. Together we have made RonDB, a key-value store with SQL capabilities, that is the world’s most advanced and performant online feature store. Although NDB Cluster is open-source, its adoption has been hampered by an undeserved reputation of being challenging to configure and operate. With RonDB, we overcome this limitation by providing it as a managed service in the cloud on AWS and Azure. The main requirements from a database used as an online feature store are: low latency, high throughput for mixed read/write operations, high availability and the ability to store large data sets (larger than fit on a single host). We unified these properties in a single muscular term LATS: LATS: low Latency, high Availability, high Throughput, scalable Storage. RonDB is not without competition as the premier choice as an online feature store. To quote Khan and Hassan from DoorDash, it should be a low latency database: “latency on feature stores is a part of model serving, and model serving latencies tend to be in the low milliseconds range. Thus, read latency has to be proportionately lower.” To that end, Redis fits this requirement as it is an in-memory key-value store (without SQL capabilities). Redis is open source (BSD Licence), and it enjoys popularity as an online feature store. Doordash even invested significant resources in increasing Redis’ storage capacity as an online feature store, by adding custom serialization and compression schemes. Significantly, similar to RonDB, it provides sub-millisecond latency for single key-value store operations. There are other databases that have been proposed as online feature stores, but they were not considered in this post as they have significantly higher latency (over one order-of-magnitude!), such as DynamoDB, BigTable, and SpliceMachine. As such, we thought it would be informative to compare the performance of RonDB and Redis as an online feature store. The comparison was between Redis open-source and RonDB open-source (the commercial version of Redis does not allow any benchmarks). In addition to our benchmark, we compare the innards of RonDB’s multithreading architecture to the commercial Redis products (since our benchmark identifies CPU scalability bottlenecks in Redis that commercial products claim to overcome). In this simple benchmark, I wanted to compare apples with apples, so I compared open-source RonDB to the open-source version of Redis, since the commercial versions disallow reporting any benchmarks. In the benchmark, I deliberately hobble the performance of RonDB by configuring it with only a single database thread, as Redis is “a single-threaded server from the POV of command execution”. I then proceed to describe the historical evolution of RonDB’s multithreaded architecture, consisting of three different generations, and how open-source Redis is still at the first generation, while commercial Redis products are now at generation two. Firstly, for our single-threaded database benchmark, we performed our experiments on a 32-core Lenovo P620 workstation with 64 GB of RAM. We performed key-value lookups. Our experiments show that a single-threaded RonDB instance reached around 1.1M reads per second, while Redis reached more than 800k reads per second - both with a mean latency of around 25 microseconds. The throughput benchmark performed batch reads with 50 reads per batch and had 16 threads issuing batch requests in parallel. Batching reads/writes improves throughput at the cost of increased latency. On the same 32-core server, both RonDB and Redis reached around 600k writes per second when performing SET for Redis and INSERT, UPDATE or DELETE operations for RonDB. For high availability, both of those tests were done with a setup using two replicas in both RonDB and in Redis. We expected that the read latency and throughput of RonDB and Redis would be similar since both require two network jumps to read data. In case of updates (and writes/deletes), Redis should have lower latency since an update is only performed on the main replica before returning. That is, Redis only supports asynchronous replication from the main replica to a backup replica, which can result in data loss on failure of the main node. In contrast, RonDB performs an update using a synchronous replication protocol that requires 6 messages (a non-blocking version of two-phase commit). Thus, the expected latency is 3 times higher for RonDB for writes. A comparison of latency and throughput shows that RonDB already has a slight advantage in a single-threaded architecture, but with its third-generation multithreaded architecture, described below, RonDB has an even bigger performance advantage compared to Redis commercial or open-source. RonDB can be scaled up by adding more CPUs and memory or scaled out, by automatically sharding the database. As early as 2013, we developed a benchmark with NDB Cluster (RonDB’s predecessor) that showed how NDB could handle 200M Reads per second in a large cluster of 30 data nodes with 28 cores each. The story on high availability is different. A write in Redis is only written to one replica. The replication to other replicas is then done asynchronously, thus consistency can be seriously affected by failures and data can be lost. An online feature store must accept writes that change the online features constantly in parallel with all the batched key reads. Thus handling node failures in an online feature store must be very smooth. Given that an online feature store may need to scale to millions of writes per second as part of a normal operation, this means that a failed node can cause millions of writes to be lost, affecting the correctness and quality of any models that it is feeding with data. RonDB has transactional capabilities that ensure that transactions can be retried in the event of such partial failures. Thus, as long as the database cluster is not fully down, no transactions will be lost. In many cases the data comes from external data sources into the online Feature Store, so a replay of the data is possible, but an inconsistent state of the database can easily lead to extra unavailability in failure situations. Since an online feature store is often used in mission-critical services, this is clearly not desirable. RonDB updates all replicas synchronously as part of writes. Thus, if a node running the transaction coordinator or a participant fails, the cluster will automatically fail over to the surviving nodes, a new transaction coordinator will be elected (non-blocking), and no committed transactions will be lost. This is a key feature of RonDB and has been tested in the most demanding applications for more than 15 years and tested thousands of times on a daily basis. Additionally it can be mentioned that in a highly available setup, in a cloud environment RonDB can read any replica and still see the latest changes whereas Redis will have to read the main replica to get a consistent view of the data and this will, in this case, require communicating across availability zones which can easily add milliseconds to latency for reads. RonDB will automatically setup the cluster such that applications using the APIs will read replicas that are located in the same availability zone. Thus in those setups RonDB will always be able to read the latest version of the data and still deliver data at the lowest possible latency. Redis setups will have to choose between delivering consistent data with higher latency or inconsistent data with low latency in this setup. Redis only supports in-memory data - this means that Redis will not be able to support online Feature Stores that store lots of data. In contrast, RonDB can store data both in-memory and on-disk, and with support for up to 144 database nodes in a cluster, it can scale to clusters of up to 1PB in size. For our single-threaded benchmark, we did not expect there to be, nor were there, any major differences in throughput or latency for either read or write operations. The purpose of the benchmark was to show that both databases are similar in how efficiently they use a single CPU. RonDB and Redis are both in-memory databases, but the implementation details of their multithreaded architectures matters for scalability (how efficiently they handle increased resources), as we will see. Firstly, “Redis is not designed to benefit from multiple CPU cores. People are supposed to launch several Redis instances to scale out on several cores if needed.” For our use-case of online feature stores, it is decidedly non-trivial to partition a feature store across multiple redis instances. Therefore, commercial vendors encourage users to pay for their distributions that introduce a new multithreaded architecture to Redis. We now chronicle the three different generations of threading architectures underlying RonDB, from its NDB roots, and how they compare to Redis’ journey to date. Open-source Redis has practically the same thread architecture implemented as in the first version of NDB Cluster from the 1990s, when NDB was purely an in-memory database. The commercial Redis distributions have sinced developed multithreaded architectures very similar to the second thread architecture used by later versions of NDB cluster. However, RonDB has evolved into a third version of the NDB cluster thread architecture. Let’s dive into the details of these 3 generations of threading architectures. The first generation thread architecture is extremely simple - everything is implemented in one thread. This means that this thread handles receive on the socket, handling transactions, handling the database operation, and finally sending the response. This works better with fast CPUs with large caches - more Intel, less AMD. Still, handling all of this in one thread is efficient and my experiments on a 32-core Lenovo P620 workstation, we can see that RonDB reached 1.1M reads per second and Redis a bit more than 800k reads per second in a single thread with a latency of around 25 microseconds. This test used batch reads with 50 reads per batch and had 16 threads issuing batches in parallel. For INSERT, UPDATE or DELETE operations, both RonDB and Redis reached around 600k writes per second, with Redis performing SET operations (it does not have a SQL API). This benchmark was done with a setup using two replicas in both RonDB and in Redis. Around 2008, the trend towards an increased number of CPUs per socket accelerated and today a modern cloud server can have more than 100 CPUs. With NDB, we decided it was necessary to redesign its multithreading architecture to scale with the available number of CPUs. This resulted in our second generation thread architecture, where we partitioned database tables and let each partition (shard) be managed by a separate thread. This takes care of the database operation. However, as NDB is a transactional distributed database, it was also necessary to design a multithreaded architecture for socket receive, socket send, and transaction handling. These are harder to scale-out on multi-core hardware. Both NDB and the commercial solutions of Redis implement their multithreaded architecture using message passing. However, the commercial Redis solution uses Unix sockets for communication, whereas NDB (and RonDB) send messages through memory in a lock-free communication scheme. RonDB has dedicated threads to handle socket receive, socket send, and transaction handling. To further reduce latency, network send is adaptively integrated into transaction handling threads. RonDB has, however, taken even more steps in its multithreaded architecture compared to commercial Redis solutions. RonDB now has a new third generation thread architecture that improves the scalability of the database - it is now possible for more than one thread to perform concurrent reads on a partition. RonDB will ensure that all these read-only threads that can read a partition all share the same L3 cache to avoid any scalability costs. This ensures that hotspots can be handled by multiple threads in RonDB. This architectural advancement decreases the need to partition the database into an increasingly larger number of partitions, and prevents problems such as over-provisioning due to hotspots in DynamoDB. The introduction of read-only threads is important for applications that also need to perform range scans on indexes that are not part of the partition key. It also enables the key-value store to handle secondary index scans and complex SQL queries in the background, while concurrently handling primary key operations (reads/writes for the online Feature Store). Finally, the third generation thread architecture enables fine-grained elasticity; it is easy to stop a database node and bring it up again with a higher or lower number of CPUs, thus making it easy to increase database throughput or decrease its cost of operation. Hotspots can be mitigated with read-only threads without resorting to over-provisioning. RonDB also has coarse-grained elasticity, with the ability to add new database nodes without affecting its operation, known as online add node. There are numerous advantages of RonDB compared to Redis; the higher availability and the ability to handle much larger data sets are the most obvious ones. Both are open-source, but Redis has the advantage of a reduced operational burden as it is not at its core a distributed system. However, in the recent era of managed databases in the cloud, operational overhead is no longer a deciding factor when choosing your database. With RonDB’s appearance as a managed database in the cloud, the operational advantages of Redis diminish, paving the way for RonDB to gain adoption as the highest performance online feature store in the cloud.
https://www.logicalclocks.com/content/blog?b28a2c57_page=2
CC-MAIN-2022-40
refinedweb
9,559
50.26
DBIx::Class::Migration::Tutorial::Setup - Bootstrap your project files By the end of this section, you should be able to bootstrap a standard Perl project and have a simple DBIx::Class project installed. You should then be able to proceed to creating some migrations. We make minimal use of Dist::Zilla to create a basic project skeleton. I realize Dist::Zilla is not everyone's favorite tool for this purpose, but I am choosing it for the purpose of the tutorial because its simple enough for my basic need. If you know enough to not like Dist::Zilla then I will assume you know enough to perform this next task using some other tool. Most Perl projects use commandline tools, rather than a GUI IDE. You should open a terminal with a commandline interface of choice, change to a directory where you like to store project files (such as $HOME or $HOME/Desktop) and proceed. cpanm Dist::Zilla dzil new MusicBase cd MusicBase You should now have a directory structure like this: /MusicBase dist.ini /lib MusicBase.pm We will use the dist.ini file to manage our project dependencies. Open dist.ini in your editor of choice and alter it to look like this: name = MusicBase Set author and Next open the file lib/MusicBase.pm in your text editor and change it as follows: package MusicBase; our $VERSION = '0.001'; 1; =head1 NAME MusicBase - The DBIx::Class::Migration tutorial application =head1 AUTHOR John Napiorkowski L<email:jjnapiork@cpan.org> =head1 SEE ALSO L<DBIx::Class::Migration> =head1 COPYRIGHT & LICENSE Copyright 2012, John Napiorkowski L<email:jjnapiork@cpan.org> This library is free software; you can redistribute it and/or modify it under the same terms as Perl itself. =cut As you can see, this is just a stub of POD to help give someone general context. Going forward, when I say 'do something in your project home directory', I mean inside this new MusicBase directory which you just made. Congrats, you have a basic Perl project setup! If you are using a source control tool, you should probably commit. You listed a few dependencies in the dist.ini file you modified above. Let's get those installed now: dzil listdeps | cpanm Managing your project dependencies via your dist.ini file (or Makefile.PL if you prefer) is considered a community approved best practice. In this section you'll setup a first version of the MusicBase DBIC files. The working application we are going to design is called MusicBase which is an application that tracks what Artists have made what CDs and which Tracks are part of which CD. Here's the general model: -- An Artist has zero or more Cds. -- Each Cd belongs to a single Artist -- Each Cd has zero or more Tracks (or songs you can listen to) -- A Track can belong to only one Cd. Additionally you need to store the Artists name, and the titles for both the Cds and the Tracks. You also have some business logic that requires you to return the set of Artists that have more than one Cd published. Let's model that! From your application home directory (the directory that contains your dist.ini file) perform the following commands: mkdir lib/MusicBase mkdir lib/MusicBase/Schema mkdir lib/MusicBase/Schema/Result mkdir lib/MusicBase/Schema/ResultSet touch lib/MusicBase/Schema.pm touch lib/MusicBase/Schema/Result/Artist.pm touch lib/MusicBase/Schema/Result/Cd.pm touch lib/MusicBase/Schema/Result/Track.pm touch lib/MusicBase/Schema/ResultSet/Artist.pm You'll now have a standard DBIx::Class directory structure that follows current good practices. Lets add in some code to our file stubs. Change lib/MusicBase/Schema.pm to match the following: package MusicBase::Schema; use strict; use warnings; use base 'DBIx::Class::Schema'; our $VERSION = 1; __PACKAGE__->load_namespaces; 1; Change lib/MusicBase/Schema/Result/Artist.pm package MusicBase::Schema::Result::Artist; use strict; use warnings; use base 'DBIx::Class::Core'; __PACKAGE__->table('artist'); __PACKAGE__->add_columns( 'artist_id' => { data_type => 'integer', }, 'name' => { data_type => 'varchar', size => '96', }); __PACKAGE__->set_primary_key('artist_id'); __PACKAGE__->has_many( 'cd_rs' => 'MusicBase::Schema::Result::Cd', {'foreign.artist_fk'=>'self.artist_id'}); 1; Change lib/MusicBase/Schema/Result/Cd.pm package MusicBase::Schema::Result::Cd; use strict; use warnings; use base qw/DBIx::Class::Core/; __PACKAGE__->table('cd'); __PACKAGE__->add_columns( 'cd_id' => { data_type => 'integer', }, 'artist_fk' => { data_type => 'integer', }, 'title' => { data_type => 'varchar', size => '96', }); __PACKAGE__->set_primary_key('cd_id'); __PACKAGE__->belongs_to( 'artist' => 'MusicBase::Schema::Result::Artist', {'foreign.artist_id'=>'self.artist_fk'}); __PACKAGE__->has_many( 'track_rs' => 'MusicBase::Schema::Result::Track', {'foreign.cd_fk'=>'self.cd_id'}); 1; Change lib/MusicBase/Schema/Result/Track.pm package MusicBase::Schema::Result::Track; use strict; use warnings; use base qw/DBIx::Class::Core/; __PACKAGE__->table('track'); __PACKAGE__->add_columns( 'track_id' => { data_type => 'integer', }, 'cd_fk' => { data_type => 'integer', }, 'title' => { data_type => 'varchar', size => '96', }); __PACKAGE__->set_primary_key('track_id'); __PACKAGE__->belongs_to( 'cd' => "MusicBase::Schema::Result::Cd", {'foreign.cd_id'=>'self.cd_fk'}); 1; Change; That completes creating your basic DBIx::Class structure. You should create a basic test case just to make sure you didn't make any serious errors or forgot something while creating the files. mkdir t touch t/use.t Change t/use.t as follows #!/usr/bin/env perl use Test::Most tests=>1; BEGIN { use_ok 'MusicBase::Schema'; } Then run your test case. prove -lv t/use.t You should expect the one test to pass. If it fails, please review your classes since you probably introduced a typo or syntax error. If your tests pass, that's great you've completed the first part of the tutorial! You did a lot of cut and paste this step, I promise things will be more interesting later on. However, you did all the main grunt work that it takes to get going on a well formed Perl project. At this point you have a DBIC application that you'd actually be able to use. Proceed to DBIx::Class::Migration::Tutorial::FirstMigration. See DBIx::Class::Migration for author information See DBIx::Class::Migration for copyright and license information
http://search.cpan.org/~jjnapiork/DBIx-Class-Migration-0.033/lib/DBIx/Class/Migration/Tutorial/Setup.pod
CC-MAIN-2014-15
refinedweb
995
58.18
- Sort Posts - 10 replies - Last post Hi ksb24, the code you are referring below was working in 1.5, but is now obsolete in 2.0, that's why you get the errors. You can either try to upgrade it (see migratio guide: ) or you can more simply try to retrieve a Trackable just by using the Unity function GameObject.Find( "MyTrackableName" ); The specific approach is also depending on your application logic. I hope this helps. Hello David, Thank you for sharing this code. I want to use this to load multiple models with a gui texture button. I am new to using C# and unable to resolve the errors. I get the following errors when I save your code in a script file. Type `ImageTracker' does not contain a definition for `GetActiveDataSet' and no extension method `GetActiveDataSet' of type `ImageTracker' could be found (are you missing a using directive or an assembly reference?) Type `DataSet' does not contain a definition for `GetNumTrackables' and no extension method `GetNumTrackables' of type `DataSet' could be found (are you missing a using directive or an assembly reference?) How do I resolve these errors by specifying the right directive? I thought of using DataSetTrackableBehaviour class but I do not know how to do it in C#. I created the entire app with separate levels but then it loads very slow each time when changing levels. Hence I am trying to load and unload each model with this method. Other option of preloading all 20+ models at scene startup may slow down scene loading so I can not use it. Please guide me. Thank you. Note that the significant step is the assignment of the obj.transform.parent. You don't need to reference the trackable using the dataset, you can simply obtain it as a gameobject. Here's how to instantiate custom models in Unity - Another approach is to pre-load the models and enable / disable their rendering at runtime. The DefaultTrackableEventHandler demonstrates this approach and shows you how to control their colliders as well. January 30, 2012 Hi, there are many ways of doing that; one possible way is to attach a script to the AR camera, which do a model swaps based on some input events; in general, the previous models attached as children of a trackable can be disabled, while the new model can be parented under the same trackable; This is an example of a script that replaces any pre-existing model with a simple cube (the swap here is activated by a simple touch on the screen, but this is just for simplicity of the example); you can use this sample as a starting point and elaborate more sophisticated solutions on your own: public class ModelSwapper : MonoBehaviour { void Start () { } // Update is called once per frame void Update () { if (Input.GetMouseButtonUp(0)) { SwapModel(); } } private void SwapModel() { ImageTracker tracker = (ImageTracker)TrackerManager.Instance.GetTracker(Tracker.Type.IMAGE_TRACKER); if (tracker == null) return; DataSet activeDataSet = tracker.GetActiveDataSet(); if (activeDataSet != null && activeDataSet.GetNumTrackables() > 0) { DataSetTrackableBehaviour trackable = activeDataSet.GetTrackable(0); GameObject trackableGameObject = trackable.gameObject; //disable previous children (old 3d models) for (int i = 0; i < trackableGameObject.transform.GetChildCount(); i++) { Transform child = trackableGameObject.transform.GetChild(i); child.gameObject.active = false; } //Create cube object and parent it to trackable gameObject GameObject cube = GameObject.CreatePrimitive(PrimitiveType.Cube); cube.transform.parent = trackable.transform; cube.transform.localPosition = new Vector3(0,0.2f,0); cube.transform.localRotation = Quaternion.identity; cube.transform.localScale = new Vector3(0.1f, 0.1f, 0.1f); cube.active = true; } } } Delete Message Are you sure you want to delete this message? Delete Conversation Are you sure you want to delete this conversation? Hi, How can I do this without using Unity? I mean is it possible to achieve the same from android. I am developing using Eclipse.. Thank You, Sushil
https://developer.vuforia.com/forum/unity-extension-technical-discussion/replace-3d-model-runtime
CC-MAIN-2020-29
refinedweb
631
56.45
Sunday 3 October 2004 In C++, if you want your code to talk about itself, you often use the predefined magic macros __FILE__ and __LINE__ to get the filename and line number of the current line: // Use this macro if you can't write the code yet.#define NOTYET() NoCodeYet(__FILE__, __LINE__);void NoCodeYet(const char * pszFile, int nLine){ fprintf(stderr, "No code yet at %s(%d)\n", pszFile, nLine);}//...void ComplicatedFunctionFromTheFuture(){ NOTYET() // I'll get to this later.} This provides a convenient way to leave breadcrumbs that will direct you to the spot in the code later. How to do it in Python? With help from the Python Cookbook, I created this. It uses scary functions from sys (_getframe has a leading underscore and is described as "for internal and specialized uses only"): def _functionId(nFramesUp): """ Create a string naming the function n frames up on the stack. """ co = sys._getframe(nFramesUp+1).f_code return "%s (%s @ %d)" % (co.co_name, co.co_filename, co.co_firstlineno)def notYetImplemented(): """ Call this function to indicate that a method isn't implemented yet. """ raise Exception("Not yet implemented: %s" % _functionId(1))#...def complicatedFunctionFromTheFuture(): notYetImplemented() This goes one further than the C++ technique, by providing the function name as well as the file and line. I've used the following code. . Using one less f_back, you see where the routines was called from. . . Using 2 f_back's, and then using 1: >>> lineheretest.linehere() "@[file: ...ework\\interact.py, line 257 in 'runcode']" >>> lineheretest.linehere() "@[file: , line 1 in '?']" note: it might also be usefull to some people to use a similar trick to get the callers caller's variables: def parentVars(): frame = inspect.currentframe().f_back.f_back return frame.f_globals, frame.f_locals import inspect def linehere(): """Give linenumer, file, and functionname of the callers, caller.. Uses the standard module inspect """ info = inspect.getframeinfo(inspect.currentframe().f_back.f_back)[0:3] printInfo=[] # Break long filenames if len(info[0])>20: printInfo.append('...'+info[0][-17:]) else: printInfo.append(info[0]) printInfo.extend(info[1:3]) return '@[file: %s, line %s in %r]'% tuple(printInfo) Get the source with correct indention by running: import base64,zlib;f=file("lineheretest.py",'wb');f.write(zlib.decompress(base64.decodestring(""" eNp9UcFKxDAQvS/sPwyV0hRr6K4HobAiXsQP8FSKxHayGzadlkkqfr5pm13RgzkN7715b2Zi+nFg D4bciK3fbjrUYA3hCRlFXm03EF6SJC/mExeCph65AG0sFqCoAz1R681ApHqEQYM/IbTKWmRXQHyr zYpKCW8O3aJzPjgo7qAfusnizxgxdS0M6QEOF1Ie0WsOYTMsLmA7MSOthMilfv9Q7Tmvy+q+WU1G NuRfQ8uhjsgNPDOqM9iBjss+8wYuRoYrIIk5oi6b/HFfxlP8spJqHJE6kUkps9soru92D1WTr3K0 Dv/rvAT8mVHil7/Su7BDFDD6iQmyp3qet4I0nHj+lFCE80DKTZaCn0aL4uoVWr8BAcCTEQ== """)));f.close() (source to create this data can be found on: gwork.blogspot.com Why not just raise NotImplementedError? The stack trace will show the file, line and function with the error. In this case, I could have simply raised an exception. I used notYetImplemented() as an illustration of getting the file and line of the current line. It can be used in other ways that don't lend themselves to raising exceptions. This may or may not be of any use to you, but many C++ compilers do make the current function name available via __FILE__ and __LINE__-like macros. For example, MSVC defines __FUNCSIG__ that will print out the current function. Boost provides a handy macro called BOOST_CURRENT_FUNCTION that is mapped to the appropriate macro based on your current compiler. The macro is #defined in boost\current_function.hpp. I hope this helps someone out there! In addition to inspect -- which is very complete -- you can also use the warnings module. There's a "stack" argument, which says how far back in the stack you should go to find the bad code. So in your case, you might go back two steps -- one to get out of your generic warning wrapper, and another to get out of the not-yet-implemented function to the function's caller; this can be even better, because you probably know that the function isn't implemented, but you may be more concerned about the function that called the not-yet-implemented function (or maybe it's a poorly-implemented function, or a deprecated function). Also, you can use the traceback module to print a traceback, without actually raising an exception. Hello: I saw the code and i thought that it may be very useful to me for finishing a program that i'm making. But i have a question with the _getframe() function. If i want to get the filename of the current program that it's running, i.e. the path of the self program; which is the argument for _getframe() (in this case nFramesUp+1) that i must input for this purpose? Beforehand, thank you very much. 2004, Ned Batchelder
http://nedbatchelder.com/blog/200410/file_and_line_in_python.html
crawl-002
refinedweb
742
56.96
LES juxtaposition idea Out of date: some syntactic elements have changed since this was written; see the new plan. Last update: June 14, 2016 This document explores a potential new version LES with a different set of syntactic sugars than LESv2 has. It is more complex than LESv2, but it satisfies two desires that LESv2 does not: “I want no separate ‘ (’ and ‘ (’ tokens.”, and “I want to write x = new Foo() or i32.reinterpret_f32 $N without parentheses To satisfy these desires, we could throw out the concept of superexpressions and start fresh. From now on, assume ‘ (’ and ‘ (’ are now always equivalent. The juxtaposition operator To satisfy the second desire, we could define a Haskell-inspired “juxtaposition” operator. This operator would have a very high precedence, but not as high as .. In Haskell, foo x y is the idiomatic equivalent of foo(x, y), although it literally means (foo(x))(y). I suppose foo x y could mean (foo(x))(y) or foo(x(y)) or even foo(x, y), depending on how the parser is designed. So let’s consider the scenarios: i32.reinterpret_f32 $Nshould mean (i32.reinterpret_f32)($N). It does. - Ideally, x = new Foo()would mean x = (new(Foo()))- but since we’re eliminating whitespace sensitivity, new Foo ()would mean the same thing, which implies it wouldn’t work quite the same as Haskell’s juxtaposition operator - not necessarily because the operator itself is any different, but because Haskell doesn’t have a separate “call-with-parens” operator. By the way, note that x(y)(z)must continue to mean (x(y))(z), since that’s what it means in all mainstream languages. x y zcould mean (x(y))(z)(left associative) or x(y(z))(right associative). The first interpretation follows what Haskell does, while the second is the logical interpretation if we want to treat xand yas unary operators (as in log sqrt $Nor not isHappy camper)… and that seems to be what we want here. To make this work, we could introduce the juxtaposition operator with a precedence somewhere between ** and ::. This operator would work as a backup to the normal function-call operator, and would only apply when the normal function operator doesn’t. That is, if you write f (a.b).c, it’s a normal function call parsed as (f(a.b)).c. However, if you write f a.b.c, it’s a juxtaposition parsed as f((a.b).c). A related case is a [i], which would be an indexing operation; you would have to write a([i]) instead in order to pass a list [i] to a. Examples: Juxtaposition would have the curious effect of making (int) x (the cast operator of C languages) equivalent to int(x), which is an ordinary call but also acts as a cast operator in other languages including C++. So that seems … okay. Block calls Already, juxtaposition allows expressions like if {...} else {...}; unfortunately, such an expression would have the strange tree structure if( ({...})(else({...})) ). That’s weird, so we should add a syntactic rule for braces {...} to get the kind of tree we want. So let’s add braces {...} as an optional add-on to the ordinary method call syntax, so that foo (x, y) {z} means foo(x, y, {z}). Also, we’ll need a “continuator” feature so that if (c) {a} else {b} gives us a reasonable tree, maybe if(c, {a}, else, {b}) (like LESv2) or if(c, {a}, else({b})). To be clear, this feature would be completely independent from the juxtaposition operator. A continuator could perhaps be defined as a word from a pre-set list, followed by something in parentheses and/or braces, to be added as an additional argument to the original call. So our expression grammar with juxtaposition plus “block calls” (braced blocks added to calls) might contain rules that look roughly like this: // Here "$" represents _all_ of the maximum-precedence unary operators Particle : Identifier | Literal | Parentheses | BracedBlock | "$" Particle; // Here "." represents _all_ binary operators at the primary level PrimaryExpr: Particle [ "." Particle | CallArgs ]*; CallArgs : ArgList [BracedBlock Continuator*]? ; BracedBlock: "{" StatementList "}"; Parentheses: "(" ExpressionList ")"; Continuator : ContinuatorKeyword (BracedBlock | ArgList BracedBlock?); ContinuatorKeyword : "else" | "catch" | "except" | "finally" | /* more continuators TBD */; Juxtaposition: PrimaryExpr Juxtaposition; To reduce potential confusion, we could restrict the left-hand side of the juxtaposition to be a simple identifier or particle: Juxtaposition: Identifier Juxtaposition; Juxtaposition: Particle Juxtaposition; Then confusing expressions like if (c) {a} else {b} foo would be illegal, and (Foo) x would be illegal if we picked the first version of the rule. However, this restriction would also prohibit things like f32.sqrt which, due to the . operator, are not particles. It would be possible to carve out an exception to allow that, though. The result is nice, because it allows C-like expressions such as switch (y) {...};, meaning switch(y, {...}); if (x) {...} else {...};, which could mean if(x, {...}, else, {...});or if(x, {...}, else({...}));. You could also write this: x = switch (y) { 0 => "zero"; 1 => "one"; }; which was illegal in LESv2. Unfortunately, the design as described produces this odd syntax tree for try: To solve this, we could treat {...} as an alternative to the ordinary call syntax, by modifying the CallArgs rule above: CallArgs : (ArgList [BracedBlock Continuator*]? | BracedBlock Continuator*) ; So now we have And if arbitrary continuators were allowed, you could also write do {...} while (condition). I’m not sure yet if that’s a good idea. Downsides The first major downside to this design is that statements like var x = z * z; return x + y; Change their meaning to (var(x)) = z * z; // in LESv2, it was `var(x = (z*z))` (return(x)) + y; // in LESv2, it was `return(x + y)` I think (var(x)) = z * z is actually a perfectly reasonable syntax tree for a variable declaration; who says initialization has to be part of variable creation? The return statement, however, has become nonsensical. It’s not a bad sacrifice, I think: writing return (z*z) isn’t that onerous, and for programmers that accidentally write return x + y it would be possible to give a helpful error message. Perhaps we could add a special parsing rule analogous to the Haskell binary $ operator (in Haskell, return $ x + y means return (x + y), although it’s a feature of the standard library, not the parser.) The worst problem IMO is what happens to function and type declarations: Unlike the current version of LES, which associates the braces with fn or class in these examples (as it should), this hypothetical LES associates braces with the return value, the base class, or whatever else happens to be located on the right side. Yuck. Keyword statements Luckily, there is one more thing we could add to make this work: we could introduce a “keyword” concept, in which keywords are explicitly marked by adding # on the front. These keywords would appear at the start of a new type of “superexpression” that works similarly to superexpressions in LESv2. This gives us a new way to support a return statement: Here, the @ simply indicates that identifiers like #fn are not to be treated as keywords. In LESv2, # is an ordinary identifier character, treated no differently than _ or a letter, but by convention it is used to represent keywords in Loyc trees. So it’s a very appropriate choice to use # here to denote a “keyword statement”. But # is a “heavy-looking” character - it draws attention to itself, and is also clumsy to write on a whiteboard. Perhaps a lighter alternative is better? We could use a single quote, because a single quote is normally used for character literals - but by definition a character is only one character, whereas every keyword I’ve ever seen is at least two characters, so there’s no ambiguity. Parsing this requires a little hack: the expression parser needs to be told it’s in “superexpression” mode so that it can stop at the braced block, which it would ordinarily consume. The grammar would be something like this: Superexpression : Keyword ExpressionWithoutBraces? BracesWithContinuators?; BracesWithContinuators : BracedBlock Continuator*; Continuator : ContinuatorKeyword (BracedBlock | Parentheses BracedBlock?); BracedBlock: "{" StatementList "}"; Parentheses: "(" ExpressionList ")"; Continuator : "else" | "catch" | "except" | "finally" | /* more continuators TBD */; Should there be a keyword list? Unfortunately, the # (or ') would make LES look different from most C-like languages. If it’s super important for the code to look “natural”, we could introduce a set of keywords to cover the most common cases where this syntax would be needed: fn function proc property struct class enum interface type data template trait alias namespace - with two function keywords to cover the Javascript and Rust camps, and perhaps proc for good measure. These keywords would be treated as if they started with #/ ', and ordinary identifiers with those names could be specified with @, i.e. class means #class or 'class, and @class means class. We could also add a few others like return import using case throw, but we have to draw the line somewhere since it’s impossible to specify a set with enough stuff to satisfy everyone. Operators with letters in them If we use ' to identify keywords, we could also use it to identify binary operators with letters in them. I would propose in fact that all binary operators include a single quote in the Loyc tree, as I discussed in March. Then WebAssembly could have its signed and unsigned operators at the cost of one extra character: $x = $y '>s $z // signed comparison $x = $y '>u $z // unsigned comparison Or with the letter first: $x = $y 's> $z // signed comparison $x = $y 'u> $z // unsigned comparison And it would be a reasonable way to write “word operators”: Dinner = pizza 'with anchovies 'and stuff; However, using ' this way has a small price: it requires a space before the ' as well as after, because ' is also a legal character in identifiers (an idea taken from Haskell). Therefore, the existing syntax with backticks can actually be more compact: Dinner = pizza`with`anchovies`and`stuff; On the whole I like this idea, because the ' could be defined as the way to identify operators and keywords in the syntax tree. If ' in the syntax corresponds to a ' in the AST, it makes LES easier to learn and understand. Minor points A problem that re-emerges in this proposal is that you need a semicolon at the end of your block-call expressions like switch (x) {...};. To fix this, we could specify that if the leftmost expression is a call expression that ends in a braced block, no semicolon is needed. I’m a bit concerned about the complexity of implementing such a rule, but we won’t need it if we use newlines as the main terminator instead of semicolons. If we’re going to define keywords, then certainly we should define null, false and true (rather than using @null, @false and @true as in LESv2). In LESv2 with superexpressions, there was an ambiguity in A - B between the normal infix interpretation and the superexpression interpretation A (-B). Perhaps this is why Haskell only has a single prefix operator in total - to limit the probability that someone would erroneously write things like f !x or f ~x, expecting their punctuation to be treated as a prefix operator. However, I don’t think our new juxtaposition operator has an ambiguity related to this, nor does Haskell (because the precedence of the RHS of the juxtaposition excludes prefix operators). Currently in LES, (a; b) is a tuple. Arguably, f (a.b; c) should be illegal because it’s not clear if this was meant to be a normal function call or a juxtaposition-call with one parameter that happens to be a tuple. Conclusion At first I didn’t like this plan, but now I’m relatively happy with it. It adds significant complexity, but also significant value.
http://loyc.net/les/juxtaposition-discussion.html
CC-MAIN-2019-26
refinedweb
1,967
50.46
special_snowflake 0.0.8 Find the unique indices for a dataset Because datasets are often provided with scant metadata, I want to infer some of the conventional metadata without depending on special information. One such sort of metadata is the schema of the dataset. Special snowflake looks for unique identifiers in arbitrary datasets. Run it like so. $ snowflake bus_stops.csv route.number, stop.id, n.students time, n.students, location route.name, n.students, location route.name, stop.id route.number, stop.id, time By default, you get all of the combinations of up to three columns inside bus_stops.csv that function as unique indices on the full spreadsheet. Or call it from Python! from special_snowflake import fromcsv from pprint import pprint with open('open-data-index.csv') as fp: pprint(fromcsv(csv.DictReader(fp), n_columns = 2, only_adjacent = False)) This program finds all of the combinations of one or two columns inside open-data-index.csv that function as unique indices on the full spreadsheet. - Downloads (All Versions): - 19 downloads in the last day - 125 downloads in the last week - 632 downloads in the last month - Author: Thomas Levine - License: AGPL - Categories - Package Index Owner: tlevine - DOAP record: special_snowflake-0.0.8.xml
https://pypi.python.org/pypi/special_snowflake/0.0.8
CC-MAIN-2016-07
refinedweb
203
68.16
Photon OLED Shield Hookup Guide Introduction Want your Photon projects to display sensor readings, play pong, or draw little doodles? The Photon OLED Shield might be the perfect fit, and we're going to show you how to use it. If the OLED screen in the picture above looks familiar, it's probably because we use the same component in our Microview and Micro OLED Breakout products, as well as the OLED Block for the Edison. We love it for it's combination of small footprint yet surprisingly clear graphics -- the screen itself is 0.66" across, with a display area measuring 64 pixels wide by 48 pixels tall. Covered in this Tutorial This tutorial will cover the functionality of the OLED shield, how to hook it up in your project, and how to program with it using the SparkFun Micro OLED Library. Required Materials All you need to get started with the Photon OLED Shield is a Photon, a micro-USB cable, and the OLED shield. You'll also want to sign up for an account on particle.io and register your photon. Instructions on how to do this can be found at docs.particle.io. Particle Photon (Headers)WRL-13774 SparkFun Photon Micro OLED ShieldDEV-13628 Suggested Reading - Serial Peripheral Interface (SPI) -- SPI is the preferred method of communication with the display. - I2C -- Alternatively, I2C can be used to control the display. It uses less wires, but is quite a bit slower. OLED Shield Overview Pin Descriptions Since the shield does all of the work for you, there's no need to actually wire these connections - but in case you're looking at datasheets, or code for the Microview or OLED breakout, this table will give you a clue as to what the shield is doing. As always, you can check the schematic for more info. Setting the Jumpers With the board flipped over, you'll notice there are six. That brief overview should cover the 99% use case. Consult the schematic and the notes therein if you have any questions about jumpers or pins. Using the OLED Shield When attaching your Photon to the top of the OLED shield, make sure the beveled end of your Photon (next to A0 and D0) matches up with the beveled lines on the top of the OLED shield (the end with the Open Source Hardware Logo). The pin labels on the Photon should match those on the OLED shield as well. You can stack many of our Photon shields together, which is why the OLED screen juts out to the side. So, you can end up with something like this: Using the Particle OLED Library Great, now that we understand the hardware setup, let's put some code on this thing and see what it can do. Using the Particle library we've written, you'll be able to write text, draw lines and shapes, and generally display anything that'll fit on the screen. Getting the Particle OLED Library For this page we'll be using the online Particle environment. If you're using the Particle Dev environment instead, you can get the library and code examples from the GitHub repository. Load the Demo Example If you haven't created a Particle user account and claimed your board, you'll need to do that now. Starting here is a great idea if you're having trouble. Once you're logged into build.particle.io and have a device selected (all this is covered at the link above), you'll want to click on the create new app button in the sidebar -- it's big and blue, you can't miss it. Call your app something like 'OLED_test'. Next -- this is the important part -- we include the SparkFunMicroOLED library. To do this: - Click on the icon that looks like a bookmark (it's all the way to the left on the black skinny sidebar, 4th up from the bottom) - In the text box under 'community libraries', search for 'OLED' and you'll see 'SparkFunMicroOLED' come up (though it might be cut off a little bit, don't worry). It should look something like this: - Click on the library name, and a bunch of stuff will pop up, including all the library files as well as a few options of what to do with the library. - In this case, we just want to use the library in our app, so click on the 'include in app' button. - This will lead you to list of all your apps - click on the name of the app you just created, and you should see a statement like #include "SparkFunMicroOLED/SparkFunMicroOLED.h"at the top of your app. - Last thing is to add the math library to our sketch - on the line below the first #includestatement, type in: #include "math.h" Now that we've included the library in our app, let's give it some code - just copy the demo code below and paste it into your app, below the include statements. language:cpp /* Micro-OLED-Shield-Example.ino SparkFun Micro OLED Library Hello World Example Jim Lindblom @ SparkFun Electronics Original Creation Date: June 22, 2015 This sketch prints a friendly, recognizable logo on the OLED Shield, then goes on to demo the Micro OLED library's functionality drawing pixels, lines, shapes, and text. Hardware Connections: This sketch was written specifically for the Photon Micro OLED Shield, which does all the wiring for you. If you have a Micro OLED breakout, use the following hardware setup: MicroOLED ------------- Photon GND ------------------- GND VDD ------------------- 3.3V (VCC) D1/MOSI ----------------- A5 (don't change) D0/SCK ------------------ A3 (don't change) D2 D/C ------------------- D6 (can be any digital pin) RST ------------------- D7 (can be any digital pin) CS ------------------- A2 (can be any digital pin) Development environment specifics: IDE: Particle Build Hardware Platform: Particle Photon SparkFun Photon Micro OLED Shield This code is beerware; if you see me (or any other SparkFun employee) at the local, and you've found our code helpful, please buy us a round! Distributed as-is; no warranty is given. */ ////////////////////////////////// // MicroOLED Object Declaration // ////////////////////////////////// // Declare a MicroOLED object. If no parameters are supplied, default pins are // used, which will work for the Photon Micro OLED Shield (RST=D7, DC=D6, CS=A2))); } void loop() { pixelExample(); // Run the pixel example function lineExample(); // Then the line example function shapeExample(); // Then the shape example textExamples(); // Finally the text example } void pixelExample() { printTitle("Pixels", 1); for (int i=0; i<512; i++) { oled.pixel(random(oled.getLCDWidth()), random(oled.getLCDHeight())); oled.display(); } } void lineExample() { int middleX = oled.getLCDWidth() / 2; int middleY = oled.getLCDHeight() / 2; int xEnd, yEnd; int lineWidth = min(middleX, middleY); printTitle("Lines!", 1); for (int i=0; i<3; i++) { for (int deg=0; deg<360; deg+=15) { xEnd = lineWidth * cos(deg * M_PI / 180.0); yEnd = lineWidth * sin(deg * M_PI / 180.0); oled.line(middleX, middleY, middleX + xEnd, middleY + yEnd); oled.display(); delay(10); } for (int deg=0; deg<360; deg+=15) { xEnd = lineWidth * cos(deg * M_PI / 180.0); yEnd = lineWidth * sin(deg * M_PI / 180.0); oled.line(middleX, middleY, middleX + xEnd, middleY + yEnd, BLACK, NORM); oled.display(); delay(10); } } } void shapeExample() { printTitle("Shapes!", 0); // Silly pong demo. It takes a lot of work to fake pong... int paddleW = 3; // Paddle width int paddleH = 15; // Paddle height // Paddle 0 (left) position coordinates int paddle0_Y = (oled.getLCDHeight() / 2) - (paddleH / 2); int paddle0_X = 2; // Paddle 1 (right) position coordinates int paddle1_Y = (oled.getLCDHeight() / 2) - (paddleH / 2); int paddle1_X = oled.getLCDWidth() - 3 - paddleW; int ball_rad = 2; // Ball radius // Ball position coordinates int ball_X = paddle0_X + paddleW + ball_rad; int ball_Y = random(1 + ball_rad, oled.getLCD < oled.getLCD >= (oled.getLCD > oled.getLCDHeight() - 2 - paddleH)) { paddle0Velocity = -paddle0Velocity; } // Change paddle 1's direction if it hit top/bottom if ((paddle1_Y <= 1) || (paddle1_Y > oled.getLCDHeight() - 2 - paddleH)) { paddle1Velocity = -paddle1Velocity; } // Draw the Pong Field oled.clear(PAGE); // Clear the page // Draw an outline of the screen: oled.rect(0, 0, oled.getLCDWidth() - 1, oled.getLCDHeight()); // Draw the center line oled.rectFill(oled.getLCDWidth()/2 - 1, 0, 2, oled.getLCDHeight()); // Draw the Paddles: oled.rectFill(paddle0_X, paddle0_Y, paddleW, paddleH); oled.rectFill(paddle1_X, paddle1_Y, paddleW, paddleH); // Draw the ball: oled.circle(ball_X, ball_Y, ball_rad); // Actually draw everything on the screen: oled.display(); delay(25); // Delay for visibility } delay(1000); } void textExamples() { printTitle("Text!", 1); // Demonstrate font 0. 5x8 font oled.clear(PAGE); // Clear the screen oled.setFontType(0); // Set font to type 0 oled.setCursor(0, 0); // Set cursor to top-left // There are 255 possible characters in the font 0 type. // Lets run through all of them and print them out! for (int i=0; i<=255; i++) { // You can write byte values and they'll be mapped to // their ASCII equivalent character. oled.write(i); // Write a byte out as a character oled.display(); // Draw on the screen delay(10); // Wait 10ms // We can only display 60 font 0 characters at a time. // Every 60 characters, pause for a moment. Then clear // the page and start over. if ((i%60 == 0) && (i != 0)) { delay(500); // Delay 500 ms oled.clear(PAGE); // Clear the page oled.setCursor(0, 0); // Set cursor to top-left } } delay(500); // Wait 500ms before next example // Demonstrate font 1. 8x16. Let's use the print function // to display every character defined in this font. oled.setFontType(1); // Set font to type 1 oled.clear(PAGE); // Clear the page oled.setCursor(0, 0); // Set cursor to top-left // Print can be used to print a string to the screen: oled.print(" !\"#$%&'()*+,-./01234"); oled.display(); // Refresh the display delay(1000); // Delay a second and repeat oled.clear(PAGE); oled.setCursor(0, 0); oled.print("56789:;<=>?@ABCDEFGHI"); oled.display(); delay(1000); oled.clear(PAGE); oled.setCursor(0, 0); oled.print("JKLMNOPQRSTUVWXYZ[\\]^"); oled.display(); delay(1000); oled.clear(PAGE); oled.setCursor(0, 0); oled.print("_`abcdefghijklmnopqrs"); oled.display(); delay(1000); oled.clear(PAGE); oled.setCursor(0, 0); oled.print("tuvwxyz{|}~"); oled.display(); delay(1000); // Demonstrate font 2. 10x16. Only numbers and '.' are defined. // This font looks like 7-segment displays. // Lets use this big-ish font to display readings from the // analog pins. for (int i=0; i<25; i++) { oled.clear(PAGE); // Clear the display oled.setCursor(0, 0); // Set cursor to top-left oled.setFontType(0); // Smallest font oled.print("A0:"); // Print "A0" oled.setFontType(2); // 7-segment font oled.print(analogRead(A0)); // Print a0 reading oled.setCursor(0, 16); // Set cursor to top-middle-left oled.setFontType(0); // Repeat oled.print("A1:"); oled.setFontType(2); oled.print(analogRead(A1)); oled.setCursor(0, 32); oled.setFontType(0); oled.print("A7:"); oled.setFontType(2); oled.print(analogRead(A7)); oled.display(); delay(100); } // Demonstrate font 3. 12x48. Stopwatch demo. oled.setFontType(3); // Use the biggest font int ms = 0; int s = 0; while (s <= 50) { oled.clear(PAGE); // Clear the display oled.setCursor(0, 0); // Set cursor to top-left if (s < 10) oled.print("00"); // Print "00" if s is 1 digit else if (s < 100) oled.print("0"); // Print "0" if s is 2 digits oled.print(s); // Print s's value oled.print(":"); // Print ":" oled.print(ms); // Print ms value oled.display(); // Draw on the screen ms++; // Increment ms if (ms >= 10) // If ms is >= 10 { ms = 0; // Set ms back to 0 s++; // and increment s } delay(1); } } // Center and print a small title // This function is quick and dirty. Only works for titles one // lineWidth() / 2)); // Print the title: oled.print(title); oled.display(); delay(1500); oled.clear(PAGE); } Now, click the 'flash' button (the one that looks like a lightning bolt) and wait for the magic to begin! Resources and Going Further Here are a few links that should help with any further questions you may have about the Photon OLED Shield: - Photon OLED Shield Github - for schematics and board design files - SparkFun OLED Particle Libarary -- this is where to go for the firmware library and example code - Particle Documentation Pages -- go here to set up and configure your Photon (or other Particle devices) - Particle Community Forum -- anything that you couldn't find in the docs should be easily found in the community forum. If you are having trouble, search this forum first, as many of the answers are there already. Going Further Now that you're comfortable with the Photon OLED Shield and its Particle library, what are you going to make with it? Need some inspiration, check out these related tutorials: Serial Graphic LCD Hookup RGB Panel Hookup Guide Reaction Timer The OLED Shield pairs very well with any of our other Photon Shields; check out our hookup guides for those shields:
https://learn.sparkfun.com/tutorials/photon-oled-shield-hookup-guide
CC-MAIN-2021-39
refinedweb
2,103
65.73
Before we go on to more advanced topics, we need to stop for a quick note on portability issues. Modern versions of the Linux kernel are highly portable, running on numerous. Although most programmers are accustomed to freely using standard types like int and long, writing device drivers requires some care to avoid typing conflicts and obscure bugs. The problem is that you can't use the standard types when you need "a 2-byte filler" or "something representing a 4-byte string," because the normal C data types are not the same size on all architectures. To show the data size of the various C types, the datasize program has been included in the sample files provided on O'Reilly's FTP site in the directory misc-progs. This is a sample run of the program on an i386 system (the last four types shown are introduced in the next section): morgana% misc-progs/datasize arch Size: char short short It's interesting to note that the SPARC 64 architecture runs with a 32-bit user space, so pointers are 32 bits wide there, often, generic memory addresses in the kernel are usually unsigned long, exploiting the fact that pointers and long integers are always the same size, at least on all the platforms currently supported by Linux. For what it's worth, the C99 standard defines the intptr_t and uintptr_t types for an integer variable that can hold a pointer value. These types are almost unused in the 2.6 kernel, however. Sometimes kernel code requires data items of a specific size, perhaps to match predefined binary structures,[1] to communicate with user space, or to align data within structures by inserting "padding" fields (but refer to the Section 11.4.4 for information about alignment issues). support the C99-standard types, such as uint8_t and uint32_t; if portability is a concern, those types can be used in favor of the Linux-specific variety..[2] Linus didn't expect the operating system (OS) he wrote for his own use to become multiplatform; as a result, old structures are sometimes loosely typed.. Note that, in recent times, relatively few new interface-specific types have been defined. Use of the typedef statement has gone out of favor among many kernel developers, who would rather see the real type information used directly in the code, rather than hidden behind a user-defined type. Many older interface-specific types remain in the kernel, however, and they will not be going away anytime soon.. Many _t types are defined, that 9.2.6 in Chapter 9). The loose typing is mainly there for historical reasons, but it can create problems when writing code. For example, one can get into trouble by swapping the arguments to functions like outb; if there were a port_t type, the compiler would find this type of error. can find hints in the header files and in the device drivers distributed with the official kernel. When dealing with time intervals, don't assume that there are 1000 jiffies per second. Although this is currently true for the i386 architecture, not every Linux platform runs at this speed.. When playing games with memory, remember that a memory page is PAGE_SIZE bytes, not 4 KB. Assuming that the page size is 4 KB and hardcoding pages that are 4 KB and larger. The macros are defined in <asm/page.h>; user-space programs can use the getpagesize library function if they ever need the information. Let's look at a nontrivial situation. If a driver needs 16 KB for temporary data, it shouldn't specify an order of 2 to get_free_pages. You need a portable solution. Such a solution, fortunately, has been written by the kernel developers and is called get_order: #include <asm/page.h> int order = get_order(16*1024); buf = get_free_pages(GFP_KERNEL, order); Remember that the argument to get_order must be a power of two. Be careful not to make assumptions about byte ordering. Whereas the PC stores multibyte values low-byte first (little end first, thus little-endian), some high-level platforms work the other way (big-endian). Whenever possible, your code should be written such that it does not care about byte ordering in the data it manipulates. However, sometimes a driver needs to build an integer number out of single bytes or do the opposite, or it must communicate with a device that expects a specific order. The include file <asm/byteorder.h> defines either _ _BIG_ENDIAN or _ _LITTLE_ENDIAN, depending on the processor's byte ordering. When dealing with byte ordering issues,. The last problem worth considering when writing portable code is how to access unaligned data—for example, how to read a 4-byte value stored at an address that isn't a multiple of 4 bytes. i386 users often access unaligned data items, but not all architectures permit it. Many to arrange the fields in unpredictable ways, need filler fields to enforce alignment and ensure portability. Finally, be aware that the compiler may quietly insert padding into structures itself to ensure that every field is aligned for good performance on the target processor. If you are defining a structure that is intended to match a structure expected by a device, this automatic padding may thwart your attempt. The way around this problem is to tell the compiler that the structure must be "packed," with no fillers added. For example, the kernel header file <linux/edd.h> defines several data structures used in interfacing with the x86 BIOS, and it includes the following definition: struct { u16 id; u64 lun; u16 reserved1; u32 reserved2; } _ _attribute_ _ ((packed)) scsi; Without the _ _attribute_ _ ((packed)), the lun field would be preceded by two filler bytes or six if we compile the structure on a 64-bit platform. Many internal kernel functions return a pointer value to the caller. Many of those functions can also fail. In most cases, failure is indicated by returning a NULL pointer value. This technique works, but it is unable to communicate the exact nature of the problem. Some interfaces really need to return an actual error code so that the caller can make the right decision based on what actually went wrong. A number of kernel interfaces return this information by encoding the error code in a pointer value. Such functions must be used with care, since their return value cannot simply be compared against NULL. To help in the creation and use of this sort of interface, a small set of functions has been made available (in <linux/err.h>). A function returning a pointer type can return an error value with: void *ERR_PTR(long error); where error is the usual negative error code. The caller can use IS_ERR to test whether a returned pointer is an error code or not: long IS_ERR(const void *ptr); If you need the actual error code, it can be extracted with: long PTR_ERR(const void *ptr); You should use PTR_ERR only on a value for which IS_ERR returns a true value; any other value is a valid pointer.. When working with the linked list interface, you should always bear in mind that the list functions perform no locking. If there is a possibility that your driver could attempt to perform concurrent operations on the same list, it is your responsibility to implement a locking scheme. The alternatives (corrupted list structures, data loss, kernel panics) tend to be difficult to diagnose. is usually a standalone list_head structure. Figure 11-1 shows how the simple struct list_head is used to maintain a list of data structures. List heads must be initialized prior to use with the INIT_LIST_HEAD macro. A "things to do" list head could be declared and initialized with: struct list_head todo_list; INIT_LIST_HEAD(&todo_list); Alternatively, lists can be initialized at compile time: LIST_HEAD(todo_list); Several functions are defined in <linux/list.h> that work with lists: list_add(struct list_head *new, struct list_head *head); Adds the new entry immediately after the list head—normally at the beginning of the list. Therefore, it can be used to build stacks. Note, however, that the head need not be the nominal head of the list; if you pass a list_head structure that happens to be in the middle of the list somewhere, the new entry goes immediately after it. Since Linux lists are circular, the head of the list is not generally different from any other entry. list_add_tail(struct list_head *new, struct list_head *head); Adds a new entry just before the given list head—at the end of the list, in other words. list_add_tail can, thus, be used to build first-in first-out queues. list_del(struct list_head *entry); list_del_init(struct list_head *entry); The given entry is removed from the list. If the entry might ever be reinserted into another list, you should use list_del_init, which reinitializes the linked list pointers. list_move(struct list_head *entry, struct list_head *head); list_move_tail(struct list_head *entry, struct list_head *head); The given entry is removed from its current list and added to the beginning of head. To put the entry at the end of the new list, use list_move_tail instead. list_empty(struct list_head *head); Returns a nonzero value if the given list is empty. list_splice(struct list_head *list, struct list_head *head); Joins two lists by inserting list immediately after head. The list_head structures are good for implementing a list of like structures, but the invoking program is usually more interested in the larger structures that make up the list as a whole. A macro, list_entry, is provided that maps such) } However, as a general rule, it is better to use one of a set of predefined macros for creating loops that iterate through lists. The previous loop, for example, could be coded as: void todo_add_entry(struct todo_struct *new) { struct list_head *ptr; struct todo_struct *entry; list_for_each(ptr, &todo_list) { entry = list_entry(ptr, struct todo_struct, list); if (entry->priority < new->priority) { list_add_tail(&new->list, ptr); return; } } list_add_tail(&new->list, &todo_struct) } Using the provided macros helps avoid simple programming errors; the developers of these macros have also put some effort into ensuring that they perform well. A few variants exist: list_for_each(struct list_head *cursor, struct list_head *list) This macro creates a for loop that executes once with cursor pointing at each successive entry in the list. Be careful about changing the list while iterating through it. list_for_each_prev(struct list_head *cursor, struct list_head *list) This version iterates backward through the list. list_for_each_safe(struct list_head *cursor, struct list_head *next, struct list_head *list) If your loop may delete entries in the list, use this version. It simply stores the next entry in the list in next at the beginning of the loop, so it does not get confused if the entry pointed to by cursor is deleted. list_for_each_entry(type *cursor, struct list_head *list, member) list_for_each_entry_safe(type *cursor, type *next, struct list_head *list, member) These macros ease the process of dealing with a list containing a given type of structure. Here, cursor is a pointer to the containing structure type, and member is the name of the list_head structure within the containing structure. With these macros, there is no need to put list_entry calls inside the loop. If you look inside <linux/list.h>, you see some additional declarations. The hlist type is a doubly linked list with a separate, single-pointer list head type; it is often used for creation of hash tables and similar structures. There are also macros for iterating through both types of lists that are intended to work with the read-copy-update mechanism (described in Section 5.7.5 in Chapter 5). These primitives are unlikely to be useful in device drivers; see the header file if you would like more information on how they work. The following symbols were introduced in this chapter: #include <linux/types.h> typedef u8; typedef u16; typedef u32; typedef u64; Types guaranteed to be 8-, 16-, 32-, and 64-bit unsigned integer values. The equivalent signed types exist as well. In user space, you can refer to the types as _ _u8, _ _u16, and so forth. #include <asm/page.h> PAGE_SIZE PAGE_SHIFT Symbols that that convert/err.h> void *ERR_PTR(long error); long PTR_ERR(const void *ptr); long IS_ERR(const void *ptr); Functions allow error codes to be returned by functions that return a pointer value. #include <linux/list.h> list_add(struct list_head *new, struct list_head *head); list_add_tail(struct list_head *new, struct list_head *head); list_del(struct list_head *entry); list_del_init(struct list_head *entry); list_empty(struct list_head *head); list_entry(entry, type, member); list_move(struct list_head *entry, struct list_head *head); list_move_tail(struct list_head *entry, struct list_head *head); list_splice(struct list_head *list, struct list_head *head); Functions that manipulate circular, doubly linked lists. list_for_each(struct list_head *cursor, struct list_head *list) list_for_each_prev(struct list_head *cursor, struct list_head *list) list_for_each_safe(struct list_head *cursor, struct list_head *next, struct list_head *list) list_for_each_entry(type *cursor, struct list_head *list, member) list_for_each_entry_safe(type *cursor, type *next struct list_head *list, member) Convenience macros for iterating through linked lists. No credit card required
https://www.oreilly.com/library/view/linux-device-drivers/0596005903/ch11.html
CC-MAIN-2019-30
refinedweb
2,171
59.53
The topic you requested is included in another documentation set. For convenience, it's displayed below. Choose Switch to see the topic in its original location. We recommend using Visual Studio 2017 vector::assign Erases a vector and copies the specified elements to the empty vector. - First Position of the first element in the range of elements to be copied. Position of the first element beyond the range of elements to be copied. - Count The number of copies of an element being inserted into the vector. - Val The value of the element being inserted into the vector. - IList The initializer_list containing the elements to insert. After erasing any existing elements in a vector, assign either inserts a specified range of elements from the original vector into a vector or inserts copies of a new element of a specified value into a vector. Example / vector_assign.cpp // compile with: /EHsc #include <vector> #include <iostream> int main() { using namespace std; vector<int> v1, v2, v3; v1.push_back(10); v1.push_back(20); v1.push_back(30); v1.push_back(40); v1.push_back(50); cout << "v1 = "; for (auto& v : v1){ cout << v << " "; } cout << endl; v2.assign(v1.begin(), v1.end()); cout << "v2 = "; for (auto& v : v2){ cout << v << " "; } cout << endl; v3.assign(7, 4); cout << "v3 = "; for (auto& v : v3){ cout << v << " "; } cout << endl; v3.assign({ 5, 6, 7 }); for (auto& v : v3){ cout << v << " "; } cout << endl; } Requirements Header: <vector> Namespace: std Show:
https://msdn.microsoft.com/en-us/library/azbhc96f.aspx
CC-MAIN-2018-13
refinedweb
236
56.96
I wanted to read the vUSB pin to see if usb power was active or not, I didn’t know if the electrons pins could handle 5V in without dividing voltage, if so are all pins 5v tolerant? or should i read on a specific pin? Sorry if this has been asked, but I looked and don’t see it asked for the electron specifically. If there is an easier way to do this also, i’m all ears. This might be a place to find the info you were looking for Especially the Note [1] and there especially the statement in parenthesis Got it, thanks. Also, you can determine if the Electron has external power (USB or VIN) entirely through software without adding any connections or IO pins. #include "Particle.h" bool isUsbPowered(); // forward declaration PMIC pmic; bool lastPowerState = false; void setup() { pmic.begin(); } void loop() { bool powerState = isUsbPowered(); if (powerState != lastPowerState) { Particle.publish("usbPowered", (powerState ? "true" : "false"), 60, PRIVATE); lastPowerState = powerState; } delay(2000); } bool isUsbPowered() { bool isUsbPowered; // Get the system status register from the PMIC byte systemStatus = pmic.getSystemStatus(); // Normally you want to use this. It will return true if the Electron is powered by a USB host (computer/laptop), // USB charger, or the VIN pin. Basically, it really determines if it's externally powered, not USB powered. isUsbPowered = (systemStatus & 0x4) != 0; // Alternatively, you could use this. This will return true if the Electron is powered by a USB host. // It will return false for a USB charger or VIN. // Basically, it's not possible to tell the difference between VIN and USB Charger in software, as far as I can tell // isUsbPowered = ((systemStatus & 0xc0) == 0x40); return isUsbPowered; } /* Fuel Gauge MAX 17043 Power Management BQ24195 //System Status Register //NOTE: This is a read-only register REG08 BIT --- VBUS status 7: VBUS_STAT[1] | 00: Unknown (no input, or DPDM detection incomplete), 01: USB host 6: VBUS_STAT[0] | 10: Adapter port, 11: OTG --- Charging status 5: CHRG_STAT[1] | 00: Not Charging, 01: Pre-charge (<VBATLOWV) 4: CHRG_STAT[0] | 10: Fast Charging, 11: Charge termination done 3: DPM_STAT 0: Not DPM 1: VINDPM or IINDPM 2: PG_STAT 0: Power NO Good :( 1: Power Good :) 1: THERM_STAT 0: Normal 1: In Thermal Regulation (HOT) 0: VSYS_STAT 0: Not in VSYSMIN regulation (BAT > VSYSMIN) 1: In VSYSMIN regulation (BAT < VSYSMIN)ault is 3.5V (101) 0: Reserved */ rickkas, thank you for this, perfect. The datasheet is not clear. It says UART (3) are 3V3. IO pins are FT (fault tolerant). I’m configuring a UART on pins C2 and C3. I shall assume that those pins are FT if configured as IO pins and 3V3 if configured as a UART. FT does not stand for Fault Tolerant but for Five Volt Tolerant (in the Particle context) But sure, the FT is not explicitly mentioned in the UART row. Maybe @rickkas7 can take this one on, since UART may use pull-up resistors and hence the condition of the footnote is not fully satisfied - but for the brief moments the RX pin receives 5V the current via the pull-up won’t hurt IMO. I guess the [4] footnote applies to UART just the same. C2 and C3 are 5-volt tolerant, even in UART mode. There are no pull-ups in UART mode, and note 4 is the one that applies. Since the TX is only 3.3V, it’s not technically 5V TTL serial compatible, since the high voltage is too low. However, it often works. The Electron won’t be damaged by receiving 5V on the RX pin. This applies to the real TX/RX pins, as well as the additional serial ports of the Electron. Thank you for that information. Perfect timing. I sent out for prototype boards a few days ago then over the weekend I thought about 5V tolerance. I redesigned the board with two different 3.3V transceivers. The first boards will arrive this week. I expect they’ll work as you’ve said because that’s how I tested the concept in the first place. I’ll not order the redesigned boards until I see how the first ones work. If we go to Production, of course I’ll do it with 3.3V transceivers.
https://community.particle.io/t/can-pins-take-5v/24427
CC-MAIN-2020-40
refinedweb
709
62.88
_lwp_cond_reltimedwait(2) - get or set a privilege set #include <priv.h> int getppriv(priv_ptype_t which, priv_set_t *set); int setppriv(priv_op_t op, priv_ptype_t which, priv_set_t *set); The getppriv() function returns the process privilege set specified by which in the set pointed to by set. The memory for set is allocated with priv_allocset() and freed with priv_freeset(). Both functions are documented on the priv_addset(3C) manual page. The setppriv() function sets or changes the process privilege set. The op argument specifies the operation and can be one of PRIV_OFF, PRIV_ON or PRIV_SET. The which argument specifies the name of the privilege set. The set argument specifies the set. If op is PRIV_OFF, the privileges in set are removed from the process privilege set specified by which. There are no restrictions on removing privileges from process privileges sets, but the following apply: Privileges removed from PRIV_PERMITTED are silently removed from PRIV_EFFECTIVE. If privileges are removed from PRIV_LIMIT, they are not removed from the other sets until one of exec(2) functions has successfully completed. If op is PRIV_ON, the privileges in set are added to the process privilege set specified by which. The following operations are permitted: Privileges in PRIV_PERMITTED can be added to PRIV_EFFECTIVE without restriction. Privileges in PRIV_PERMITTED can be added to PRIV_INHERITABLE without restriction. All operations that attempt to add privileges that are already present are permitted. If op is PRIV_SET, the privileges in set replace completely the process privilege set specified by which. PRIV_SET is implemented in terms of PRIV_OFF and PRIV_ON. The same restrictions apply. Upon successful completion, 0 is returned. Otherwise, -1 is returned and errno is set to indicate the error. The getppriv() and setppriv() functions will fail if: The value of op or which is out of range. The set argument points to an illegal address. The setppriv() function will fail if: The application attempted to add privileges to PRIV_LIMIT or PRIV_PERMITTED, or the application attempted to add privileges to PRIV_INHERITABLE or PRIV_EFFECTIVE which were not in PRIV_PERMITTED. See attributes(5) for descriptions of the following attributes: priv_addset(3C), attributes(5), privileges(5)
http://docs.oracle.com/cd/E26505_01/html/816-5167/getppriv-2.html
CC-MAIN-2016-44
refinedweb
348
57.16
13 July 2011 18:42 [Source: ICIS news] LONDON (ICIS)--BASF’s Styrolution joint venture with INEOS is “the right move” for the Germany-based international chemicals major as it seeks to reduce the role of basic chemicals in its portfolio, an analyst said on Wednesday. “BASF has been for years now pursuing a strategy to reduce [basic chemicals] as it increasingly bets on specialty chemicals,” said Lars Hettche, analyst at Bankhaus Metzler, a ?xml:namespace> As such, it is right for BASF to shift its styrenics operations into the Styrolution venture and to eventually exit that business entirely, he said. However, Hettche said it remains unclear when BASF will be able to completely withdraw from styrenics. “I think it may take two or three years for BASF to completely exit that business,” depending on market conditions and agreements with partner INEOS, he said. Meanwhile, the €600m ($845m) BASF will receive from Styrolution as part of the deal will help the chemicals major to reduce its debts, the analyst added. BASF and INEOS signed a contract to form Styrolution in May. The European Commission has approved the joint venture but the transaction has not yet closed. Hettche said going forward BASF will continue to benefit from its integrated production structure at key hubs such as In addition, BASF’s Wintershall oil and gas business will provide a hedge against rising raw materials costs. “That means, while competitors will see their margins shrink as oil and gas prices rise, BASF – on a group basis – will not see an influence on results,” he said. ($1 = €0.71)
http://www.icis.com/Articles/2011/07/13/9477306/basfs-styrolution-right-move-for-chemicals-major.html
CC-MAIN-2015-06
refinedweb
265
53.75
Builder - Build XML, HTML, CSS and other outputs in blocks Version 0.06 Simple example.... use Builder; my $builder = Builder->new; my $xm = $builder->block( 'Builder::XML' ); $xm->parent( $xm->child( 'Hi Mum!' ) ); say $builder->render; # => <parent><child>Hi Mum!</child></parent> Another example using same block object.... $xm->body( $xm->div( $xm->span( { id => 1 }, 'one' ), $xm->span( { id => 2 }, 'two' ), ), ); say $builder->render; # => <body><div><span id="1">one</span><span id="2">two</span>/div></body> And finally something a bit more whizzy.... my $rainbow = $builder->block( 'Builder::XML', { indent => 4, newline => 1 } ); $rainbow->colours( sub { for my $colour qw/red green blue/ { $rainbow->$colour( uc $colour ); } }); say $builder->render; # <colours> # <red>RED</red> # <green>GREEN</green> # <blue>BLUE</blue> # </colours> If you need to build structured output then Builder will be exactly what you you've always been waiting for! Just select and/or tailor the blocks you need then simply click them all together to construct the output of your dreams! First we need to create the stack / buffer / scaffold / bucket / zimmerframe (pick your favourite term) object.... use Builder; my $builder = Builder->new; Then you create the blocks associated with this build object.... my $xm = $builder->block( 'Builder::XML' ); my $ns = $builder->block( 'Builder::XML', { namespace => 'baz' } ); Then build your output using these blocks.... $xm->fubar( $xm->before( 'foo' ), $ns->during( 'I3az' ), $xm->after( 'bar' ), ); Continue to add more blocks to hearts content until happy then render it..... my $output = $builder->render; # <fubar><before>foo</before><baz:during>I3az</baz:during><after>bar</after><fubar> Remove the smoke and mirrors and all you are left with is parameter context. Each block component will have its own parameter context. For example when Builder::XML receives no parameters then it will return a closed tag.... $xm->br; # => <br /> For more information see relevant Builder::* block docs. Nothing (at this moment!) By default the constructor will maintain an internal stack (buffer) of the blocks being built. my $builder = Builder->new; This is then later returned (processed) using render method on this object. Using the output named parameter changes default behaviour to immediately output the blocks to the filehandle provided. my $builder = Builder->new( output => \*STDOUT ); There are no other parameters used by constructor. Creates a block in this stack. First arg is the block to use, for eg. 'Builder::XML'. Second arg must be a hashref of options (named parameters). my $builder = Builder->new(); my $xm = $builder->block( 'Builder::XML', { cdata => 1 } ); For options that can be passed as args please see relevant Builder::* documentation. Renders all the blocks for the requested builder stack returning the information. my $output = $builder->render; The render method will automatically flush the builder stack (by calling this method). Unlikely this will be of any use in the outside world! $builder->flush; # there goes all the blocks I just built ;-( Barry Walsh <draegtun at cpan.org> Yep there was some... more on that later! You can also look for information at: My main inspiration came primarily from Builder for Ruby and also a little bit from Groovy Builders Class::XML, XML::Generator GitHub at This is (near) beta software. I'll strive to make it better each and every day! However I accept no liability whatsoever should this software do what you expected ;-) Copyright 2008-2013 Barry Walsh (Draegtun Systems Ltd |), all rights reserved. This program is free software; you can redistribute it and/or modify it under the same terms as Perl itself.
http://search.cpan.org/dist/Builder/lib/Builder.pm
CC-MAIN-2015-06
refinedweb
581
65.42
The Metaprogramming In Production On Python Part 1 Many people think that metaprogramming in Python unnecessarily complicates the code, but if you use it correctly, you can quickly and elegantly implement complex design patterns. In addition, well-known Python frameworks such as Django, DRF, and SQLAlchemy use metaclasses to provide easy extensibility and simple code reuse. In this article, I’ll tell you why you shouldn’t be afraid to use metaprogramming in your projects and show you what tasks it is best for. You can learn more about metaprogramming capabilities in the Advanced Python course. For first, let’s recall the basics of metaprogramming in Python. It is not superfluous to add that all that is written below refers to the version of Python 3.5 and higher. A brief excursion into the Python data model So, we all know that everything in Python is an object, and it is no secret that for each object there is a certain class by which it was generated, for example: >>> def f(): pass >>> type(f) <class 'function'> The type of the object or the class by which the object was spawned can be determined using the built-in type function, which has a rather interesting call signature (it will be discussed later). The same effect can be achieved if you display the attribute __class__ from any object. So, to create functions is a kind of built-in function. Let’s see what we can do with it. To do this, take the blank from the built-in types module: >>> from types import FunctionType >>> FunctionType <class 'function'> >>> help(FunctionType) class function. As we can see, any function in Python is an instance of the class described above. Let’s now try to create a new function without resorting to its declaration via def. To do this, we need to learn how to create code objects using the built-in interpreter function compile: # create a code object that prints the string "Hello, world!" >>> code = compile ('print ("Hello, world!")', '<repl>', 'eval') >>> code <code object <module> at 0xdeadbeef, file "<repl>", line 1> # create a function by passing the code object to the constructor, # global variables and function name >>> func = FunctionType (code, globals (), 'greetings') >>> func <function <module> at 0xcafefeed> >>> func .__ name__ 'greetings' >>> func () Hello, world! Fine! With the help of meta-tools, we learned to create functions on the fly, but in practice, this knowledge is rarely used. Now let’s take a look at how class objects and instance objects of these classes are created: >>> class User: pass >>> user = User() >>> type(user) <class '__main__.User'> >>> type(User) <class 'type'> Obviously, the User class is used to create an instance of the user, it is much more interesting to look at the type class, which is used to create the User class itself. Here we turn to the second variant of calling the built-in type function, which is also a metaclass for any class in Python. A metaclass is, by definition, a class whose instance is another class. Metaclasses allow us to customize the process of creating a class and partially manage the process of creating an instance of a class. According to the documentation, the second variant of the type (name, bases, attrs) returns a new data type or, if simple, a new class, and the name attribute becomes the __name__ attribute of the returned class, bases – the list of parent classes will be available as __bases__ and attrs, a dict-like object containing all attributes and methods of the class, will become __dict__. The principle of operation of the function can be described as a simple pseudocode in Python: type(name, bases, attrs) ~ class name(bases): attrs Let’s see how we can, using only the type call, construct a completely new class: >>> User = type('User', (), {}) >>> User <class '__main__.User'> As you can see, we do not need to use the class keyword to create a new class, the type function copes without it, now let’s consider an example more complicated: class User: def __init __ (self, name): self.name = name class SuperUser (User): "" "Encapsulate domain logic to work with super users" "" group_name = 'admin' @property def login (self): return f '{self.group_name} / {self.name}'. lower () # Now create an analogue of the class SuperUser "dynamically" CustomSuperUser = type ( # Class name 'SuperUser', # List of classes from which the new class is inherited (User), # Attributes and methods of the new class in the form of a dictionary { '__doc__': 'Encapsulate the domain logic to work with super users', 'group_name': 'admin', 'login': property (lambda self: f '{self.group_name} / {self.name}'. lower ()), } ) assert SuperUser .__ doc__ == CustomSuperUser .__ doc__ assert SuperUser ('Vladimir'). login == CustomSuperUser ('Vladimir'). login As can be seen from the examples above, the description of classes and functions using the class and def keywords is just syntactic sugar, and any types of objects can be created with ordinary calls to built-in functions. And now, finally, let’s talk about how you can use dynamic class creation in real projects. Dynamic creation of forms and validators Sometimes we need to validate information from the user or from other external sources according to a previously known data scheme. For example, we want to change the user login form from the admin panel — remove and add fields, change their validation strategy, etc. To illustrate this, we will try to dynamically create a Django form, the description of the scheme of which is stored in the following json format: { "fist_name": { "type": "str", "max_length": 25 }, "last_name": { "type": "str", "max_length": 30 }, "age": { "type": "int", "min_value": 18, "max_value": 99 } } Now, based on the description above, we will create a set of fields and a new form using the type of function we already know: import json from django import forms fields_type_map = { 'str': forms.CharField, 'int': forms.IntegerField, } # form_description - our json format description deserialized_form_description: dict = json.loads (form_description) form_attrs = {} # select the class of the object of the field in the form, depending on its type for field_name, field_description in deserialized_form_description.items (): field_class = fields_type_map [field_description.pop ('type')] form_attrs [field_name] = field_class (** field_description) user_form_class = type ('DynamicForm', (forms.Form,), form_attrs) >>> form = user_form_class ({'age': 101}) >>> form <DynamicForm bound = True, valid = Unknown, fields = (fist_name; last_name; age)> >>> form.is_valid () False >>> form.errors {'fist_name': ['This field is required.'], 'last_name': ['This field is required.'], 'age': ['Ensure this value is less than or equal to 99.']} Super! Now you can transfer the created form to a template and render it to the user. The same approach can be used with other frameworks for validation and presentation of data (DRF Serializers, marshmallow, and others). Configuring the creation of a new class through the metaclass Above, we looked at the “ready” metaclass type, but most often in the code, you will create your own metaclasses and use them to configure the creation of new classes and their instances. In general, the “blank” metaclass looks like this: class MetaClass (type): "" " Description of accepted parameters: mcs is a metaclass object, for example <__ main __. MetaClass> name - string, the name of the class for which it is used This metaclass, for example "User" bases - a tuple of parent classes, for example (SomeMixin, AbstractUser) attrs - a dict-like object that stores the values of attributes and methods of a class. cls - created class, for example <__ main __. User> extra_kwargs - additional keyword arguments passed to the class signature args and kwargs - the arguments passed to the class constructor when creating a new instance "" " def __new __ (mcs, name, bases, attrs, ** extra_kwargs): return super () .__ new __ (mcs, name, bases, attrs) def __init __ (cls, name, bases, attrs, ** extra_kwargs): super () .__ init __ (cls) @classmethod def __prepare __ (mcs, cls, bases, ** extra_kwargs): return super () .__ prepare __ (mcs, cls, bases, ** kwargs) def __call __ (cls, * args, ** kwargs): return super () .__ call __ (* args, ** kwargs) To use this metaclass to configure the User class, use the following syntax: class User (metaclass = MetaClass): def __new __ (cls, name): return super () .__ new __ (cls) def __init __ (self, name): self.name = name The most interesting thing is the order in which the Python interpreter invokes the metamethods of the metaclass at the time of the creation of the class itself: - The interpreter identifies and finds parent classes for the current class (if any). - The interpreter defines the metaclass (MetaClass in our case). - The method is called MetaClass.__ prepare__ – it must return a dict-like object in which the attributes and methods of the class will be written. After that, the object will be passed to the method MetaClass.__ new__ through the argument attrs. We will talk about the practical use of this method a little later in the examples. - The interpreter reads the body of the User class and forms the parameters for transferring them to the MetaClass. - The method is called MetaClass.__ new__ is a constructor method, returns the created class object. We have already met with the arguments name, bases and attrs when we passed them to the type function, and we will talk about the parameter ** extra_kwargs a bit later. If the type of the attrs argument has been changed using __prepare__, then it must be converted to dict before being passed to the super () method call. - The method is invoked MetaClass.__ init__ – an initializer method with which you can add additional attributes and methods to a class object. In practice, it is used in cases when metaclasses are inherited from other metaclasses, otherwise, everything that can be done in __init__, it is better to do in __new__. For example, the __slots__ parameter can be set only in the __new__ method by writing it to the attrs object. - At this step, the class is considered to be created. And now let’s create an instance of our User class and look at the call chain: - At the moment User (…) is called, the interpreter calls the method MetaClass.__call __ (name = ‘Alyosha’), where it passes the class object and the arguments passed. - MetaClass .__ call__ calls User .__ new __ (name = ‘Alyosha’) – a constructor method that creates and returns an instance of the User class - Next, MetaClass .__ call__ calls User .__ init __ (name = ‘Alyosha’) – an initialization method that adds new attributes to the created instance. - MetaClass .__ call__ returns the created and initialized instance of the User class. - At this point, an instance of the class is considered to be created. This description, of course, does not cover all the nuances of the use of metaclasses, but it is enough to start applying metaprogramming to implement some architectural patterns. Forward to the examples! Abstract classes And the very first example can be found in the standard library: ABCMeta – the metaclass allows you to declare any of our classes to be abstract and force all of its heirs to implement predefined methods, properties, and attributes, look at this: from abc import ABCMeta, abstractmethod class BasePlugin (metaclass = ABCMeta): "" " The supported_formats class attribute and the run method must be implemented in the heirs of this class "" " @property @abstractmethod def supported_formats (self) -> list: pass @abstractmethod def run (self, input_data: dict): pass If all abstract methods and attributes are not implemented in the heir, then when we try to create an instance of the heir class, we will get a TypeError: class VideoPlugin(BasePlugin): def run(self): print('Processing video...') plugin = VideoPlugin() # TypeError: Can't instantiate abstract class VideoPlugin # with abstract methods supported_formats Using abstract classes helps to fix the interface of the base class immediately and to avoid errors in future inheritance, for example, typos in the name of the overridden method. Plugin system with automatic registration Quite often, metaprogramming is used to implement various design patterns. Almost any known framework uses metaclasses to create registry objects. Such objects store references to other objects and allows them to be quickly received anywhere in the program. Consider a simple example of auto-registration of plug-ins for playing media files of various formats. Metaclass implementation: class RegistryMeta (ABCMeta): "" " The metaclass that creates the registry from the classes of heirs. The registry stores links like "file format" -> "plugin class" "" " _registry_formats = {} def __new __ (mcs, name, bases, attrs): cls: 'BasePlugin' = super () .__ new __ (mcs, name, bases, attrs) # do not handle abstract classes (BasePlugin) if inspect.isabstract (cls): return cls for media_format in cls.supported_formats: if media_format in mcs._registry_formats: raise ValueError (f'Format {media_format} is already registered ') # save link to plugin in registry mcs._registry_formats [media_format] = cls return cls @classmethod def get_plugin (mcs, media_format: str): try: return mcs._registry_formats [media_format] except KeyError: raise RuntimeError (f'Plugin is not defined for {media_format} ') @classmethod def show_registry (mcs): from pprint import pprint pprint (mcs._registry_formats) And here are the plugins themselves, let’s take the BasePlugin implementation from the previous example: class BasePlugin(metaclass=RegistryMeta): ... class VideoPlugin(BasePlugin): supported_formats = ['mpg', 'mov'] def run(self): ... class AudioPlugin(BasePlugin): supported_formats = ['mp3', 'flac'] def run(self): ... After the interpreter executes this code, 4 formats and 2 plug-ins that can process these formats will be registered in our registry: >>> RegistryMeta.show_registry() {'flac': <class '__main__.AudioPlugin'>, 'mov': <class '__main__.VideoPlugin'>, 'mp3': <class '__main__.AudioPlugin'>, 'mpg': <class '__main__.VideoPlugin'>} >>> plugin_class = RegistryMeta.get_plugin('mov') >>> plugin_class <class '__main__.VideoPlugin'> >>> plugin_class().run() Processing video... Here it is worth noting another interesting nuance of working with metaclasses, thanks to the unobvious method resolution order, we can call the show_registry method not only on the RegistyMeta class but on any other class whose metaclass is: >>> AudioPlugin.get_plugin('avi') # RuntimeError: Plugin is not found for avi Using attribute names as metadata Using metaclasses, you can use class attribute names as metadata for other objects. Nothing is clear? But I’m sure you’ve already seen this approach many times, for example, the declarative declaration of model fields in Django: class Book(models.Model): title = models.Charfield(max_length=250) In the example above, the title is the name of the Python identifier; it is also used for the title of the column in the book table, although we have not explicitly indicated this anywhere. Yes, such “magic” can be implemented using metaprogramming. Let us, for example, implement an application error transfer system on the front-end, so that each message has readable code that can be used to translate a message into another language. So, we have a message object that can be converted to json: class Message: def __init__(self, text, code=None): self.text = text self.code = code def to_json(self): return json.dumps({'text': self.text, 'code': self.code}) All our error messages will be stored in a separate “namespace”: class Messages: not_found = Message('Resource not found') bad_request = Message('Request body is invalid') ... >>> Messages.not_found.to_json() {"text": "Resource not found", "code": null} Now we want the code to become not null, but not_found, for this we will write the following metaclass: class MetaMessage (type): def __new __ (mcs, name, bases, attrs): for attr, value in attrs.items (): # pass through all the attributes described in the class with the Message type # and replace the code field with the attribute name # (if code is not set in advance) if isinstance (value, Message) and value.code is None: value.code = attr return super () .__ new __ (mcs, name, bases, attrs) class Messages (metaclass = MetaMessage): ... Let’s see how our messages look now: >>> Messages.not_found.to_json() {"text": "Resource not found", "code": "not_found"} >>> Messages.bad_request.to_json() {"text": "Request body is invalid", "code": "bad_request"} What you need! Now you know what to do so that by the format of the data you can easily find the code that processes them. Caching class metadata and its heirs Another frequent case is caching of any static data at the stage of class creation, in order not to waste time on their calculation while the application is running. In addition, some data can be updated when creating new instances of classes, for example, the count of the number of objects created. How can this be used? Suppose you are developing a framework for building reports and tables, and you have such an object: class Row (metaclass = MetaRow): name: str age: int ... def __init __ (self, ** kwargs): self.counter = None for attr, value in kwargs.items (): setattr (self, attr, value) def __str __ (self): out = [self.counter] # attribute __header__ will be dynamically added in the metaclass for name in self .__ header __ [1:]: out.append (getattr (self, name, 'N / A')) return '| '.join (map (str, out)) We want to save and increase the counter when creating a new series, and also want to generate the header of the resulting table in advance. Metaclass to the rescue! class MetaRow (type): # global counter of all rows created row_count = 0 def __new __ (mcs, name, bases, attrs): cls = super () .__ new __ (mcs, name, bases, attrs) # Cache a list of all fields in a row sorted alphabetically cls .__ header__ = ['No.'] + sorted (attrs ['__ annotations __']. keys ()) return cls def __call __ (cls, * args, ** kwargs): # creating a new series takes place here row: 'Row' = super () .__ call __ (* args, ** kwargs) # increment global counter cls.row_count + = 1 # set the current row number row.counter = cls.row_count return row Here you need to clarify 2 things: - The Row class has no class attributes with the names name and age — these are type annotations, so they are not in the attrs dictionary keys, and to get the list of fields, we use the class attribute __annotations__. - The operation cls.row_count + = 1 should have misled you: how is that? After all, cls is a class Row it does not have an attribute row_count. That’s right, but as I explained above – if the created class does not have an attribute or method that they are trying to call, then the interpreter goes further along the chain of base classes – if there is no one, the metaclass is searched. In such cases, in order not to confuse anyone, it is better to use another record: MetaRow.row_count + = 1. See how elegantly you can now display the entire table: rows = [ Row(name='Valentin', age=25), Row(name='Sergey', age=33), Row(name='Gosha'), ] print(' | '.join(Row.__header__)) for row in rows: print(row) № | age | name 1 | 25 | Valentin 2 | 33 | Sergey 3 | N/A | Gosha By the way, the display and work with the table can be encapsulated in any separate class Sheet. To be continued… In the next part of this article, I will explain how to use metaclasses to debug your application code, how to parameterize the creation of a metaclass and show the main examples of using the __prepare__ method. Stay tuned! In more detail, about metaclasses and descriptors in Python, I will talk in the framework of Advanced Python intensive.
https://www.smartspate.com/the-metaprogramming-in-production-on-python-part-1/
CC-MAIN-2020-50
refinedweb
3,104
61.46
Condition variables let you block until a condition is met. For example, let's say that we're writing a little TCP server that can have up to MAX_CLIENTS connected. We might start with: import ( "net" "sync" ) type Server struct { sync.Mutex clients int } func (s *Server) Listen(address string) { l, err := net.Listen("tcp", address) if err != nil { panic(err) } defer l.Close() for { conn, err := l.Accept() if err != nil { //to do log this continue } s.Lock() s.clients++ s.Unlock() go s.handleClient(conn) } } func (s *Server) handleClient(conn net.Conn) { defer s.disconnected() for { // ... } } func (s *Server) disconnected() { s.Lock() s.clients-- s.Unlock() } One way to limit the total number of clients would be to check the value of s.clients within a loop: for { s.Lock() for s.clients == MAX_CLIENTS { s.Unlock() time.Sleep(time.Second) s.Lock() } s.Unlock() conn, err := l.Accept() ... } A more elegant solution is to use a condition variable. Condition variables provide a simple mechanism which our goroutines can use to signal a change to s.clients. First, we define the condition variable: import ( "net" "sync" "sync/atomic" ) type Server struct { clients uint64 cond *sync.Cond } Condition variable are made up of their own mutex. To iniate one, we'd do: s := &Server{ cond: &sync.Cond{L: &sync.Mutex{}}, } Next, instead of the above for spin, we can Wait for a signal: for { s.cond.L.Lock() for s.listeners == MAX_CLIENTS { s.cond.Wait() } s.cond.L.Unlock() conn, err := l.Accept() ... } And we change our disconnected method: func (s *Server) disconnected() { s.cond.L.Lock() s.clients-- s.cond.L.Unlock() s.cond.Signal() } There are a couple interesting things in the above code. First of all, notice the locking and unlocking around the call to Wait. It might seem like we're locking for a very long time. But Wait unlocks L on entry and relocks L on exit. This results in much cleaner code -- you lock and unlock normally, without being locked while you wait. Also, notice that we're still checking our condition inside of a loop. This is because the state of s.clients could be changed by a different goroutine between the time that the signal is sent and our code exiting Wait. (In this specific example, when the blocked goroutine is also the only one that can increment s.clients, the for loop is unecessary. But I wanted to show the for loop example anyways because it's more complete and more common).
https://www.openmymind.net/Condition-Variables/
CC-MAIN-2020-05
refinedweb
421
70.6
A one-hop neighbor. More... #include <neighbor.hh> A one-hop neighbor. Return if this neighbor is advertised in TC messages. Return the MPR candidacy of this neighbor. MPR candidacy is computed whenever Neighbor's link state changes. TODO: NOTE WELL: If a neighbor is not reachable due to no good ETX links, it MUST NOT be considered as an MPR candidate. Currently the code does not take account of this. Return the previously stored result of MPR selection. This is computed by the deferred MPR selection task in Neighborhood. Re-count the degree of this strict one-hop Neighbor. Section 8.3.1 defines the degree of a one-hop neighbor as the number of symmetric neighbors of the neighbor, excluding any members of the one-hop neighborhood, and excluding this node. Triggered by a two-hop link state change. TODO: Distribute the computation by pushing the responsibility for signalling the change of state to TwoHopNeighbor, rather than doing it here. Update the MPR candidacy of this neighbor. Triggered by a one-hop or two-hop link state change. The Neighborhood is notified of the change. true if neighbor is an MPR. Note: This may be out of sync with the actual MPR state; it does not get updated until an MPR selection is performed from Neighborhood (the computation is not fully distributed).
http://xorp.org/releases/current/docs/kdoc/html/classNeighbor.html
CC-MAIN-2018-05
refinedweb
224
68.06
G&K Services doesn't get a lot of public attention, even though a wide swath of North American workers have likely worn a uniform or used paper towels supplied by the more than 100-year-old company. Likewise, CEO Douglas Milroy isn't exactly a household name -- but he deserves a round of applause. After more than seven years at the helm of G&K, Milroy struck a deal to sell the company to rival Cintas for $2.2 billion, including debt. Talk about selling high -- the purchase price is a 19 percent premium to Monday's close. which was itself a record. But for G&K shareholders, his smart stewardship has become par for the course. Under Milroy's watch, shareholders have reaped a roughly fivefold return (including dividends). That's about double what relevant benchmark indexes have yielded over the same stretch and also greater than the return delivered by Cintas, according to data compiled by Bloomberg. The CEO took an ax to the company's cost structure and churned out consistent improvements to both the operating margin and return on invested capital. G&K has missed analysts' earnings estimates just three times in the last 29 quarters. After all the productivity gains, G&K had been hoping to rely more on strong, stable revenue growth to fuel its next phase. On that front, the company ran into some issues not of its own making. Energy firms slashing jobs amid the slump in commodity prices aren't using as many uniforms and safety gear as they once were, while industrial and manufacturing companies are grappling with the sting of slowing global growth. As such, analysts were growing a bit wary of G&K's ability to reach ever-higher stock prices. One way to remedy the growth issue is to just buy it, something Milroy said as recently as June that he was thinking about -- but only at the right valuation. Here's what the CEO told a Robert W. Baird conference: "We've gotten the kind of results you've seen through a lot of discipline and we're not going to give it up in the M&A markets." Why not lock in your gains and let someone else offer you a nice plump valuation instead? In its deal with Cintas, G&K is commanding about 13 times its projected Ebitda for fiscal 2017 . There aren't many perfect M&A comparisons for such a niche industry, but that valuation exceeds the median paid for recent deals of size in the commercial-services industry at large. The purchase price may yet swell. As part of its agreement with Cintas, G&K agreed to cease existing previously conducted discussions with any parties about an acquisition offer -- which suggests that Cintas wasn't the only one who was interested. That said, the stock isn't trading past the offer price, so traders seem to see a counter-bid as unlikely. Either way, Milroy should feel good about a job well done. This column does not necessarily reflect the opinion of Bloomberg LP and its owners. Data is calculated on a weekly basis through the end of last week. For what it's worth, this is also a pretty great deal for Cintas, which is targeting as much as $140 million of annual synergies. The shares climbed as much as 9.5 percent on Tuesday. To contact the author of this story: Brooke Sutherland in New York at bsutherland7@bloomberg.net To contact the editor responsible for this story: Beth Williams at bewilliams@bloomberg.net
https://www.bloomberg.com/gadfly/articles/2016-08-16/g-k-services-is-going-out-on-top-thanks-to-its-ceo
CC-MAIN-2017-34
refinedweb
597
62.17
So, I know the question sounds strange but I don't really know how to phrase it better. I'll try to explain: I'm writing my own version of printf, this is the code I have so far: #include...... I'm trying to implement a function that receives an array of integers and its capacity and fills the array with all the values read from the keyboard which are prime numbers, without overflowing it,... Thanks, now I'm trying to get the maximum hour from the file and if two hours are equal then check their minutes to sort them in descending order. Would it be easier to use a structure in this case?... I want to implement a program that reads data from a file specified as a command-line argument, having the following format: username, hh, mm where the fields are separated by a comma and might... I'm kind of a newbie in C and I haven't learned about setlocale yet Okay I have no more warnings: #include <stdio.h> #include <string.h> #include <ctype.h> #include <stdlib.h> #define MAX 100 unsigned checkWord(char *word) { Haha, that's actually what I wanted to do so I would spare myself of all the work but I thought I'd solve it the way it was stated first Write a function that reads data from standard input until EOF is found and returns the maximum of the real numbers found into the input. We consider a real number as a number composed of two... I want to write a function that splits a string into words (strings with no whitespace), returning a (dynamically allocated) array of pointers to (dynamically allocated) copies of the words in the... Thanks a lot, it's so much clearer now! Just to make sure I understood fully, you wrote i < cap - 1 because we don't take the terminator '\0' into a account, right?
https://cboard.cprogramming.com/search.php?s=e0043cdad87b4c99ecbbda81565fbae7&searchid=6521913
CC-MAIN-2021-17
refinedweb
323
67.69
I am trying to write a usb mass storage driver. I don't have a storage device, so I am trying to allocate some memory. below is the program I wrote to emulate a 512*2048 (1 MB) size usb drive #include <stdio.h> #include <stdlib.h> char ** called_func(char **mem, char start_sec, char num_of_sec) { char **read_mem, row, column; read_mem = (char **)malloc(sizeof(char *) * 512 * num_of_sec); for(row = 0 ; row < num_of_sec ; row++) for(column = 0 ; column < 512 ; column++ ) read_mem[row][column] = mem[row + start_sec][column]; return read_mem; } int main() { char **mem, row , column, i, j; char **ret; mem = (char **)malloc(sizeof(char *) * 512 * 2048); if(mem == NULL) printf("allocation failed\n"); for(row = 0; row < 2048 ; row++) for(column = 0 ; column < 512 ; column++) mem[row][column] = 'a'; ret = called_func(mem, 2, 6);//start sec, num of sectors this time I am going to read from the second sector, and total 6 sectors for(i=0;i<2;i++) { for(j=0;j<6;j++) { printf("ret[i][j] = %c",i, j, ret[i][j]); } printf("\n"); } return 0; } The first problem is that you are not allocating memory for each char*. mem is a pointer to a pointer. This means this value will need allocation (as you do): mem = (char **)malloc(sizeof(char*) * 2048); You can remove the 512 from your code because this only needs memory for the pointers themselves not the actual characters in the pointers. Then you need to malloc each individual char*. You could do this inside your for loop: mem[row] = malloc(513); This allocates memory for each char pointer. Keep in mind you need to add space for the null character at the end. To free these you are going to need a loop similar to this: for(row = 0; row < 2048; row++) { mem[row]; } free(mem); That will get rid of the individual pointers and the double pointer. This is enough to get you started but there are a few other issues with your code. I would recommend either using some type of debugger or simply adding print statements to watch what is happening with the code as it runs.
https://codedump.io/share/VbotAdij0BJx/1/what-is-wrong-with-this-c-program
CC-MAIN-2016-50
refinedweb
357
64.24
URL: <> Summary: By default binaries should be built with '-g -O2' Project: GNUstep Submitted by: yavor Submitted on: Tue 11 Aug 2009 12:02:16 AM EEST Category: Makefiles Severity: 3 - Normal Item Group: Change Request Status: None Privacy: Public Assigned to: None Open/Closed: Open Discussion Lock: Any _______________________________________________________ Details: This change 2006-09-20 Nicola Pero <address@hidden> By default compile everything with debug=yes. To get the traditional behaviour, please use 'make debug=no'. * common.make (debug): Turn on debug by default. and the resulting comment and behavior: # Enable debug by default. This is according to the GNU Coding # Standards. ifneq ($(debug), no) debug = yes endif ifeq ($(debug), yes) # This is filtered out as it compromised debugging OPTFLAG := $(filter-out -O%, $(OPTFLAG)) ... is wrong. The GNU Coding Standards require that packages should be built with debugging symbols (-g) by default, but does not go so far in agressively removing any optimization. For example, '(standards)Configuration' says: `VARIABLE=VALUE' ... For example, the user could issue `configure CFLAGS=-g CXXFLAGS=-g' to build with debugging information and without the default optimization. For any package that uses the GNU Build System, if CFLAGS/CXXFLAGS/OBJCFLAGS/etc are not specified by the user, they are set to '-g -O2'. This is a long time tradition, and GNUstep Make behaved this way until the above change. This is causing us grief in Debian as the Debian Policy requires all binaries to be built with -O2 (unless there is a good reason for -O3); unoptimized builds are considered a bug so we have lots of "buggy" GNUstep packages now (from a Policy perspective, at least). _______________________________________________________ Reply to this item at: <> _______________________________________________ Message sent via/by Savannah
http://lists.gnu.org/archive/html/bug-gnustep/2009-08/msg00022.html
CC-MAIN-2015-35
refinedweb
284
50.67
RE: [xml-doc] XML and technical writing Expand Messages - On Wed, 2003-02-19 at 10:46, Sean Wheller wrote: > > true of most schemes. its not inherent in the DTD, but the supportNo, DTDs and Schemes define the structure of XML files, they do not > > package > > This is a very cryptic statement. But if I decrypt it correctly, then I > cannot agree with that statement. My understanding is that DTD/Scheme's are > vocabularies to describe data stored in xml files, not how to display it. describe the data. To say that an <a> may contain a <b> and a <c>, both of which should contain integer numbers, says absolutely nothing about what that data might represent > The Opening statement of the OEBPS says "The Open eBook Publicationyes, OEB is about much more than a DTD, its about a system, >." and an assumption of meaning of XML elements > Though eBook has its uses as does DocBook, TEI and the DTDand I think that is largely orthoganal to which DTD you use. you > provided with DITA. They are just a tools. I would just like to be certain > of using the tool that enables me to obtain any format, electronic or print. want a processing system, not just a DTD. > What I think the world wants is a format that can be used to obtain alland I claim that DTDs have little or no influence on the processing > subsets. I am convinced that DocBook does not do that and I continue to > explore new options. I want a DTD that enables support across all > applications application > . So a document can be part USUPTO,fine. use schemas and > part NewsML, part DocBook, part DITA, part TEI, part VoXML, part WSDL, part > WML, part any other DTD or subset and still validate. namespaces, and you can create something that is technically a valid XML file. and then what? if your down-wind application cannot grok all those elements, you are nowhere. what I am saying is "defining a vocabulary does not give you an end-user application" -- Sebastian Rahtz OUCS Information Manager 13 Banbury Road, Oxford OX2 6NN. Phone +44 1865 283431 - On Monday 17 February 2003 08:25, Sean Wheller wrote: > Ian,Sorry, can't help there - technical documents ain't my bag, but it would seem > > Other than DocBook, DITA, TEI DTD's, can you suggest others that may work > well for Technical Documents. I have not found any for Technical Authoring > purpose and so, like thousands of others, reference a known standard with a > large support base of users. Mainly DocBook. from a distance that the sort of documentation that technical people create is a fairly close fit to the sort of schemes that can encapsulate that kind of editorial, as they're invariably created by quite similar kinds of people with similar mindsets. It's almost as if, and it wouldn't surprise me to find to be the case, that an awful lot of DocBook is used to write about DocBook (which I know is not the case, but it seems that way sometimes). Or, to put it another way, technical authors are just precisely the sort of people who would have come up with the concepts behind something like DocBook in the first place, first of all - others in other fields would have followed, because not everybody spends their time submersed in DTDs, XML, or even a conceptual awareness of the structure (if any) of their own subject area. That's what I mean about taking a look at DocBook, seeing how it approaches what it does, then roll your own. > DocBook was not designed for magazine publishing, story writing or poetry.Exactly. And to add to that, I'd add that because DocBook became predominant > It is for computer based materials, the format of which is by nature very > structured. As there is a lack of DTD's in the niches you mention people do > tend to use DocBook for other purposes. I am not sure this is the solution, > but it certainly can serve as a base with which to build new DTD's. in it's own area pretty soonish, it's had time to work out paradigm bugs and general approaches, and by experts in documentation and document types. Other 'normal' people in the rest of the world of topics are less likely to be as well versed in how to knock up a DTD or even how to factor down their own problem space - typically because they spend all their lives in it, so they never get to see the architectural structure of their own expertise from the outside. In a way, it was easier for the technical documentation practitioners, because they're only next door to the concept of literate markup and factoring of topic structure. Other people live further away, so the job is potentially harder - unless, like I say, they derive influence from DocBook and roll their own. > For more creative tasks I think that XML still has some way to go. I wouldI'd say it's got to get to that stage, and soonish, otherwise it simply > be interested in hearing of any other DTD's technical or creative. I also > like to keep tabs on eBook, but the problem is that it is very e-centric > and I don't think everyone will standardize on a single display format. So > for now, DocBook let's me transform to formats of all types, including > print. becomes a (admittedly widespread) domain of wizardry and esoterica. Well, maybe not that bad, but, well, the facility to feed topic in and get document structure out the other end in the form of a document type (is there a more abstract term for a topic or subject architecture or construction than by directly referring to it as a DTD or Schema?) thingy. I mean, to give an example - if I were to create a special editorial encapsulation language for my own CV for job applications it might consist of a handful of elements: <name/> <contact-details/> and <job-description/> <bullshit/> - which is a pretty simple structure in and of itself. I very much advocate a roll your own stance for each individual need, but at the same time, I very much want mistakes that have already been made and ironed out in other cases elsewhere in time and space to be learned from, somehow. I suppose there might be a form of 'best practice' (I hate that term - why isn't there a 'worst practice'?) approach to factoring down a topic area into something that fits like a glove in places where DocBook clearly wont (ie, the rest of the world). It's all good stuff. -- Ian Tindale - [Post to XML-Doc] Sam Wheller wrote to XML-Doc: > The Opening statement of the OEBPS says "The Open eBook PublicationIt may be easier to send men to Mars than to come up with "The >." > > As a subset of XHTML, the OEBPS is more about like it's HTML parent, a > language to define how a user agent will display data. Though as an XHTML > subset it conforms to the XML compliance list, mainly it has to be well > formed. In the opening of the spec "Purpose and Scope" the document states > "In order for electronic-book technology to achieve widespread success in > the marketplace, Reading Systems must have convenient access to a large > number and variety of titles. The Open eBook Publication Structure (OEBPS) > is a specification for representing the content of electronic books." > > It continues to define the relationship to other technologies. > > <quote> > This specification combines subsets and applications of other > specifications. Together, these facilitate the construction, organization, > presentation, and unambiguous interchange of electronic documents: ... > > The definition of these relationships makes OEBPS very niche specific, > electronic, electronic, electronic. This is fine and I do agree that it has > its place, but I think I would like to find a DTD/schema that is at the top > of the system. Though eBook has its uses as does DocBook, TEI and the DTD > provided with DITA. They are just a tools. I would just like to be certain > of using the tool that enables me to obtain any format, electronic or print. > I think I am saying that I see OEBPS as an optional presentational format, > just like HTML, only once you are in OEBPS its hard to move up the chain to > the parent format. > > What I think the world wants is a format that can be used to obtain all > subsets. I am convinced that DocBook does not do that and I continue to > explore new options. I want a DTD that enables support across all > applications, in effect a merger of all existing DTD/Scheme's and a few more > that I can think of. The trick is how to bring all this together and yet > still manageable. The answer I think lies in a loose technology that enables > the inclusion of all Root DTD/Scheme's. So a document can be part USUPTO, > part NewsML, part DocBook, part DITA, part TEI, part VoXML, part WSDL, part > WML, part any other DTD or subset and still validate. > > Naturally there would be a large and small deltas between all the > DTD/Scheme's. This I would consider the user core. Those parts that remain > unique to one or other DTD/Scheme shall possibly remain outside the > application of other DTD/Scheme's. So the common part of DocBook and TEI can > be seamlessly interchanged without having to create separate files. > > Complex, but then if was easy we would be doing it years ago. One True and Universal Textual Markup XML Vocabulary"! <laugh/> As a member of the Open eBook Forum Publication Structure Working Group (PSWG), which is charged with maintaining and updating OEBPS, I am happy to see that there is some discussion on XML-Doc of OEBPS and its role(s) in ebook publishing -- it definitely is "on-topic" since ebooks (and ebook reading devices) are becoming more and more mainstream for XML-based document presentation. [Any thoughts and opinions I give below are mine alone, and do not necessarily represent those of OeBF and/or PSWG.] Yes, I agree with many (but not all) of the insightful points raised by Sam Wheller. For example, OEBPS is definitely focused more on direct (and indirect) electronic presentation than on transformation into other vocabularies and for direct conversion into print. (In fact, I've lately been stating my belief that OEBPS, combined with MathML, SVG and XLink, has the potential to become the best general purpose direct ebook presentation format, far surpassing all other direct electronic presentation formats, including PDF. This is a topic for a different discussion which probably is off-topic to this group.) One little known aspect about OEBPS makes it much more powerful as a markup vocabulary than at first glance: its extensibility. Sure, the Basic OEBPS 1.2 markup vocabulary is a pure, selected subset of XHTML 1.1. However, document authors may use other tags in their OEBPS documents besides those in the Basic vocabulary. Documents using such custom tags are called "Extended OEBPS Documents". One restriction is that each custom element must be provided a CSS style rule (if CSS 'display' is not defined, then it is assumed 'display:inline'), thus it is not possible to assign all possible XHTML presentation constructs (such as <a>, <object>, <image> and <ol>/<ul>) to non-HTML tags: all that is supported is CSS-display 'inline', 'block', 'none', and table-related functions. For 'inline' and 'block' presentation, this is essentially equivalent to using XHTML <div> and <span> with classes. Despite OEBPS being capable of markup extensibility, it by-and-large is XHTML in presentation orientation, and thus is incompatible in a few ways with other document markup vocabularies such as TEI and DocBook. That is, it is difficult if not impossible to make a document simultaneously Extended OEBPS and TEI (or DocBook) conformant without the need for significant transformation between the two (e.g., via XSLT). For example, one area of "incompatibility" is in the handling of notes (e.g., footnotes, endnotes). In TEI, a note can be placed inline with the main text using the <note> element. In DocBook, there is, for example, the <footnote> element which works similarly to TEI <note>. In XHTML/OEBPS, notes are placed elsewhere (either in the same document or in a separate document) and are linked to using an anchor <a> tag or using XLink. (One could in OEBPS use something like a TEI <note> tag in "Extended OEBPS" mode and declare it CSS 'display:none', but then proper rendering of this note content requires a specific OEBPS Reading System be designed to render this tag, thus the OEBPS Publication author is not guaranteed that all other OEBPS Reading Systems, and HTML web browsers, would render it as desired.) Another major area of incompatibility is the general structure of the Publication itself. OEBPS essentially links (or more properly "knits") together one or more documents using a separate XML document (the Package file) to act as the "control center" for the Publication. In TEI and DocBook, as I best understand them, a lot of the features of the OEBPS Package are incorporated right into one content document itself, including publication metadata. In fact, the Package is the real innovation in OEBPS which brings a new dimension to Publication structuring not found in the TEI and DocBook paradigms, and this goes beyond solely for presentation purposes. For example, highly non-linear documents (such as experimental hypertext literature and general "help" type of documents) fit much better into OEBPS than into TEI/DocBook, the latter of which tend to be quite linear in orientation (which is understandable if the source for the markup, or the intended output, is highly linear such as print). And PSWG is now considering greatly improving the capability of OEBPS to handle such highly non-linear documents, as well as specifying how to "linearize" such non-linear publications for output on linear-only formats such as print, and to provide better navigation throughout such complex documents while meeting the important need for accessibility. Now, Sam Wheller brought up his thoughts that the current book markup vocabularies (including OEBPS, DocBook, and TEI) are not a good starting framework by which a publisher or document producer would be able to use as "Source" to create all other needed document formats, both print and electronic. I admit that I go back and forth on this issue, and in the near-future will probably continue to do so. <laugh/> At least in principle, especially with the rise of transformation tools such as XSLT and XSL:FO, I see hope that a future version of OEBPS may actually fit the bill. For example, properly structured OEBPS, where the markup strictly separates styling from structure ala DocBook/TEI by using a pre-defined div/span/class library, or simply adding extended tags to give TEI/DocBook-like document structure, may be considered a possible Source format since OEBPS is friendly to both direct electronic rendering of non-linear documents and to repurposing into more linear formats such as print. (As a variation on this theme, one can build a new DTD which would fit into the current OEBPS framework and do the structural things that TEI and DocBook now support. This is an area I would seriously explore -- if need be the current OEBPS specification can be tweaked to provide for better flexibility here.) Certainly it is now possible to work in the opposite direction. For example, as I understand it DocBook publications can be transformed into high presentation quality XHTML+CSS (and from there into OEBPS with very little extra effort.) Properly structured documents using a selected subset from TEI can likewise be transformed. The bottom line is that I am in sympathy with Sam Wheller's desire for a "Source" XML markup vocabulary for books and documents, from which everything else can be created. The current markup vocabularies of OEBPS, TEI, DocBook, etc., all tackle different aspects of this issue, and each provides interesting insights into what this Source vocabulary must accomplish. DocBook and TEI rightly focus on marking up only the structure/semantics of documents with total separation of styling away from the documents (OEBPS can do this just as well, but OEBPS 1.2 still allows the "freedom" for crappy markup practice since some styling markup is still allowed in documents, such as the yucky <i> and <b> tags and the deprecated "style" attribute.) OEBPS brings to the table much better compatibility with direct electronic rendering, very good content modularity (which is especially useful for non-linear publications), as well as the innovative Package construct so Publication parameters are kept separate from the content documents (one can strongly argue that what describes a "Publication" should be kept separate from the textual content modules/documents which comprise the Publication.) Just my $0.02 worth. Jon Noring - In fact, this is of interest: (yes, I've just spent the last fortnight submersed in installing Gentoo Linux here). -- Ian Tindale Your message has been successfully submitted and would be delivered to recipients shortly.
https://groups.yahoo.com/neo/groups/xml-doc/conversations/topics/3740?l=1
CC-MAIN-2015-48
refinedweb
2,877
55.37
I would like to know how to adapt this code to give me X digits of pi when asked because right now it just prints a random amount while True: print("how many digits of pi would you like") def make_pi(): q, r, t, k, m, x = 1, 0, 1, 1, 3, 3 number = int(input()) for j in range(int(number)): digits = make_pi() pi_list = [] my_array = [] for i in make_pi(): my_array.append(str(i)) my_array = my_array[:1] + ['.'] + my_array[1:] big_string = "".join(my_array) print("here is the string:\n %s" % big_string) print('{:.xf}'.format(math.pi)) Where x is the number of digits of pi you want to print after the decimal. EDIT: IF you want to do it you way then big_string = ''.join(my_array[:x]) Where x is the number of characters including leading 3 and decimal point.
https://codedump.io/share/taZpiUhOpJi9/1/generating-pi-in-python
CC-MAIN-2017-09
refinedweb
138
76.56
0 I'm having quite a few errors on this one. This is my first attempt on "free" coding outside tutorials. I do believe it only has to do with bad use of brackets from my side, but i'm a beginner. Line 14 - Illegal start of expression Line 18 - Class, interface or enum expected Line 21 - Class, interface or enum expected public class Class_z { public static void main(String[]args){ int c = 5; int d = 10; int answer = true1(99); System.out.println(answer); if(c >= d){ //true static int true1(int a){ return a; } } else { System.out.println("False"); } }
https://www.daniweb.com/programming/software-development/threads/461193/program-illegal-start-of-expression
CC-MAIN-2017-22
refinedweb
101
62.07
Jochen De Smet 2011-05-27 A patch somewhere in march added a few calls to g_slist_free_full in sipe-media.c ; according to the glib docs here this function is only available in glib 2.28 and up. Configure is still set to check for just glib 2.12 or higher though. In my local copy I've added the following to provide g_slist_free_full for glib < 2.28, but maybe it's easier to just replace the two calls with a loop or something? --- ../siplcs/src/core/sipe-utils.h 2011-05-25 22:29:39.253221634 -0400 +++ src/core/sipe-utils.h 2011-05-26 10:32:34.390487634 -0400 @@ -464,3 +464,9 @@ * @return @c TRUE if the string represents AV conference URI */ gboolean sipe_utils_is_avconf_uri(const gchar *uri); + +#if GLIB_CHECK_VERSION(2,28,0) +#else +void g_slist_free_full (GSList *list, GDestroyNotify free_func); +#endif + --- ../siplcs/src/core/sipe-utils.c 2011-05-25 22:29:39.233221634 -0400 +++ src/core/sipe-utils.c 2011-05-26 10:32:26.670487634 -0400 @@ -612,6 +615,27 @@ return g_strstr_len(uri, -1, "app:conf:audio-video:") != NULL; } +#if GLIB_CHECK_VERSION(2,28,0) +#else +/** + * g_slist_free_full: + * @list: a pointer to a #GSList + * @free_func: the function to be called to free each element's data + * + * Convenience method, which frees all the memory used by a #GSList, and + * calls the specified destroy function on every element's data. + * + * Since: 2.28 +**/ +void +g_slist_free_full (GSList *list, + GDestroyNotify free_func) +{ + g_slist_foreach (list, (GFunc) free_func, NULL); + g_slist_free (list); +} +#endif + /* Local Variables: mode: c Stefan Becker 2011-05-27 g_slist_free_full() is only used in sipe-media.c. As sipe-media.c will never be compiled on obsolete platforms, I don't think this patch is necessary. Jochen De Smet 2011-05-28 Fair enough. In case anyone else is looking, binary packages for 2.28 for windows can be found at
http://sourceforge.net/p/sipe/discussion/688535/thread/7f9967d6
CC-MAIN-2015-32
refinedweb
308
60.21
Wait for 10 seconds. just count & sleep * * @author srasul * */ public class LightThread implements Runnable { public LightThread() { new Thread(this).start(); } @Override public void run() { Long l = 0l; Why does the `reset` command include a delay? Click to update results immediately. VisualVM is also connecting to it so there might be some small artifacts. TopThreads: you don't have to switch back to the Threads tab to get a stack of the thread; simply click on the thread and the stack trace will appear in the See Profiling With VisualVM, Part 1 and Profiling With VisualVM, Part 2 to get more information about profiling and how to set profiling roots and instrumentation filter. share|improve this answer edited Aug 17 '11 at 13:23 RivieraKid 5,48242745 answered May 31 '09 at 18:16 Huxi 3,4232228 add a comment| up vote 0 down vote This is a kind The usual suspect: the garbage collector In normal situations (whatever that me be… ), the CPU usage figure is approximately the same as the figures other tools report. Quote #13 by Amit at November 23rd, 2012 Nice plugin. more info at: Reply Ron 5 years ago for step 3 you can run: echo "obase=16;" | bc | tr [:upper:] [:lower:] Reply Java Coder 5 years ago Excellent Bro The JTop plugin comes with the JDK with source as a demo plugin for JConsole. Please let me know what you think, feedback is always welcome! Can time travel make us rich through trading, and is this a problem? Jconsole Top Threads They don't match the "tid" or "nid" identifiers I'm seeing in my thread dumps (after converting them to hex)? –Tom Nov 8 '13 at 10:41 add a comment| up vote 8 Quote #5 by Peter Doornbosch at October 3rd, 2011 On multi-processor systems, the CPU-usage can be greater than 100%, the percentage is relative to one CPU. Java Thread Cpu Usage I have personally used JProfiler for years with code from Java 1 on and found it to work very well. –Lawrence Dol Jun 1 '09 at 16:55 | show 2 more Very helpful. Of course that kind of tool doesn't do everything, it just help showing what part of the application must be improved, the improvement part is the task of the developer and How to suspend VM on startup when remote debugging your Java app Assigning a reference to itself in Java has no effect, right? - WRONG! Click to run garbage collection. Java High Cpu Usage Linux Attach JVisualVM to your app. Visualvm Profiler Download share|improve this answer answered May 31 '09 at 1:33 Lawrence Dol 39.2k20109159 1 Even with JConsole(bundled with Java 1.5 and up) you can do the same. –adrian.tarau May 31 '09 Movie about a girl who had another different life when she dreamed How to deal with an intern's lack of basic skills? Although this is not always the case - more about that in a minute - it's at least a good indication whether the process is busy or idle. A more interesting tab is the "Monitor" tab : This tab follow the CPU and Memory usages of your applications. Ltd. Jstack Cpu Usage The next graph display the total number of classes loaded in the application and the last one displays the number of threads currently running. The man page for top says: -H : Threads toggle Starts top with the last remembered 'H' state reversed. Save Current View. After a time they disappear. How do you find out which thread? Java High Cpu Usage Windows How to help reduce students' anxiety in an oral exam? To conclude, I have to say that this profiler is really simple but also really powerful to use. share|improve this answer answered May 31 '09 at 4:53 talonx 1,0231822 add a comment| up vote 1 down vote Are you running Java 6 on a multi-core computer? The ‘MyThread-1005‘ thread will not enter sleep part, so it will run in while loop without sleep. And 28938 in hex is 0x710A. Convert Pid To Hex jconsole -pluginpath “c:\topthreads.jar” Thanks, Paparao Paul K March 20th, 2013 at 9:36 pm Thank you!!! If you enable this feature (settings -> show process cpu usage too), the top row of the table shows the CPU usage of the process as a whole. During the instrumentation, the application will be blocked. share|improve this answer answered May 31 '09 at 1:36 jdigital 9,05312143 We just used this approach for locating a Thread loop and it worked fine. weblink It's based on a command-line equivalent from Bruce Chapman which might also be useful. Learn moreor watch a related video or download Applications Manager now. - Ramesh Related Posts:Managing Windows Phone Using Desktop Central -…Windows Phone Support Added to Desktop Central's MDM…MIND INfotech, a CMM Came across this helpful recommendation of JConsole and JVisualVM. PRachi July 12th, 2012 at 11:11 am HI FRom where to download JTop. It's essentially thread dumps with a GUI. I followed this article. It turned out to be quite usefull and from the responses I got, I can tell people find it still usefull today. Profiling applications The Profiler tab of an application enables you to start and stop the profiling session of a local application. Source code is currently not in a state suitable for publishing, sorry about that. On a side note, it would be nice to be able to check more threads to see all their stacktraces at the same time. You can see ‘Show threads on' message in the top console. And now, I think the most interesting tab, is the Profiler one : When you open this tab first, it contains no information at all. It comes with the JDK. it is possible for an idle thread pool to look like it is consuming CPU to a profiler (which it is not using as much as suggested) –Peter Lawrey Jun 11 Also you should pay attention to which thread you are in, usually you will want to look at the whatever is your "main" thread.
http://juicecoms.com/cpu-usage/java-high-cpu-usage-linux.html
CC-MAIN-2017-51
refinedweb
1,030
69.52
I just started to experiment with the Session module and came up with a minimalistic but complete example using the publisher handler. There are two files, login.py asks for a password to log the user in and another one which checks if the user was already logged in. In case he/she isn't the request is redirected to login.py. There are a couple of security issues with this solution of course but the point is only to give a toy model demonstrating how this mechanism could in principle work. The notation assumes a SetHandler apache directive, with AddHandler one needs to refer to the scripts as 'login.py' and 'test.py' not just 'login' or 'test'. Please let me know what the experts think since I wouldn't want to cause more harm than good by posting a silly FAQ entry :) # this is our login page, login.py from mod_python import Session, util def index( req ): session = Session.Session( req ) if not session.is_new( ): return 'You are already logged in.' form = """<html><form enctype="multipart/form-data" method="POST" action="login"> <input type=text<br> <input type='submit' name='go' value='Go'> </form></html> """ try: secret = req.form[ 'secret' ] except KeyError: return form if secret == 'my_dear_password': session.save( ) return 'Password correct, now you are logged in.' else: return form # end of login.py and the other file is: # this is test.py from mod_python import Session, util def index( req ): session = Session.Session( req ) if session.is_new( ): util.redirect( req, 'login' ) return else: return 'You are logged in.' # end of test.py
http://modpython.org/pipermail/mod_python/2006-April/020922.html
CC-MAIN-2017-51
refinedweb
265
68.57
Crunching data and rearranging data in Python is cool, but I really need to visualize it. Nothing fancy, just a bar or line chart. I recently saw a D3 implementation in Python – awesome, but for now I just want to stick to Matplotlib. I grabbed a few books on scientific python and data in python. They seem to love IPython – the web notebook is pretty cool. Another tool that is often mention is Pandas. This is what I want to use – it uses Matplotlib for plotting. The one feature that caught my attention right away was the DataFrame. Think of it as a Excel spreadsheet, then if anyone asks about it, tell them it’s like data.frame() in R. When I learn something, I like to start bare bones, then build it up with extra options and variations. I have put together some very minimal examples of plotting DataFrames and a Series in Pandas. From here you should have a good grasp of how to do more. Plotting a Series requires a Series and a type of chart. Here is my code: from pandas import Series import matplotlib.pyplot as plt b=[2,4,6,8,10] a=Series(b,index=['a','b','c','d','e']) Series.plot(a, kind=’bar’) #change to ‘barh’ for horizontal. Can also declare kind=’line’ plt.show() Plotting a DataFrame is what I need the most in my work. Earlier I compared a DataFrame to an Excel spreadsheet. Here is what a DataFrame looks like: Looking at the DataFrame and the Chart, notice that each row plots as a group labeled by the index and columns. The DataFrame is created by passing a Numpy Array.: a=np.array([[3,6,8,9,6],[2,3,4,5,6],[4,5,6,7,8],[3,6,5,8,6],[5,8,8,6,5]]) df=DataFrame(a, columns=['a','b','c','d','e'], index=[2,4,6,8,10]) To plot the chart, just call plot and pass a type. Here is the complete code: from pandas import DataFrame import matplotlib.pyplot as plt import numpy as np a=np.array([[3,6,8,9,6],[2,3,4,5,6],[4,5,6,7,8],[3,6,5,8,6],[5,8,8,6,5]]) df=DataFrame(a, columns=['a','b','c','d','e'], index=[2,4,6,8,10]) df.plot(kind=’bar’) plt.show() This is how I learned to use Pandas DataFrame and to plot my data. Knowing this, I felt much more comfortable looking at more advanced examples online. 2 Responses to “Pandas and Python…..Oh My.”
https://paulcrickard.wordpress.com/2012/12/21/pandas-and-python-oh-my/
CC-MAIN-2014-15
refinedweb
438
76.72
Using the Xamarin.Android Designer This article is a walkthrough of the Xamarin.Android Designer. It demonstrates how to create a user interface for a small color browser app; this user interface is created entirely in the Designer. Overview Android user interfaces can be created declaratively by using XML files or programmatically by writing code. The Xamarin.Android Designer allows developers to create and modify declarative layouts visually, without requiring hand-editing of XML files. The Designer also provides real-time feedback that lets the developer evaluate UI changes without having to redeploy the application to a device or to an emulator. These Designer features can speed up Android UI development tremendously. This article demonstrates how to use the Xamarin.Android Designer to visually create a user interface. Tip Newer releases of Visual Studio support opening .xml files inside the Android Designer. Both .axml and .xml files are supported in the Android Designer. Walkthrough The objective of this walkthrough is to use the Android Designer to create a user interface for an example color browser app. The color browser app presents a list of colors, their names, and their RGB values. You'll learn how to add widgets to the Design Surface as well as how to lay out these widgets visually. After that, you'll learn how to modify widgets interactively on the Design Surface or by using the Designer's Properties pane. Finally, you'll see how the design looks when the app runs on a device or emulator. Creating a new project The first step is to create a new Xamarin.Android project. Launch Visual Studio, click New Project..., and choose the Visual C# > Android > Android App (Xamarin) template. Name the new app DesignerWalkthrough and click OK. In the New Android App dialog, choose Blank App and click OK: Adding a layout The next step is to create a LinearLayout that will hold the user interface elements. Right-click Resources/layout in the Solution Explorer and select Add > New Item.... In the Add New Item dialog, select Android Layout. Name the file list_item and click Add: The new list_item layout is displayed in the Designer. Notice that two panes are displayed – the Design Surface for the list_item is visible in the left pane while its XML source is shown on the right pane. You can swap the positions of the Design Surface and Source panes by clicking the Swap Panes icon located between the two panes: From the View menu, click Other Windows > Document Outline to open the Document Outline. The Document Outline shows that the layout currently contains a single LinearLayout widget: The next step is to create the user interface for the color browser app within this LinearLayout. Creating the List Item user interface If the Toolbox pane is not showing, click the Toolbox tab on the left. In the Toolbox, scroll down to the Images & Media section and scroll down further until you locate an ImageView: Alternately, you can enter ImageView into the search bar to locate the ImageView: Drag this ImageView onto the Design Surface (this ImageView will be used to display a color swatch in the color browser app): Next, drag a LinearLayout (Vertical) widget from the Toolbox into the Designer. Notice that a blue outline indicates the boundaries of the added LinearLayout. The Document Outline shows that it is a child of LinearLayout, located under imageView1 (ImageView): When you select the ImageView in the Designer, the blue outline moves to surround the ImageView. In addition, the selection moves to imageView1 (ImageView) in the Document Outline: Next, drag a Text (Large) widget from the Toolbox into the newly-added LinearLayout. Notice that the Designer uses green highlights to indicate where the new widget will be inserted: Next, add a Text (Small) widget below the Text (Large) widget: At this point, the Designer surface should resemble the following screenshot: If the two textView widgets are not inside linearLayout1, you can drag them to linearLayout1 in the Document Outline and position them so they appear as shown in the previous screenshot (indented under linearLayout1). Arranging the user interface The next step is to modify the UI to display the ImageView on the left, with the two TextView widgets stacked to the right of the ImageView. Select the ImageView. In the Properties window, enter width in the search box and locate Layout Width. Change the Layout Width setting to wrap_content: Another way to change the Width setting is to click the triangle on the right-hand side of the widget to toggle its width setting to wrap_content: Clicking the triangle again returns the Width setting to match_parent. Next, go to the Document Outline pane and select the root LinearLayout: With the root LinearLayout selected, return to the Properties pane, enter orientation into the search box and locate the Orientation setting. Change Orientation to horizontal: At this point, the Designer surface should resemble the following screenshot. Notice that the TextView widgets have been moved to the right of the ImageView: Modifying the spacing The next step is to modify padding and margin settings in the UI to provide more space between the widgets. Select the ImageView on the Design surface. In the Properties pane, enter min in the search box. Enter 70dp for Min Height and 50dp for Min Width: In the Properties pane, enter padding in the search box and enter 10dp for Padding. These minHeight, minWidth and padding settings add padding around all sides of the ImageView and elongate it vertically. Notice that the layout XML changes as you enter these values: The bottom, left, right, and top padding settings can be set independently by entering values into the Padding Bottom, Padding Left, Padding Right, and Padding Top fields, respectively. For example, set the Padding Left field to 5dp and the Padding Bottom, Padding Right, and Padding Top fields to 10dp: Next, adjust the position of the LinearLayout widget that contains the two TextView widgets. In the Document Outline, select linearLayout1. In the Properties window, enter margin in the search box. Set Layout Margin Bottom, Layout Margin Left, and Layout Margin Top to 5dp. Set Layout Margin Right to 0dp: Removing the default image Because the ImageView is being used to display colors (rather than images), the next step is to remove the default image source added by the template. Select the ImageViewon the Designer Surface. In Properties, enter src in the search box. Click the small square to the right of the Src property setting and select Reset: This removes android:src="@android:drawable/ic_menu_gallery" from the source XML for that ImageView. Adding a ListView container Now that the list_item layout is defined, the next step is to add a ListView to the Main layout. This ListView will contain a list of list_item. In the Solution Explorer, open Resources/layout/activity_main.axml. In the ToolBox, locate the ListView widget and drag it onto the Design Surface. The ListView in the Designer will be blank except for blue lines that outline its border when it is selected. You can view the Document Outline to verify that the ListView was added correctly: By default, the ListView is given an Id value of @+id/listView1. While listView1 is still selected in the Document Outline, open the Properties pane, click Arrange by, and select Category. Open Main, locate the Id property, and change its value to @+id/myListView: At this point, the user interface is ready to use. Running the application Open MainActivity.cs and replace its code with the following: using Android.App; using Android.Widget; using Android.Views; using Android.OS; using Android.Support.V7.App; using System.Collections.Generic; namespace DesignerWalkthrough { [Activity(Label = "@string/app_name", Theme = "@style/AppTheme", MainLauncher = true)] public class MainActivity : AppCompatActivity { List<ColorItem> colorItems = new List<ColorItem>(); ListView listView; protected override void OnCreate(Bundle savedInstanceState) { base.OnCreate(savedInstanceState); // Set our view from the "main" layout resource SetContentView(Resource.Layout.activity_main); listView = FindViewById<ListView>(Resource.Id.myListView); colorItems.Add(new ColorItem() { Color = Android.Graphics.Color.DarkRed, ColorName = "Dark Red", Code = "8B0000" }); colorItems.Add(new ColorItem() { Color = Android.Graphics.Color.SlateBlue, ColorName = "Slate Blue", Code = "6A5ACD" }); colorItems.Add(new ColorItem() { Color = Android.Graphics.Color.ForestGreen, ColorName = "Forest Green", Code = "228B22" }); listView.Adapter = new ColorAdapter(this, colorItems); } } public class ColorAdapter : BaseAdapter<ColorItem> { List<ColorItem> items; Activity context; public ColorAdapter(Activity context, List<ColorItem> items) : base() { this.context = context; this.items = items; } public override long GetItemId(int position) { return position; } public override.list_item, null); view.FindViewById<TextView>(Resource.Id.textView1).Text = item.ColorName; view.FindViewById<TextView>(Resource.Id.textView2).Text = item.Code; view.FindViewById<ImageView>(Resource.Id.imageView1).SetBackgroundColor(item.Color); return view; } } public class ColorItem { public string ColorName { get; set; } public string Code { get; set; } public Android.Graphics.Color Color { get; set; } } } This code uses a custom ListView adapter to load color information and to display this data in the UI that was just created. To keep this example short, the color information is hard-coded in a list, but the adapter could be modified to extract color information from a data source or to calculate it on the fly. For more information about ListView adapters, see ListView. Build and run the application. The following screenshot is an example of how the app appears when running on a device: Summary This article walked through the process of using the Xamarin.Android Designer in Visual Studio to create a user interface for a basic app. It demonstrated how to create the interface for a single item in a list, and it illustrated how to add widgets and lay them out visually. It also explained how to assign resources and then set various properties on those widgets. Feedback
https://docs.microsoft.com/en-us/xamarin/android/user-interface/android-designer/designer-walkthrough
CC-MAIN-2020-10
refinedweb
1,612
54.63
This document describes the schema available from the SKOS namespace. The document is still under development and content may be subject to change. The Simple Knowledge Organization System (SKOS) is a common data model for sharing and linking knowledge organization systems via the Semantic Web.This document provides a brief description of the SKOS RDF Schema. For detailed information about the SKOS Recommendation, please consult the SKOS Reference [SKOS-REFERENCE] or the SKOS Primer [SKOS-PRIMER]. The SKOS data model expressed as RDF triples (in so far as possible) is given in the following resource: This RDF schema defines an OWL ont $
http://www.w3.org/TR/2008/WD-skos-reference-20080829/skos.html
CC-MAIN-2015-18
refinedweb
102
54.73
Difference between revisions of "Janitorial tasks" Revision as of 09:40, 6 January 2012 recognized. The exception is <2geom/forward.h>, due to heavy use of templates in the 2Geom library. (BPF) The C++ FAQ Sheet explains how to code forward declarations (classes that both need to know about each other); To fully understand why, you should study the Pimpl idiom. Source formatting Header - Source files should use four spaces as indentation and no tabs. Trailing whitespace should be removed. - The comment at the top of each file should have the following format. The author information is in a regular multiline comment so that it is omitted in the generated documentation. Author emails can be obfuscated, but should be real addresses. - Using a Doxygen comments */ Again, note that the comment does not start with "/**",. but only with "/*" @file Command - Modern C++ code should avoid global and static variables, functions, enums, etc. Legacy code migrated from C may still contain these, so will require the use of a @file command. - Note that the @file command will only be required for files that have non-class non-namespaced globals or statics. As our codebase moves to the more modern C++ practices, use of these will be reduced and removed. - Legacy files that contain a mix of functions probably warrant use of a @file command. - If feasible, moving statics to anonymous namespaces instead is preferable to adding a @file command. - Any documented entities in namespaces, of local classes, etc., will be processed even if a @file command is not present in the source file. An example of a legacy source file using the @file command: /** * @file * Logarithmic time traveling salesman solver. */ /* * Authors: * J. God Hacker <ihatepizza@gurus.org> * Ellen Epic <epicwin at email dot com> * * Copyright (C) 2006-2008 Authors * Released under GNU GPL, read the file 'COPYING' for more information */ Note the following: - The opening doc comment is merely "/**" on a line by itself. Keeping the rest to subsequent lines aids legibility and revision tracking. - The @file command is on a line by itself, with nothing following. This is required to allow Doxygen to automatically extract the current filename. - The short description of the file contents (that follows starting on the line after @file) ends with a period. All short (aka "brief") descriptions should end with a period. - The end of the doc comment and the start of the normal comment (with authors) are on separate lines. Avoid collapsing them to "*//*" - Statement Style - and have an #ifdef guard.: - When required for legacy needs, @file comment with a short description of the file's contents - Include guard (headers only) - System includes - Local includes - Forward declarations - Class declarations - Function declarations - Global variable declarations (note: global variables should be avoided) - End of include guard (headers only) - Emacs local variables block - Vim modeline Documentation Make Comments Meaningful - Some documentation is useless, for example "constructor" or "destructor". Such comments mark the entity as documented, when in fact it's not. Remove them. @brief Command - The @brief command comes from the more complex documentation format implemented by Trolltech before Doxygen was created. When the @brief command is skipped, Doxygen will use the first sentence (ending with a dot) as the brief description. An alternative is to put the description in a single-line comment. These two techniques can be used to reduce the number of Doxygen commands. In the example below, all three functions will have the same documentation. The first case depends on the variable JAVADOC_AUTOBRIEF being set to true, which is a main setting for Inkscape documenation: /** * Something useful. * This function does something very useful. * Here is its more detailed, longer description. */ void useful_function_two(); /// Something useful /** * This function does something very useful. * Here is its more detailed, longer description. */ void useful_function_one(); /** * @brief Something useful. * This function does something very useful. * Here is its more detailed, longer description. */ void useful_function_three(); The use of @brief in Inkscape code comments is discouraged as redundant and overly verbose.. - dynamic_cast for downcasting to derived class type. Note that this is not needed to upcast to a parent type. - reinterpret_cast if the conversion does not compile with static_cast, for example pointer to integer. Note that reinterpret_cast<...>(...) should be the cast of last resort..
https://wiki.inkscape.org/wiki/index.php?title=Janitorial_tasks&diff=prev&oldid=76436
CC-MAIN-2020-05
refinedweb
701
57.16
Not too long ago, Llewellyn Falco posted Using ApprovalTests in .Net 19 Email, where he describes a really easy way to test email messages using ApprovalTests. The video describes a testing seam that separates message creation from message sending, and this makes testing email straightforward. If you are currently working with .NET source, then you really should follow the simple instructions in that video and stop reading this post, it’s not for you. This post is for you if: - You don’t control the source of the email you want to test. - You control the source, but it’s not .NET. - You control the source in theory, but you can’t change it (eg. boss says no) Context In my case, I’m moving a little reporting script from Perl to .NET. Lets call the script “report.pl” for the sake of discussion. We want to move from Perl to .NET using TDD and without any noticeable changes to the users (which happen to be the Executive Committee where I work). I don’t want to create a system that works more or less the same, I want one that works exactly the same. I need to lock down the current system, and use it as the gold standard for the new system. Here’s what the current script looks like with the responsibilities color-coded: The blue blocks are responsibilities that are implementation specific, when the script wakes up the first thing it does is read its config and validate its environment. Its not likely that these tasks will transfer to a new system. The script creates an Excel Workbook, this responsibility is identified by green blocks. The script reads its data from a database, this responsibility is identified by red blocks. The script manipulates the data before writing it to the Workbook, these blocks are purple. The script sends the Workbook to its recipients over email, this block is yellow. This is the first block I’ll work on, since it’s the only one that isn’t interwoven with other responsibilities. Strategy 1. Identify a responsibility 2. Let’s assume that this responsibility was encapsulated in a subroutine. Copy that subroutine to a new Perl script “sendReport.pl”. If the responsibility was not encapsulated in a subroutine, make sure to wrap the lines of code you copy with a sub{} in sendReport.pl. 3. Use Perl’s “require” mechanism to include “SendReport.pl” in “Report.pl” and remove the local definition of the “sendReport” subroutine. The script will call the imported definition instead. 4. Create a third Perl script “SendReportRunner.pl” which is just a thin shell around the extracted responsibility, which will let me execute the responsibility with any parameters I like. 5. Create a unit test in C# that uses a Process object to invoke SendReportRunner.pl. Notice that Report.pl is no longer in the picture. 6. Capture the output in an ApprovalTest. Because SendReport.pl actually wants to send email over SMTP, this is where the plan starts to go off the rails, but we can work through it. 7. Build C# implementation (Report.dll) that passes same ApprovalTest. Once we have a C# implementation, we can create a seam that separates the message creation from the message sending, and use EmailApprovals, just like Llewellyn described, but getting past step 6 will be tricky. 8. Now we need to get Report.dll working with Report.pl. I’ll create C# shell that invokes Report.dll from the command line. 9. Replace sendReport() call in Report.pl with a system() call to to ReportRunner.exe 10. Repeat until Report.pl is just a bunch of blue blocks containing system() calls to C# code runners. 11. Replace Report.pl with a C# executable, PowerShell script or whatever. Moving the responsibility down to Step 4 was relatively easy. I spent some tedious hours tracing variables to determine their scope and effect. Then I spent half a day wresting with the COM object the Perl script uses to send the mail. At the end of the day I could use SendReportRunner.pl to send emails and catch them with smtp4dev. But to get past Step 6 I needed to answer the question: How do I get those messages into a .NET unit test so I can capture and approve them? Catching Email Smtp4dev is a nice little application that fills a similar niche to CassiniDev a webserver I wrote about previously. Smtp4dev sits in your system tray and listens on the SMTP port for incoming mail. When it gets a message, it logs the message arrival in it’s window and you you can double click the message to see it in your default email program: Since smtp4dev lives on CodePlex, I figured there was a good chance that it was written in .NET and sure enough it was written in C#. Thinking back to CassiniDev, I wondered if there was a way I could host smtp4dev in my unit test, catch the messages from the Perl process, and then hand them over to ApprovalTests. I grabbed the source for the project and found an example named “SimpleServer’’ that looked like it could be used to create a test fixture similar to the CassiniDev fixture I used when testing MVC views. I created an empty class library and a test project to go with it. The test project will need ApprovalTests and a reference to Rnwood.SmtpServer, which is the server that powers smtp4dev. The server wasn’t on NuGet yet, so I put it there and used NuGet to add both references. The pattern for creating the test fixture was nearly the same as creating a fixture for CassiniDev: using Microsoft.VisualStudio.TestTools.UnitTesting; using Rnwood.SmtpServer; namespace Report.Tests { [TestClass] public class SmtpFixture : DefaultServer { public SmtpFixture(Ports port) : base(port) { } [TestInitialize] public void StartServer() { this.Start(); } [TestCleanup] public void StopServer() { this.Stop(); } } } To implement a test, I’ll extend this fixture. In the broad strokes, we want something like this: public SendReportTest() : base(Ports.SMTP) { } [TestMethod] public void SendReportOverEmail() { try { this.MessageReceived += CatchMessage; GenerateMessage(); ApprovalTests.Email.EmailApprovals.Verify("??"); } finally { this.MessageReceived -= CatchMessage; } } I specify the default SMTP port in the constructor, I could have used “Ports.AssignAutomatically” and the server would pick an empty port. That’s nice functionality, but the Perl script wants to use the default port. I’ve declared some methods but not implemented them, and I’m still not sure what I’m going to give to ApprovalTests. When we get a MessageReceivedEvent, it will come with a MessageEventArgs and we need to figure out if we can somehow get a MailMessage from that, which is what EmailApprovals is expecting from us. CatchMessage needs to do that for us. We also need to generate a message. I’ll do that first, since once I can generate and catch messages I’ll be able to look at a live instance of MessageEventArgs and see what its guts look like. Our goal is for the message to come from Perl, but for the moment, we’ll just use an SmtpClient to stand in for the script. Notice that we pass the fixture’s port number to the SmtpClient, if we were using a random port, this would ensure that we actually send it to the right place. private void GenerateMessage() { using (var client = new SmtpClient("localhost", this.PortNumber)) { using (var message = new MailMessage( "noreply@localhost", "jim@localhost", "Hello World", "Well, you caught me.")) { client.Send(message); } } } Implementing CatchMessage gave me some pause. If I use a lambda, I can’t easily unsubscribe from the event. Maybe that doesn’t matter in the context of a test, but it’s a bad idea to leave events attached, and I don’t want to be in the habit. I could unsubscribe safely I had a regular method, but then I need some plumbing to get the data back to the test method. I thought about it for a minute or two and decided to create a class to handle the event. Later on this turned out to be a pretty good decision, because I was able to substitute some special logic to handle the Perl message in the catcher class without obscuring the test intention. The basic MessageCatcher just needs to handle the event and store the message data. Then we can create one of these in our test and use it there. public class MessageCatcher { public IMessage Message { get; private set; } public void CatchMessage(object sender, MessageEventArgs e) { this.Message = e.Message; } } [TestMethod] public void SendReportOverEmail() { var catcher = new MessageCatcher(); try { this.MessageReceived += catcher.CatchMessage; GenerateMessage(); ApprovalTests.Email.EmailApprovals.Verify(catcher.Message); } finally { this.MessageReceived -= catcher.CatchMessage; } } But it turns out that the IMessage interface is not what we want, because it’s not what EmailApprovals wants, and its not convertible into a MailMessage. At the moment it doesn’t look like we can use EmailApprovals, but that doesn’t mean we can’t use ApprovalTests. The SimpleServer example code shows how to dump the IMessage to a eml file: // If you wanted to write the message out to a file, then could do this... File.WriteAllBytes("myfile.eml", e.Message.GetData()); It turns out that *.eml is just a fancy name for “text file”. I don’t want to dump it to the file system if I can avoid it. Since GetData() returns a Stream, I should be able to read it directly. public class MessageCatcher { public string Message { get; private set; } public void CatchMessage(object sender, MessageEventArgs e) { using (var reader = new StreamReader(e.Message.GetData())) { this.Message = reader.ReadToEnd(); } } } Then I can update my test to be an ordinary Approval instead of an EmailApproval. Since I think I’m about ready to run this, I add a FileLauncherReporter. [TestMethod] [UseReporter(typeof(FileLauncherReporter))] public void SendReportOverEmail() { var catcher = new MessageCatcher(); try { this.MessageReceived += catcher.CatchMessage; GenerateMessage(); ApprovalTests.Approvals.Verify(catcher.Message); } finally { this.MessageReceived -= catcher.CatchMessage; } } The test run completes and notepad launches: This is both really cool, and kind of a bummer. Its really cool because I can (in theory) catch the Perl script’s messages and use them as a baseline for developing my C# implementation. Although, before moving on, I see one thing I need to take care of, and that is the timestamp in the middle of the message. A little regex should take care of that: public void CatchMessage(object sender, MessageEventArgs e) { using (var reader = new StreamReader(e.Message.GetData())) { this.Message = Regex.Replace( reader.ReadToEnd(), @"Date:\s[\d\s\w,:-]+\d+\r\n", string.Empty); } } The bigger disappointment is that notepad launched at all. When Llewellyn used a FileLauncherReporter in his video, Thunderbird launched. That was cool. I’m jealous. Luckily ApprovalTests is open source so I can go see how de did that. Turns out to be pretty simple, we just need to make sure that when ApprovalTests saves the received file, it uses the .eml extension. To do this, I make a small change to the way I call Verify(). [TestMethod] [UseReporter(typeof(FileLauncherReporter))] public void SendReportOverEmail() { var catcher = new MessageCatcher(); try { this.MessageReceived += catcher.CatchMessage; GenerateMessage(); Approvals.Verify(new ApprovalTextWriter(catcher.Message, "eml")); } finally { this.MessageReceived -= catcher.CatchMessage; } } And now the file launches in my default mail client, which happens to be Outlook. Catching Perl Now that I understand how to catch mail using Rnwood.Smtpserver, and my childish need to see my message in an email client is satisfied, I can get this working with Perl. I’m going to create a PerlMessageGenerator class for that. public class PerlMessageGenerator : IMessageGenerator { private const string MissingPerlMessage = "You must have a 32-bit perl at [{0}]. Please visit to acquire Perl."; private const string PerlPath = @"C:\Perl\bin\perl.exe"; public PerlMessageGenerator() { if (!File.Exists(PerlPath)) { throw new InvalidOperationException(MissingPerlMessage.FormatWith(PerlPath)); } } public void GenerateMessage(string host, string to, string attachementPath) { var binPath = Path.GetDirectoryName(Assembly.GetExecutingAssembly().Location); var arguments = "sendReportRunner.pl {0} {1} {2}".FormatWith(host, to, attachementPath); var pi = new ProcessStartInfo(PerlPath, arguments) { UseShellExecute = false, WorkingDirectory = binPath, CreateNoWindow = true }; using (var p = new Process { StartInfo = pi }) { p.Start(); p.WaitForExit(); } } } Now I just need to get my scripts into place by adding them as linked files to my test project, with “Copy To Output Directory” set to “Copy Always.” This actually works, the test catches the Perl message, but as I mentioned the Perl output needs some additional scrubbing over and above the simple message. SendReport.pl adds another timestamp in the subject line, a Message-ID field that varies on each run, and because it has an attachment, there are MIME boundaries that need to be ditched. I’ll spare you the gory details. The important part is that we caught the message. After creating a separate PerlMessageCatcher to handle all the special cases, my test passes consistently in Visual Studio. Just for kicks, and because this will eventually be production code, I turn on NCrunch. And I’m very happy to see that the test passes under NCrunch as well. Here’s the final test class: [TestClass] public class SendReportTest : SmtpFixture { public SendReportTest() : base(Ports.SMTP) { } [TestMethod] public void SendReportOverEmail() { var catcher = new PerlMessageCatcher(); try { this.MessageReceived += catcher.CatchMessage; new PerlMessageGenerator().GenerateMessage("localhost", "jim@contoso.com", "sendreport.pl"); Approvals.Verify(new ApprovalTextWriter(catcher.Message, "eml")); } finally { this.MessageReceived -= catcher.CatchMessage; } } } That’s probably enough for one day. I’ve made it past Step 6 in my porting list. PerlMessageCatcher is pretty twisted code and could use some refactoring. On the other hand, once I make it to Step 9, the (as yet non-existent) .NET implementation will be the canonical implementation, and I can simply use EmailApprovals directly. The need for the PerlMessageCatcher will go away, so perhaps getting to Step 9 is a more worthy goal than refactoring the catcher. We’ll see.
https://ihadthisideaonce.com/tag/smtp/
CC-MAIN-2019-09
refinedweb
2,315
66.13
send, sendto, sendmsg - send a message from a socket #include <sys/types.h> #include <sys/socket.h> int sendmsg(int s, const struct msghdr *msg, int flags); sendmsg(2) is used to transmit a message to another socket. sendmsg(2) may be used at any time, the socket does not have to be connected.msg(2). Locally detected errors are indicated by a return value of -1. When the message does not fit into the send buffer of the socket, sendmsg(2). occur in Linux, packets are just silently dropped when a device queue overflows.) 4.4BSD, SVr4, POSIX 1003.1g draft (these function calls appeared in 4.2BSD). MSG_CONFIRM). fcntl(2), recv(2), select(2), getsockopt(2), sendfile(2), socket(2), write(2), socket(7), ip(7), tcp(7), udp(7), send(2), sendto(2) 13 pages link to sendmsg(2): lib/main.php:944: Notice: PageInfo: Cannot find action page
http://wiki.wlug.org.nz/sendmsg(2)?action=PageInfo
CC-MAIN-2015-22
refinedweb
152
60.21
/* * Copyright (c) 1996TreeScanner.h */ #ifndef _BTREESCANNER_H_ #define _BTREESCANNER_H_ #include "BTreePrivate.h" // btree node scanner buffer size. Joe Sokol suggests 128K as a max (2002 WWDC) enum { kCatScanBufferSize = (128 * 1024) }; /* BTScanState - This structure is used to keep track of the current state of a BTree scan. It contains both the dynamic state information (like the current node number and record number) and information that is static for the duration of a scan (such as buffer pointers). NOTE: recordNum may equal or exceed the number of records in the node number nodeNum. If so, then the next attempt to get a record will move to a new node number. */ struct BTScanState { // The following fields are set up once at initialization time. // They are not changed during a scan. u_int32_t bufferSize; void * bufferPtr; BTreeControlBlock * btcb; // The following fields are the dynamic state of the current scan. u_int32_t nodeNum; // zero is first node u_int32_t recordNum; // zero is first record BTNodeDescriptor * currentNodePtr; // points to current node within buffer int32_t nodesLeftInBuffer; // number of valid nodes still in the buffer int64_t recordsFound; // number of leaf records seen so far }; typedef struct BTScanState BTScanState; /* *********************** PROTOTYPES *********************** */ int BTScanInitialize( const SFCB * btreeFile, BTScanState * scanState ); int BTScanNextRecord( BTScanState * scanState, void * * key, void * * data, u_int32_t * dataSize ); int BTScanTerminate( BTScanState * scanState ); #endif /* !_BTREESCANNER_H_ */
http://opensource.apple.com/source/diskdev_cmds/diskdev_cmds-491.3/fsck_hfs.tproj/dfalib/BTreeScanner.h
CC-MAIN-2014-15
refinedweb
210
60.75
play-1.3.0 leaks database connections when configured for multiple databases Reported by Tobin Stelling | April 6th, 2015 @ 12:47 PM | in 1.3.1 (closed) Framework version: 1.3.0 Platform you're using: linux, java-1.7.0_75, postgresql 9.2 We recently upgraded our apps to play-1.3.0 and noticed a few instances where Play leaks database connections. You can observe the leak with C3P0's management bean numBusyConnections. The count can increment to your maxPool size. Once it has reached maxPool size, Play will return 500 error pages (because c3p0 will time out waiting to checkout a connection). Open jvisualvm and look at the numBusyConnections mbean for your "other_database" connection pool. Invoke Application.index() a few times. Notice how each time you invoke it, your numBusyConnections increments by one and never goes back down.Open jvisualvm and look at the numBusyConnections mbean for your "other_database" connection pool. Invoke Application.index() a few times. Notice how each time you invoke it, your numBusyConnections increments by one and never goes back down. public class Application extends Controller { public static void index() { JPA.closeTx(JPA.DEFAULT); renderText("Connection leaked!"); } } Let's walk through what happens when JPA.closeTx(JPA.DEFAULT) is called. The transaction is open, so it proceeds and retrieves the correct EntityManager from our JPA.currentEntityManager threadlocal. But then it calls DB.getConnection() instead of DB.getConnection(name). Because we are working on JPA.DEFAULT, this is fine (DB.getConnection() calls DB.getConnection(JPA.DEFAULT)). However, it could be problematic if we call JPA.closeTx("other_database"). However this is not the exact source of the leak. Next, the method tests if the transaction for our EntityManager is active and behaves appropriately. The problem is in the finally{} block: it calls JPA.clearContext(). JPA.clearContext() nukes your JPA.currentEntityManager threadlocal completely. In this case this is not what we want to have happen. We have finished the transaction for our JPA.DEFAULT EntityManager and closed the EntityManager itself, but we have not done this for our "other_database" EntityManager which is still open and still has an active transaction. Because we nuke all entries from JPA.currentEntityManager, our EntityManager for "other_database" goes into limbo and is never closed. As far as I can tell its transaction remains forever open. Furthermore, we probably do not want to close out the EntityManager for "other_database," because we called JPA.closeTx(JPA.DEFAULT), not JPA.closeTx("other_database"). JPA.startTx() has a similar problem: it calls createContext(), which subsequently calls clearContext(). The next issue is with Jobs. Again, assume we have two databases, JPA.DEFAULT and "other_database." Invoking Application.index() for this controller leaks a connection: public class Application extends Controller { public static void index() { SomeJob job = new SomeJob(); Promise p = job.now(); await(p); renderText("Connection leaked!"); } @NoTransaction public static class SomeJob extends Job { @Override public void doJob() throws Exception { Thread.sleep(10000); Logger.info("Job is done"); } } } This leaks a connection because JPAPlugin.afterInvocation() calls JPAPlugin.closeTx(). JPAPlugin.closeTx() is a deprecated method that closes only the JPA.DEFAULT EntityManager. Instead, JPAPlugin.afterInvocation() should loop through all keys in JPA.currentEntityManager and call JPA.closeTx() on each key. JPAPlugin.afterInvocation is called after the await(p) because of javaflow Continuation magic. Alex April 7th, 2015 @ 08:59 AM - State changed from new to inprogress - Tag set to c3p0, jpa - Assigned user set to Alex - Milestone set to 1.3.1 Play Duck April 16th, 2015 @ 10:02 AM (from [a3279269b050cca289101a3f9f06ef4503925cf2]) * [#1933] fix(multidb): fix some methods to call the correct db ** fix some leaks database connections when configured for multiple databases * refactor(JPAModelLoader): move JPAModelLoader class in its own file... Play Duck April 16th, 2015 @ 10:02 AM (from [4f6c3bc80c2678d333bc71da534dde3c3ebacc9c]) Merge pull request #858 from xael-fry/lighthouse-1933-patch [#1933] fix(multidb): fix some methods to call the correct db... Alex April 16th, 2015 @ 10:05 AM - State changed from inprogress to resolved Pulll request #858 merged.
https://play.lighthouseapp.com/projects/57987/tickets/1933-play-130-leaks-database-connections-when-configured-for-multiple-databases
CC-MAIN-2019-30
refinedweb
659
52.76
Project Planner PE 4.5, lets you track and manage your projects in the way you had always wanted. It... This program helps to predict and evaluate the cost of a project. RationalPlan Project Viewer is a free project management software viewer. capture/rank new project ideas, approve & initiate. Bible software for studying and comparing up to three versions of the Scriptures. Seavus Project Viewer enables users to open files from Microsoft Project. import wizard for Microsoft Project files. A time tracking, timesheet, and project management software product. Full version of this award-winning subterranean arcade game. Age of Castles is a game that anyone who likes medieval age will love. Is a tactical first-person shooter developed by Innerloop Studios. Can be used to search the online IGI and update your records. Full Tilt Poker Download to join this ambitious poker room, get $600 bonus! SpeedItup Free is a new breakthrough application that uses 2012 technology. It is a powerful, easy-to-use and absolutely free download accelerator.
http://ptf.com/free/free+full+version+project+igi/
crawl-003
refinedweb
169
69.58
+ Post New Thread Hi, It seems where is an issue when you add a chart within a Windows, Here a snippet of my Test : //All needed imported... import.... GXT 2.0 M1 GWT 1.6 Hosted Mode/IE8/FF3 After any column in a Grid is resized, the Auto-expand capabilities of the grid are lost. Shrinking or... this is the expansion of the topic after synchronizing with svn, validation icon vanished... Window does appear centered, but kicks out an error. Error is: Uncaught Exception: com.google.gwt.core.client.JavaScriptException:... Hi, I have a problem with vanishing validation icon from formPanels. At first everything is ok. When I enter correct data inside the form then the... see When marking a FileUploadField readonly, the text field it uses for the path is marked readonly, but the button can still be clicked. Create the... While i've discovered the Explorer is using the new GXT release, I looked for new examples. Never had seen the widgetrenderergrid before, so i... The constructor of BaseListLoadResult and BasePagingLoadResult should be visible by sub classes. For example: BaseListLoadResult() { } ... Hi folks! This problem (java.lang.IllegalStateException: Should only call onDetach when the widget is attache) has been discussed on Ext-Gwt forum... Hello everybody, Java: 6 GWT: 1.6.4 GXT: 2.0m1 my simply code: public class GXT_TEST implements EntryPoint { Trying to set null in a property on a record throws a NullPointerException. There is no check for null values and therefore the equals method... Compare this two for next test case : &&. Case: -Browser FF... It seems that there is a bug in ComboBox.setForceSelection(true) in the last trunk version when setting this to false, nothing happen, but when... Hi guys, I just stepped into this bug where when I call isValid() method on a MultiField instance I get StackOverflowError. There's no need for... I have a LayoutContainer with a TableLayout. Whenever I add content specifying the width of a cell using TableData (e.g. container.add(someWidget,... in click into the htmlEditor the cursor don't appears.. if you type anything, appers.. in testing with google chrome 1.0.154.65, win vista 64bits.. in and in the items list dont appers Sencha is used by over two million developers. Join the community, wherever you’d like that community to be or Join Us
http://www.sencha.com/forum/forumdisplay.php?46-Ext-GWT-Bugs-(2.x)/page48&order=desc
CC-MAIN-2014-42
refinedweb
392
60.61
Replacing number with another Incrementing # Hello! I’m having difficulties in finding regexes again… Sorry I was a bit confused… I would like to search all these numbers in bold below before the # ‘hash’ sign… 100# A sweet tasting tropical fruit made famous by its use in slapstick comedy and practical jokes. 101# Clustered berries with smooth skin that can be fermented to make wine. 102# An orange root that is supposedly good for your vision. Despite the Beta Carotene, kids don’t care much for it. 103# A tuber that can be fried, baked, boiled mashed, even eaten. I was told that regexe for searching numbers before the # is (?=.+#) but instead of adding numbers at the beginning of it I would like to replace it with a new incrementing numbers… So instead of: 100# 101# 102# 103# It would be like: 40001# 40002# 40003# @raizkie19 said in Replacing number with another Incrementing #: I was told that regexe for searching numbers before the # is (?=.+#) Hi @raizkie19 It’s close but no joy. The quoted S/R regex wouldn’t just insert 40 before the given number, as needed. Instead you will get 401400401#, 401400402#, and so on. But by inserting an anchor ^before the quoted expression the correct replacement is done Find: ^(?=.+#) Replace: 40 However you want to renumber those hashes. Before I can provide you with a solution, I would need to know how many replacements will be made. Best Regards @astrosofista Hi!.. Sorry for the late reply… Oh… thank you for explaining… But the number line will be long… From 40001 to 70000… :( - Ekopalypse last edited by What is Regex: a language for describing a formula to find patterns in a text What is Regex not: A calculator, a painting application, … or a replacement for a programming language. Whenever it is necessary to calculate something based on its results, then a programming language should be used. However, this does not mean that it is not possible with a regex, but creating a regex in such scenarios is usually complex and only works for this particular case. Inspired by Eko’s posted script HERE I have created the following Pythonscript (see below) to help accomplish the goal of this current thread. Here’s how it would be applied in this case (2-phase solution): Part 1 of 2: Run the script (with your data file as the active tab in Notepad++). Specify ^\d{3}(?=#)in the input box that appears. Press OK. At this point all of the items to be incrementally replaced should be selected. Now for part 2 of 2: With the data still multiselected, press Alt+cto invoke the Column editor. Tick Number to insertand specify your starting number in the `` box. Specify your Initial number(e.g. 40001) and Increase bynumber (e.g. 1). Press OK and your text should be transformed as desired. The script: from Npp import editor, notepad class T19343(object): def __init__(self): if editor.selectionIsRectangle(): return if editor.getSelections() == 1: if editor.getSelectionEmpty(): scope_starting_pos = 0 scope_ending_pos = editor.getTextLength() else: scope_starting_pos = editor.getSelectionStart() scope_ending_pos = editor.getSelectionEnd() while True: title = 'SELECT ALL matches in ' title += 'ENTIRE FILE' if editor.getSelectionEmpty() else 'SELECTED TEXT' user_regex_input = notepad.prompt('REGEX to search for: (will result in multiple selections)\r\n(hint: if need to, start regex with \Q to do a "normal" search)', title, '') if user_regex_input == None: return # user Cancel try: # test user_regex_input for validity editor.research(user_regex_input, lambda _: None) except RuntimeError as r: notepad.messageBox('Error in search regex:\r\n{0}\r\n{1}\r\n\r\nYou will be prompted to try again.'.format(user_regex_input, str(r)), '') else: break match_list = [] editor.research(user_regex_input, lambda m: match_list.append(m.span(0)), 0, scope_starting_pos, scope_ending_pos) #print(match_list) if len(match_list) >= 1: (first_match_anchor_pos, first_match_caret_pos) = match_list[0] # set the FIRST selection and bring it into user's view: editor.setSelection(first_match_caret_pos, first_match_anchor_pos) editor.scrollRange(first_match_anchor_pos, first_match_caret_pos) # remember top line of user's view, for later restore first_line = editor.getFirstVisibleLine() if len(match_list) >= 2: editor.setMultipleSelection(True) # in case not enabled in the Preferences # add in all the remaining selections: for (match_anchor_pos, match_caret_pos) in match_list[1 : ]: editor.addSelection(match_caret_pos, match_anchor_pos) editor.setFirstVisibleLine(first_line) elif editor.getSelections() > 1: delimiter = notepad.prompt('Delimit individual copied selections with:\r\n(leave empty to delimit with line-endings)', 'Copy selections to clipboard', '') if delimiter != None: if len(delimiter) == 0: delimiter = '\r\n' accum_list = [] for sel_nbr in range(editor.getSelections()): accum_list.append(editor.getTextRange(editor.getSelectionNStart(sel_nbr), editor.getSelectionNEnd(sel_nbr))) editor.copyText(delimiter.join(accum_list)) #editor.setEmptySelection(editor.getCurrentPos()) notepad.messageBox('Results are now in clipboard', '') if __name__ == '__main__': T19343() I should mention two more things about the script: The scope of the search can be limited by making (one) selection of a stream of text before running the script If the script is run a second time, i.e., while the multiselections from the first run are still active, it provides a mechanism to copy all of the matches (selections) to the clipboard. Thank you for providing the detailed explanation… @raizkie19 said in Replacing number with another Incrementing #: But the number line will be long… From 40001 to 70000… :( Hi @raizkie19, @Ekopalypse, @Alan-Kilborn and All As @Ekopalypse said, regex can’t do math, but there are workarounds to simulate some common operations. So, as an alternative to @Alan-Kilborn’s script and in case you are not allowed to install the Python plugin, let me suggest you the following method, which is based on previous posts. Please try the following: Open a new tab in Notepad++( Ctrl + N) Type in a spaceand then press the Enter key Open the Replacedialog ( Ctrl + H) Set the following fields as follows (Copy/Paste): Find what: \R Replace with: \r\n \r\n \r\n \r\n \r\n \r\n \r\n \r\n Only select the Wrap Aroundoption and the Regular expression searchmode Click 5 timesthe Replace Allbutton in order to get 32768 (= 2^15) lines with a blank space Close the Replacedialog Ctrl + Hometo go to very beginning of the file. Now, open the Column editor( Edit-> Column Editor…, or Alt + C) Check the Number to Insertoption Type 100in the Initial numberfield Type 1in the Increase byfield Leave the options Repeatand Leading zerosempty The format is decimal, by default Click on the OKbutton Place the caret at the beginning of the fileand then press Endto put it after the blank spacethat follows 100. Again, open the Column editor( Edit-> Column Editor…, or Alt + C) Check the Number to Insertoption Type 40001in the Initial numberfield Type 1in the Increase byfield Leave the options Repeatand Leading zerosempty The format is decimal, by default Click on the OKbutton The replacement list is done. You should have got a list as to this one: 100 40001 101 40002 102 40003 103 40004 104 40005 [...] 32864 72765 32865 72766 32866 72767 32867 72768 32868 72769 Copythe whole list ( Ctrl + A) Open a copyof the file to be changed. Go to the last line. Enter, then type ===( three equal signs) and press Enteragain. Pastethe list to get the following: 30000# A tuber that can be fried, baked, boiled mashed, even eaten. === 100 40001 101 40002 102 40003 [...] Return to the first lineof the file ( Ctrl + Home) Open the Replacedialog ( Ctrl + H) Deselect all options except the Regular expression searchmode Set the following fields as follows (Copy/Paste): Search: (?s)^(\d+)(?=#\R.*?===.*?\1 +(\d+)) Replace: ?1$2 Now, the last action: Click on the Replace Allbutton Close de Replacedialog. That’s all. All the numbers should have been changed. Best Regards. Now THAT is a serious workaround, for the truly desperate. :-) @Alan-Kilborn said in Replacing number with another Incrementing #: Now THAT is a serious workaround, for the truly desperate. :-) Yep, looks as a quite hard task, but actually isn’t. It would be easier to deliver if the macro feature could record Column Editor outputs. Later will try your nice Python script. @Alan-Kilborn said in Replacing number with another Incrementing #: Specify ^\d{3}(?=#) in the input box that appears. Press OK. Tested and worked fine on sample text :) However, I found a potential failure. OP told us that he wanted to replace 30,000 numbers, so if these grow by one, as the sample text suggests, then the regex you provided will fail to match four or five digit numbers. If this is the case, then as you know, expressions like ^\d{3,5}(?=#)or ^\d+(?=#)will match all the numbers. I found a potential failure. True, but also kind of obvious. :-) Plus, I should have used “e.g.” on that part of it, like I did in 2 other places. But yes, the OP may not be versed in regex, and may not know how to specify a solution that covers all his cases, so thanks for the pick-me-up on that. For me, it was more about publishing the script, which I had sitting unfinished in my N++ tabs ever since @Ekopalypse published his original script (which only selected multiple occurrences of static text). @Alan-Kilborn said in Replacing number with another Incrementing #: For me, it was more about publishing the script Rest assured your script is a nice and useful improvement, since it allows users to make more complex selections with greater flexibility. Thank you for sharing it with us :) Hi @raizkie19, @Ekopalypse, @Alan-Kilborn, All I have just realised that, given the regularity of the problem in question, it can also be stated as a mathematical operation, a simple addition. This approach, which perhaps was the one that @ekopalypse had in mind in his post above, gives rise to a third alternative, simpler and more direct than the two posted so far, since although it still requires the Python plugin and a regex, it doesn’t need any list created with the Column Editor. The following script is a very minor adaptation of a code posted by @ekopalypse to solve a similar problem, so all credits belongs to him: from Npp import editor def add_raizkie_number(m): return int(m.group(0)) + 39901 editor.rereplace('\d+(?=#)', add_raizkie_number) So, @raizkie19 hope you can test it and see if it deliver the expected outcome. It worked fin here. Have fun! @astrosofista said in Replacing number with another Incrementing #: very minor adaptation of a code posted by @ekopalypse to solve a similar problem so all credits belongs to him Actually, I think all of the credit goes back to the Pythonscript documentation! Note, though: numbershould be intin the documentation! - Ekopalypse last edited by @Alan-Kilborn @astrosofista More specifically, I have modified the result of calling help(editor.rereplace) in the PythonScript console. >>> help(editor.rereplace) Help on method rereplace: rereplace(...) method of Npp.Editor instance rereplace( (Editor)arg1, (object)searchRegex, (object)replace) -> None : Regular expression search and replace. Replaces 'searchRegex' with 'replace'. ^ and $ by default match the starts and end of the document. Use additional flags (re.MULTILINE) to treat ^ and $ per line. The 'replace' parameter can be a python function, that recieves an object similar to a re.Match object. So you can have a function like def myIncrement(m): return int(m.group(1)) + 1 And call rereplace('([0-9]+)', myIncrement) and it will increment all the integers. - PeterJones last edited by @Alan-Kilborn said in Replacing number with another Incrementing #: Note, though: number should be int in the documentation! That documentation bug was fixed in v1.5.3 in February. v1.5.4 has been released since, with the fix to the getLanguageDesc() historical bug which we finally reported in #146. May it be, mine was just a disclaimer, as I know almost nothing about Python :)
https://community.notepad-plus-plus.org/topic/19345/replacing-number-with-another-incrementing
CC-MAIN-2020-45
refinedweb
1,945
55.03
It's that time again. ;) Question five reads: 2520 is the smallest number that can be divided by each of the numbers from 1 to 10 without any remainder. What is the smallest positive number that is evenly divisible by all of the numbers from 1 to 20? 2520 is the smallest number that can be divided by each of the numbers from 1 to 10 without any remainder. What is the smallest positive number that is evenly divisible by all of the numbers from 1 to 20? And here are my solutions (which I hope are correct this time). Haskell: module Main where import Data.List divisible_11_to_20 number = 10 == (length $ unfoldr(\x -> if (snd $ quotRem number x) /= 0 || x < 11 then Nothing else Just(x,x - 1)) 20) -- solved this by the help on this URL:-- (http)basildoncoder.com/blog/2008/06/10/project-euler-problem-5/-- by increaing the loop from 2 to 2520, problem solves in secondsmain :: IO ()main = print $ until (divisible_11_to_20) (+2520) 2520 As a quick note, yes I know that I could do this much quicker and cleaner with a list comprehension. I decided to use unfoldr because I wanted the experience of working with it. If it wasn't for this little desire my answer would have looked a lot more like my Python answer. Python: #!/usr/bin/python def div_11_to_20(divided): return all([not bool(divided % x) for x in xrange(11,20+1)]) if __name__ == "__main__": count = 2520 while div_11_to_20(count) == False: count += 2520 print "%s" % count And finally my Perl solution: #!/usr/bin/perl sub divide_11_to_20{ my ( $divided ) = @_; foreach (11..20) { return 0 if ($divided % $_); } return 1;} my $main_count = 2520;while ( !divide_11_to_20($main_count) ){ $main_count += 2520;} print $main_count; Run Times: Haskell: 0m1.302s Python: 0m0.769s Perl: 0m0.223s Observations: It doesn't surprise me that the Perl solution ends up being the fastest on these run times. I say this because per iteration the Perl solutions has less calculations. Both the Haskell and Python solutions perform 10 divisions per iteration, where as the Perl solution only performs 10 divisions when the correct number is is being divided. Its one of those things where the difference is very small, but will become larger as the number of iterations increases. executes in about 2 seconds isDivisible <- function(x) { for(i in 2:20) { if(x%%i==0){next} else return(F); } return(T); } i <- 2520 > while(i < y){if(isDivisible(i)) {print(i); break}; i <- i+2520; if(i%%1000000==0) this tweak makes it subsecond execution ... isDivisible <- function(x) { for(i in 11:20) { if(x%%i==0){next} else return(F); } return(T); } This program can find the lowest number divisible from 1 - n. You can set the limit to n. n can be any number you desire. Source : public class test { public static long LCM(long a,long b) //Function to find LCM { long num1=a,num2=b; while(a!=b) { if(a<b) { a+=num1; } else { b+=num2; } } return a; }public static void main(String args[]){ long num = 1; int limit = 20; //This is the limit,you can change it to whatever you want. int i=2; //Initializing i to 2 System.out.println("This program will find out the least number which is divisible by 1-20."); while(i<limit) //While loop executes till i reaches the limit. { num=LCM(num,i); //Calling LCM function to calculate the LCM of the current'num' and 'i' i++; //Incrementing i } System.out.println("The answer is: "+num); //Prints the final answer. }//End of main}//End of class my python code: def gcd(a, b): if b > a: return gcd(b, a) if b == 0: return a else: return gcd(b, a % b) def lcm(a, b): return b * a / gcd(a, b) s = 1 for i in range(1, 11): s = lcm(i, s) print s I had interestingly attempted the same problem some time back (documented here at : ) My solution though much longer, completes the task in about 2 seconds for a number set of 2 to 1 million #!/usr/bin/python def div_11_to_20(divided): return all(not bool(divided % x) for x in xrange(11,20+1)) if __name__ == "__main__": count = 2520 while div_11_to_20(count) == False: count += 2520 print "%s" % count As you observed, it is not useful to do all the divisions, use a generator expressions instead of a list comprehension. Your 'divide 11 to 20' routine is expressible much more compactly using List::MoreUtils::all: use List::MoreUtils qw( all ); for(my $x = 2520; ; $x += 2520) { print "$x\n" and last if all { $x % $_ == 0 } 11 .. 20;} #!/usr/bin/python3 def gcd(a,b): if b>a: return gcd(b,a) if b == 0: return a return gcd(b, a-b*(a//b)) def lcm(a,b): return abs(a*b)//gcd(a,b) def maxLcm(num): prod = 1 for i in range(1,num+1): prod = lcm(i,prod) return prod print(maxLcm(20)) Fast enough to be instant for input values into the thousands. Here Jeremiah, Thanks for sharing your solution. It looks good, and runs much faster than mine ;). One quick question, if your using "//" for your division, doesn't abs() become unnecessary? This runs in 94 us on my machine from collections import defaultdictimport operatorfrom itertools import starmap primes = [2,3,5,7,11,13,17,19] def factor(n): factors = defaultdict(int) p = 0 while n > 1: if n % primes[p] == 0: n /= primes[p]; factors[primes[p]]+=1 else: p+=1 return factors def gf(n): factors = defaultdict(int) for i in range(n): ifact = factor(i) for p,c in ifact.iteritems(): if factors[p] < c: factors[p] = c return reduce(operator.mul,starmap(pow, factors.items())) if __name__== '__main__': print gf(20) Hey Anon, First time I looked at your code I wondered if it would work. But it does, and its quite fast too. (Although Jeremiah's submission does run faster on my machine. Probably something to do with being run in Python3.) I've never seen Python code like this before, so I think it's going to take me a while to turn it to plain English, well Python English anyway. Thanks for sharing your solution and for expanding my Python knowledge. Mine is actually finding the lcm, just in a weird way. It is finding the minimum list of prime numbers that contains the prime factors of all of the numbers between 1 and 20. If I had realized it at the time, I probably would have done something saner like the other commenters suggested. This runs about an order of magintude faster for me than my previous weird implementation: def gcd(a,b): # Euclid's algorithm while b > 0: a, b = b, a % b return a def lcm(a,b): return a * b / gcd(a,b) print reduce(lcm, range(1,21)) Thanks to Acme::Tools #!/usr/bin/perlsub gcd { my($a,$b,@r)=@_; @r ? gcd($a,gcd($b,@r)): $b==0 ? $a : gcd($b, $a%$b) }sub lcm { my($a,$b,@r)=@_; @r ? lcm($a,lcm($b,@r)) : $a*$b/gcd($a,$b) } print lcm(1..20), "\n";#prints: 232792560 WOW as well niceperl! Thanks for sharing Acme::Tools. I'll have to use it in the future. How about: result = foldl lcm 1 [1 .. 20] You dont need to go from 1 to 20, as everything from 1 to 10 is covered by items 11 to 20. foldr lcm [11..20] (apply lcm (build-list 21 add1)) .4 seconds on an old laptop WOW, that is nice! And thanks for showing me lcm. I didn't know it existed. If you made it this far down into the article, hopefully you liked it enough to share it with your friends. Thanks if you do, I appreciate it.
http://scrollingtext.org/project-euler-problem-5
CC-MAIN-2014-49
refinedweb
1,311
63.09
Modern C++: What You Need to Know - Date: April 3, 2014 from 2:30PM to 3:30PM - Day 2 - Room 2005 - 2-661 - Speakers: Herb Sutter - 122,497 Views - 39. This is a great introduction to the current state of the language, including a glimpse into the future of general purpose, performance-intensive, power-friendly, powerful native have some new material or more like for ppl who dont know modern C++. I ask because I watched all Herbs lectures in past couple of years... (Im your biggest fan :P) Will this only be available 24 - 48 hours afterwards or will it be streamed live? The main schedule doesn't mention it. No streaming? :( No livestream for Modern C++, (talking about IaaS ) :( I'm watching the video now and have been programming C++ for 15+ years, and I found this talk useful, if just a reminder on why we use C++. There's also a performance-geeky section in the middle. Hope you enjoy it! If you're a C++ regular and wondering if you need to watch this, you need to just for the prefetch stuff at about 40min in! Herb, you could also mention Sum Types in your Types table. In C++ we have boost::variant Excelent talk! Every major ISV uses 98C++. Adding Concurrent features into C++ is not part of the philosophy of Imperative Architecture. You're wasting your time learning anything other than 97/98! Java is for most. .NET is for experts. C++ is for end-of-your-life. Get a clue! Adding Javascript, OpenCL Scripts, Querying, Markup all into C++ SYNTAX is awkward. Pros/cons not Plus/minus is correct about C++. The next architecture or Foo has only Plus with no minus. Screw-up your life with SSRIs or misusing C++. Lets' add Concurrency Language syntax into SQL! And next D-HTML-X version 6 with a Opera 4. Way to go. Bad designed code is bad in any language and OOP does not guarantee you anything. Functional programming is not a silver bullet. You might want to drop immutability from time to time to squeeze the last drop of performance out of your hardware. Great talk Herb. Any chance to ask you a question by mail (or different channel), since it is a bit longer and covers some of the aspects you mentioned in the memory related part of your talk? Good talk! Nice to see DOD (Data-Oriented Design; demoed & mentioned in the "Data Locality" article by Bob Nystrom) becoming more widespread! :) For anyone interested in learning about DOD, I've found "The Latency Elephant" and "Pitfalls of Object Oriented Programming" by Tony Albrecht (also mentioned in the article, together with a pretty interesting book by Richard Fabian) pretty good & motivating: - - - (PDF) Mike Acton's presentation from GDC 2014 is quite informative: "Code Clinic: How to Write Code the Compiler Can Actually Optimize" (PPTX) "Introduction to Data-Oriented Design" from DICE also gives a nice overview (see also the links at the end): "A Step Towards Data Orientation" and "Practical Examples in Data Oriented Design" provide a few examples: - - More: Or, last but most definitely not least, here's _THE_ definitive source: "What every programmer should know about memory" by Ulrich Drepper: // If you read just one doc, read this one :) The video buffering takes ages. If you can't host your videos so they are useable why not host them on youtube? Hello everyone, Any know what program they use to make this video? I.E. the speakers' video is on the left with his slides displaying on the right. I'm a teacher and I would like to do something similar. I want to record my lectures with the slides for future viewing. With respect to Bjarne Stroustrup, I would have written the Python arithmetic mean function this way def mean(seq): return sum(seq) / len(seq) Which is faster and shorter than the manual loop. Python example for mean wouldn't pass code review either You can do that C++ as well: auto mean(const Sequence& seq) { return std::accumulate(cbegin(seq), cend(seq), 0.0) / seq.size(); } He probably just did that to illustrate the new for range loop. @Olivier: Thanks, I'll add a slide for next time to show the alternate Python/C++ side-by-side. Your Python example: defmean(seq): return sum(seq) / len(seq) C++ example (you could write sum as a one-liner and shorten this up further but I'll stick to only std:: operations): auto mean(const Sequence& seq) { return accumulate(begin(seq),end(seq),0) / seq.size(); } C++ arithmetic mean could be return std::accumulate(begin(s), end(s)) / s.size(); Ninja'd by the best Thank you very much Herb for your excellent talk on modern C++ and on efficiency/performance topics.I thoroughly enjoyed as always with your talk(especially the slides of access patterns matters) overall about performance. However I have couple of doubt/query from this talk: 1. While explaining about move semantic you have taken the example of collection/container type and mentioned that it would move the resource/pointer ownership to other object and let original in empty state. This is my understanding about move semantics as well. However TC++PL book mention about this we should leave the source object in a valid but unspecified state. The above line(unspecified) confuse a bit. I would like to know what else can be a source object (apart from empty) after move semantics gets called on that particular object.Is there any real world example where keeping object in unspecified state make sense rather than empty??. 2. Is it possible to share your source code which you had used while demonstrating the concept Access Patterns matter(slide no:30). I would like to do some experiment on my laptop with your program(It has L1:3MB) and try to find the spikes(in the total CPU cycle taken) when my data size is below the 3MB and when above the 3MB. 3. I always uses the std::vector as my default choice and this has been demonstrated in your talk as well. I also map/set container where look-up time is really important. I was wondering when using std::list(practical) would be really efficient and beneficial against the std::vector(as even vector beats list in insertion/deletion of node)? Do you have any suggestion on this? Mantosh Kumar Also, don't get fooled by the so called data-driven design. It is a glorified way to optimize loops, such as your typical graphics pipeline that transforms some arbitrary spatial data structure (might be just a flat list) to command buffer packets of some sort. It is far from being a real paradigm. @Olivier, Could you please explain the semantics of sum and len in Python? You are calling sum and len. Does it mean go through the sequence twice? I answer my own question. All the objects with which the Python len function works, keep track of their own length. Therefore, the time complexity of len is O(1)? @George: perhaps these will be useful: - - // note: links to Video & Slides are at the bottom Parallel STL oh yes please... now this type of convergence for C++ is the truly beautiful kind type Guys don't hijack the topic into Google vs Microsoft.. Python vs C++/C#.. C++ is the clear winner here. C# follows. Python is an auxiliary player in Windows and Linux world. BTW sucks to be you python devs.. trying hard out there! F Google. This was brilliant! A blast! I did a very naïve test with C# List<T> and C++ Vector<T> and I was not able to reproduce any performance benefit out of the box form C++ Vector. Thank you talk. On your slide at 16 min you have following sample: Matrix operator+( const Matrix&, const Matrix&); m4 = m1 + m2 + m3; This code can be improved with following change: Matrix operator+(const Matrix& a, const Matrix& b){ auto res = a; res += b; return res; } m4 = m1 + m2 + m3; // matrix get two times copy constructed while: Matrix operator+(Matrix a, const Matrix& b){ a += b; return a; } m4 = m1 + m2 + m3; // matrix get copy constructed only once Note, I can't write: Matrix operator+(Matrix a, const Matrix& b){ return a += b; } I must Matrix operator+(Matrix a, const Matrix& b){ a += b; return a; } or Matrix operator+(Matrix a, const Matrix& b){ return std::move(a += b); } Why? Excellent! I enjoyed it very much. Thank you. It is amazing how little things change my perspective in bigger issues. * View the prefetch buffer as an infinite cache. * "Your intuition is not reliable." It's good to see C++ get more "modern". Getting more modern seems to mean getting closer to C# in ease of use but with better performance. That's all very nice but as a language, despite being "modern", I can't see myself using it unless I need the performance as it still seems so baroque despite all the hand waving. Before using C# I had been using C++ for about 10 years, and way before the "modern" period. At the time I loved using STL and various forms of smart pointers, but it came as relief to leave that all behind and go on to C#. Despite basic type inference to enable the "auto" keyword and smart pointers and lambdas, it still seems that C++ just keeps getting more complex overall, despite some simplifications bought on by the new features. Having used F# for the past 4 years, I now view C++, C# and similar languages are perpetuating a sort of dark ages in programming that may take decades (if ever) to overcome. While C++ and C# have borrowed bits of functional programming, they simply don't foster true high level approaches to problem solving in ways that OCaml, Haskell, F# and similar languages do. Despite the "modern" face lift, compare what's still missing from C++ now with what ML had 40 years ago - immutability by default, algebraic data types, pattern matching, powerful type inference to name some very important features and an elegant intuitive syntax. Modern languages that have derived from ML, like OCaml, F# and Haskell, have ML in their DNA which in practice means they are inherently more capable at tackling complex problems then the Algol derived languages such as C++, C# and Java. So yes, it's nice to have C++ add these new features, but unless one needs the speed why would anyone go through the pain of using a language that's so baroque and complicated. The new features may make it a bit more palatable but unless you're an existing C++ user and really need the speed why bother with C++ when there are much better alternatives.). Great talk as usual! I liked part about memory access patterns. So simple mistake can hurt code performance a lot. Great talk Herb. As a C++ beginner I find your enthusiasm for the language very inspiring, and your talks are always easy to understand and at the same time very deep. Thanks. Excellent! The quality and efficiency C++ reflects in the talk and presentation!! Looking forward for more sessions on C++ or native technologies. Its really refreshing to view these sessions. Remove this comment Remove this threadclose
http://channel9.msdn.com/Events/Build/2014/2-661
CC-MAIN-2014-35
refinedweb
1,900
62.27
Help setting up Marble to work with Qt I am trying to set up Marble to work with Qt 5.5 on OSX. I'm not very experienced with the details of building, linking and such and I think that is causing the problem I am having. Question: Did I screw up the 'marble` install and if so, can someone outline the steps to clean and install correctly. Qt 5.5 is installed in my user directory (using Qt's network installer) on a system running OSX 10.9.5. It works fine. I followed the instructions on the Marble site to clone, build and install from source with (I believe) the appropriate Qt flags. That seemed to go without issue. When I try to build the simple test app listed here, the #include <marble/MarbleWidget.h>line gives a "file not found" error. After the install I've ended up with the following: A "marble" directory in my root user folder In my /usr/loca/ directory there is a "Marble.app" file along with various other marblerelated files in the bin and include directories. However the Marble.app gives the error below on launch,, my "include" doesn't work as noted, and the "libMarbleWidgetPlugin.so" plugin isn't recognized when it's dropped into the plugin directory. Dyld Error Message: Library not loaded: @rpath/QtCore.framework/Versions/5/QtCore Referenced from: /usr/local/Marble.app/Contents/MacOS/marble-qt Reason: image not found Binary Images: 0x7fff6a1f9000 - 0x7fff6a22c817 dyld (239.4) <7AD43B9B-5CEA-3C7E-9836-A06909F9CA56> /usr/lib/dyld Hi, Did you call macdeployqt on Marble.app ? Did you call macdeployqt on Marble.app ? The "Marble.app" is built by cmake – so, no I just used the cmd in the instructions I linked to. I am actually more concerned about just being able to use the marbleWidgetin my own projects – and since my #includestatement fails, I must not have set things up correctly. Sorry, I misunderstood your problem. First thing: did you call make install ? If so you should have everything under the /usr/local/folder Yes I followed the instructions (below) and it all proceeded without error. It appears that lots of marblerelated files (headers, etc) are in various directories under /usr/local/. I don't know how to link to them. mkdir -p ~/marble/build cd ~/marble/build cmake -DCMAKE_BUILD_TYPE=Release -DWITH_KF5=FALSE -DCMAKE_INSTALL_PREFIX=/usr ~/marble/sources make sudo make install - SGaist Lifetime Qt Champion You can either use Qt Creator's Add Library feature or write it by hand in your .pro file. INCLUDEPATH += /usr/local/include LIBS += \ -L/usr/local/lib \ -lmarble I don't know the exact name of the library but it's probable something like that Thanks. I used the "add Library" function and ended up with the following additions to the .pro file. I had to past in the link to the lib since the dialog doesn't allow browsing of "usr/local/". Also, although I tried to link to the lib alias ("libmarblewidget-qt5.dylib"), the actual lib name is used. macx: LIBS += -L$$PWD/../../../../../usr/local/lib/ -lmarblewidget-qt5.0.21.80 INCLUDEPATH += $$PWD/../../../../../usr/local/include DEPENDPATH += $$PWD/../../../../../usr/local/include But it worked! I don't understand what this "$$PWD/../../../../../" business is. <rantish> I wish this install/build stuff wasn't so damned arcane. I've spent endless hours just trying to get things set up – really sucks the joy out of exploring this stuff. </rantish> Update So I cleaned out the "$$PWD" crap and the literal lib name and it works. Not sure what difference it makes – but there you go. macx: LIBS += -L/usr/local/lib/ -lmarblewidget-qt5 INCLUDEPATH +=/usr/local/include DEPENDPATH +=/usr/local/include Good ! By the way, you have now the "Topic Tool" button to mark the thread as solved, no need to change the title anymore :)
https://forum.qt.io/topic/60570/help-setting-up-marble-to-work-with-qt
CC-MAIN-2018-13
refinedweb
644
58.58
User Details - User Since - Jun 9 2014, 4:10 AM (266 w, 13 h) Mon, Jun 17 Dec 12 2018 Since you're using it twice already, is it worth abstracting the "find the appropriate function using dlsym" logic into a function? I imagine that there will be other user-interface functions which will also need this, so abstracting it sooner rather than later seems a good thing to do. Dec 14 2017 Can't we shorten this? I'd prefer Jul 7 2017 In the proposed Windows code, I think it would be useful to stash the value in a static variable, so that we only make the system call once. While it's very unlikely that this is ever really going to be on a performance path, it's easy to do. Something like static inline int getpagesize(void) { static int cachedPageSize = 0; if (cachedPageSize == 0) { SYSTEM_INFO si; GetSystemInfo(&si); cachedPageSize = si.dwPageSize; } return cachedPageSize; } Of course, you could also do this with a static constructor, but the order of static construction gets complicated, and this is guaranteed to be safe. Mar 23 2017 Thanks, LGTM. Setting all flags in both branches seemed to me as better expressing my intents, but I can change it if you insist. I am not going to insist. I just find it odd to have a duplicate line of code, and the fact that it is the same in both if branches made me suspicious that there was bug lurking there. So, personally, I would find code like this clearer. Is "dep_list[i].flags.in = 1;" really supposed to be set in both of the if cases (lines 980 and 983)? If so, can't we lift that out of the if? Jan 12 2017 Good point. My personal preference would be to use "== 0", rather than logical not, since I find that easier to read. So Sep 13 2016 Code looks fine to me, but it'd be really useful also to add a regression test... May 24 2016 (as suggested by Jim) May 5 2016 LGTM, though the test code may need some reworking to handle Windows... May 4 2016 On the microtask stuff: I have no objection to this, but I'm very surprised that it is needed, since I was under the impression that Clang/OpenMP only ever emits an outlined function for the parallel region that takes a single pointer argument, and then generates code that uses offsets from that to find all the actual arguments. I have only made a top-level scan, so this needs more review. However, __kmp_baker_check should look like this Mar 21 2016 LGTM Mar 17 2016 Looks fine. Mar 15 2016 LGTM Mar 11 2016 Looks good to me. Oct 22 2015 Do you expect us to align the license to the licenses used by LLVM/OpenMP or will someone from your team take care of this? Oct 19 2015 If you choose this approach, 32Ki seems too small. A current maxed-out SGI is already at 256 sockets which could get you to 256*18*2 = 9216 threads, so over 1/4 of the way to your 32Ki. Oct 13 2015 I notice that there aren't any patches to kmp_gsupport.c. I looked in that file and didn't see any support for locking. What will happen when using gcc with this support? Will the instrumented locking routines be called to generate the promised callbacks? Sep 28 2015 Can't we roll this up into something which is more table-driven (or loopy) ? (Something more like the code in the testbed runtime, which, admittedly, doesn't parse this yet, but clearly could quite easily. Note that there's a loop here that extracts the sub-components of the string, then another loop to process them, rather than having five blocks of code, much of which is replicated. Sep 24 2015 It generally looks fine to me. Sep 16 2015 LGTM (Ca me parait bon :-) Aug 26 2015 The LLVM OpenMP landing page contains instructions on getting started that tell people to use the Makefile. Therefore that also needs to be updated to reflect this change. Aug 5 2015 Jul 21 2015 As I understand it from Jim, icc does not even need this capability any more (acting more like Clang does now) Jul 9 2015 It looks generally fine, but I am nervous about putting the ompt state placeholders in OpenMP's namespace (by giving them thename prefix "omp_" ). Is there a reason not to name them "ompt_", which would move them into our namespace? Jun 26 2015 Apr 9 2015. Nov 25 2014 From a runtime point of view we need to preserve backwards binary compatibility, so we can't change the interface to the current interface function to introduce a count (because that old code won't set it). Jun 9 2014 Man, I must be in a fuddle this morning. I blame it being Monday. My logic is utterly wrong :-( Ah, sorry, I should keep my nose out of stuff that I don't understand. This looks very dubious lib/CodeGen/CGStmt.cpp : 2076 assert(CD->hasBody() && "missing CapturedDecl body");
https://reviews.llvm.org/p/jcownie/
CC-MAIN-2019-30
refinedweb
864
69.41
Large tri fold designJobs ...[log ind for at se URL] [log ind for at se URL] 4) Remove or defer JavaScript and CSS that interferes with loading above-the-fold content. There are 4 JavaScript files served from. They should be combined into as few files as possible. [log ind for at se URL]... I need some help with internet app marketing to promote my app to a large audience which includes young men and women and big adults.... I want to design Marketing Material for my Tri Folder prints I need skilled English writer and Call to action Marketing Expert the business is Auto Collision center and results please contact us note:experienced persons only wanted looking forward to have A finance spreadsheet has grown to a size of 102 MB and its become real slow and almost impossible to work with. Looking for a permanent makeup trip fold brochure/ pamphlet to give an overview to potential clients. Please reference to [log ind for at se URL] for references. Just need a basic outline we can put in the detailed information. Looking for a clean design. Preferable colors Black, White and gold. Thank you! import search filter large json file . search edit export data in many ways xls pdf etc.. will provide sample of small file and example data inside it once we discuss. winner of this project who place best bid . thanks ..
https://www.dk.freelancer.com/work/large-tri-fold-design/
CC-MAIN-2018-43
refinedweb
234
73.78
Understanding JavaScript Modules: Bundling & TranspilingBy Mark Brown This article was peer reviewed by Dan Prince and Ravi Kiran. Thanks to all of SitePoint’s peer reviewers for making SitePoint content the best it can be! Most folks consider modules, dependency management and dynamic loading a basic requirement for any modern programming language—these are some of the most important features added to JavaScript in 2015. More from this author Modules are used extensively in Node but our focus here will be on how we can use modules inside of the browser. We’ll explore a little history, navigate through the hazardous current landscape with the end goal of having a clear path forward and appreciation for the most important module bundlers for JavaScript today: Browserify, Webpack and jspm. Finally we’ll look at how to use these tools with transpilers like CoffeeScript, TypeScript and Babel. Modules through the Ages JavaScript has existed since 1995 and to this day no browser supports modules natively. Node and CommonJS were created in 2009 and the vast majority of packages in npm use CommonJS modules. Browserify was released in 2011 and brought CommonJS modules to the browser allowing client-side JavaScript to require npm packages. The tool bundles up all of the required dependencies into a single JavaScript file. The Past A library such as jQuery adds $ to the global scope or window. window.$ = function() { ... }; We include a script to a library and use the global objects it exposes. <script src="jquery.js"></script> <script> $(function() { ... }); </script> Your own application code was typically namespaced under a global like App to prevent polluting the global scope. Without this it’s only so long before you have name collisions and things fall apart. var App = {}; App.Models = {}; App.Models.Note = function() {}; The Future Libraries export objects in a common module format (ES6 modules). export default function $() { ... } We import a module into a local scope and use it. import $ from 'jquery'; $(function() { ... }); - No globals required 👍 - Source order independence - Access to npm - No need to namespace your own application code - Dynamically load modules at any time as required The Present It’s really really complicated. First, there’s a variety of module formats out there in use: Tools for bundling assets come in a variety of shapes and sizes: Then there’s transpilers that you may want to use: - Babel for ES6 - CoffeeScript - Typescript In addition, there are various libraries that allow dynamic loading of modules: These are shortened lists of popular tools currently in use—it’s a minefield for beginners and experts alike. The cost of transpiling also highlights that you can mix and match a lot of these tools and get different results. Let’s Consolidate Tooling in 2016 Front-end developers have been using build tools for a very long time but it’s only in the last few years that we’ve seen a build step become the norm. Tools like Sass and CoffeeScript helped make pre-processing mainstream but the momentum around ES6 has now got everyone on board. JavaScript community made some great improvements in 2015, but we need to consolidate tooling in 2016.— Nicolás Bevacqua (@nzgb) January 8, 2016 I agree. Gulp and Grunt have been very popular in the past few years, these tools allow you to write a series of transforms to pipe your assets through. They’ve been used to great effect and are still popular, though a lot of people choose to use the tools directly through npm – see Why I left Gulp and Grunt for npm Scripts and Guide to using npm as a Build Tool. Personally, I don’t care for building asset pipelines any longer, what I’m looking for is minimal config tools that allow me to use modern tooling as needed: Things like Sass, Autoprefixer, Babel and Coffeescript, a proper module system and loader without having to worry about the implementation, configuration and ongoing maintenance. In essence, every developer has been investing time into creating asset pipelines over the last few years, that’s a lot of wheel re-invention going on and a lot of wasted hours. The community is divided across tools like Browserify, Webpack, jspm, Sprockets and Gulp. That’s not really a problem, it’s just confusing for everyone trying to understand a clear path forward. Clear Starting Points There’s a few things we can agree on: - ES2015 modules are the one true future module format for JavaScript. - Babel is the ES2015 compiler of choice today. - Native loaders are still a while away from being available in browsers, a report on the Future of JavaScript by Telerik suggests that complete ES2015 support could take over two years given the module loading hurdle. - If you want to use modules now, that will very likely involve CommonJS at some point. Let’s look at what minimal configuration setups look like using Browserify, Webpack and jspm, these are the most important JavaScript bundlers to know about today. A New Project mkdir modules-app cd modules-app npm init -y npm install --save-dev browserify webpack jspm mkdir src touch src/{entry,lib}.js index.html Update index.html in your favourite text editor <!DOCTYPE html> <html lang="en"> <head> <meta charset="utf-8"> <title>Modules!</title> </head> <body> <script src="bundle.js"></script> </body> </html> We’ll also need a server to run the code—for example live-server which is a great little zero-config HTTP server with live reload capability. Install it globally with npm install -g live-server and run live-server from the project root to start. Browserify Browserify lets you require('modules') in the browser by bundling up all of your dependencies. Open up src/lib.js and add our very first module. var double = function(number) { return number * 2; } module.exports = { double: double } Open up src/entry.js and we’ll require our module and use it. var lib = require('./lib.js'); console.log(lib.double(2)); Update the scripts section in package.json "scripts": { "browserify": "browserify ./src/entry.js -o ./bundle.js" }, Run this script with npm run browserify Browserify will create bundle.js in the project root and you should see a most exiting 4 output to the console. To learn more about what Browserify is doing and how this bundle gets created I recommend watching Introduction to Browserify at egghead.io Congratulations! We now have modules in the browser! 🎉 Another key benefit of Browserify is that it gives you access not only to modules that you author, but to npm modules as well, let’s install lodash to see. npm install lodash --save-dev Edit src/lib.js var sum = require('lodash/sum'); var double = function(number) { return number * 2; } var addFive = function(number) { return sum([number, 5]); } module.exports = { double: double, addFive: addFive } Edit src/entry.js and call our new addFive function var lib = require('./lib.js'); console.log(lib.double(2)); console.log(lib.addFive(2)); Create the bundle again with npm run browserify and in the browser you should see a 4 and a 7 which shows that we’ve successfully imported and used lodash’s sum function. If you’ve followed along this far you now know everything you need to get started using modules in the browser today, this brings many benefits that we outlined at the start. - No globals required 👍 - Source order independence - Access to npm - No need for namespacing your own application code We’ll look at dynamic loading of modules at runtime later. Webpack Webpack is a module bundler. Webpack takes modules with dependencies and generates static assets representing those modules. Let’s add a new script to package.json for calling webpack "webpack": "webpack ./src/entry.js bundle.js" Run it with npm run webpack Webpack will have rewritten bundle.js and the output in the browser should be exactly the same. Try running npm run browserify and npm run webpack and examining the differences in the compiled bundle.js file. It’s not really important to understand how these tools work internally, the important thing to note is that whilst the implementations are different they are essentially doing the same task of compiling the same code with CommonJS modules into standard browser-friendly JavaScript. Each module is put inside a function within bundle.js and assigned an ID so that it can be loaded as required. There’s far more to Webpack than this though! It truly is the swiss army knife of module bundlers. Webpack also comes with great tools for development out of the box, things like hot module replacement which will automatically reload individual modules in the browser as they are changed—similar to LiveReload but without the page refresh. There is a growing list of loaders for different asset types too, even CSS with the css-loader and style-loader—loaders which can compile CSS into the JavaScript bundle and inject it into the page at runtime. This is outside of the scope of this article but more can be found on this at getting started with Webpack. JavaScript Transpilers These are three of the most popular transpilers used today, you may also want to use another from the very long list of languages that compile to JS. Before looking at how we can use them with our module bundlers let’s look at how to use the tools directly first. npm install --save-dev coffee-script typescript babel-cli babel-preset-es2015 touch src/{coffee-lib.coffee,ts-lib.ts,es6-lib.js} CoffeeScript Edit coffee-lib.coffee sum = require 'lodash/sum' double = (number)-> number * 2 addFive = (number)-> sum([number, 5]) module.exports = double: double addFive: addFive Note: CoffeeScript uses the CommonJS syntax for modules Add a script to package.json to run the coffee executable "coffee": "coffee --output ./dist ./src/coffee-lib.coffee" Run it with npm run coffee TypeScript Edit ts-lib.ts /// <reference path="lodash.d.ts" /> import * as _ from 'lodash'; const double = (value: number)=> value * 2 const addFive = (value: number)=> _.sum([value, 5]) export = { double, addFive } Note: TypeScript has its own syntax for modules that look like a mix of ES2015 module syntax and CommonJS. Add a script to package.json to run the tsc executable "tsc": "tsc --outDir ./dist ./src/ts-lib.ts" Run it with npm run tsc The compiler will complain about not being able to find lodash as it requires a type definition to know how to work with external modules that aren’t TypeScript files. You can fetch a definition file with: cd src curl -O cd .. npm run tsc Babel Edit es6-lib.js import sum from 'lodash/sum'; const double = (number)=> number * 2 const addFive = (number)=> sum([number, 5]) export { double, addFive } Note: Babel understands the lovely new ES2015 module syntax. Babel requires a config file for specifying which presets to use echo '{ "presets": ["es2015"] }' > .babelrc Add a script to package.json to run the babel cli "babel": "babel ./src/es6-lib.js -o ./dist/es6-lib.js" Run it with npm run babel The files in /dist now contain ES5 code in CommonJS module format that will work perfectly with Browserify or Webpack as we used previously. You can either transpile down to ES5 with CommonJS first and then bundle, or you can use other packages to do both in a single step. For Browserify there are plugins coffeeify, tsify and babelify to transpile and bundle. For Webpack there are loaders coffee-loader, ts-loader and babel-loader to require modules in different languages. jspm jspm is a package manager for the SystemJS universal module loader, built on top of the dynamic ES6 module loader jspm takes a different approach and starts with the module loader System.js. System.js is a project that will follow the loader spec as it’s developed. Install and initialize a jspm project npm install -g jspm jspm init Accept all of the defaults and ensure that Babel is used as the transpiler, that will configure System.js to use Babel when it runs into ES6 style modules. Update index.html to load and configure System.js <!DOCTYPE html> <html lang="en"> <head> <meta charset="utf-8"> <title>Modules!</title> </head> <body> <script src="jspm_packages/system.js"></script> <script src="config.js"></script> <!--<script src="bundle.js"></script>--> <script> System.import('src/entry.js'); </script> </body> </html> In the browser you’ll see a few requests being made and a 404 for lodash, this is because jspm loads packages from the jspm_packages directory by default. Run jspm install lodash to install lodash in that directory and you should see the expected output in the console, a 4 and a 7, here’s what’s happening: - Our entry.jsfile is being dynamically loaded with System.import('src/entry.js');. - System.js loads entry.js, sees that it requires our libmodule so fetches it at runtime. - System.js loads lib.js, sees that it requires lodash/sumand fetches it too. System.js also knows how to work directly with ES6, update entry.js to dynamically require our ES6 module and compile on the fly. import lib from './es6-lib'; // import lib from '../dist/coffee-lib'; // import lib from '../dist/ts-lib'; console.log(lib.double(2)); console.log(lib.addFive(2)); You can also try loading the ES5 compiled versions of our CoffeeScript or TypeScript modules by uncommenting those lines one at a time. Another option is to use System.js plugins to transpile the code, rather than requiring precompiled ES5 code. Add a final script to package.json for creating a bundle with jspm "jspm": "jspm bundle src/entry bundle.js" Run it with npm run jspm Finally uncomment the script tag for bundle.js in index.html and the browser should load a production-ready bundle without any extra http requests. <script src="bundle.js"></script> Revisiting Webpack Our Webpack example earlier was the simplest example using the default options, it compiled entry.js with CommonJS modules down into a single bundle. When doing more fancy things with Webpack you’ll want to create a custom config file for all of the loader configuration. Create webpack.config.js in the root of the project module.exports = { context: __dirname + "/src", entry: "./entry", output: { path: __dirname, filename: "bundle.js" }, module: { loaders: [{ test: /\.js$/, loader: 'babel-loader', query: { presets: ['es2015'] } },{ test: /\.coffee$/, loader: 'coffee-loader' },{ test: /\.ts$/, loader: 'ts-loader' }] } } Update index.html to load only the bundled file again. <!DOCTYPE html> <html lang="en"> <head> <meta charset="utf-8"> <title>Modules!</title> </head> <body> <script src="bundle.js"></script> </body> </html> Install the loaders for transpiling with Babel, CoffeeScript and TypeScript npm install --save-dev babel-loader coffee-loader ts-loader Install webpack globally and run without arguments to create the bundle from our config file. npm install -g webpack webpack Now that webpack knows to use these loaders for these file extensions we’re free to use ES6, CoffeeScript or TypeScript from entry.js, give it a try by uncommenting these one by one. import lib from './es6-lib.js'; // import lib from './coffee-lib.coffee'; // import lib from './ts-lib.ts'; There is so much more to Webpack than I’ve covered here, but these simple setups are a great starting point. There and Back Again And so we end our exploration of modules, they do solve a lot of problems and can really reduce the complexity of our applications—if the tooling doesn’t get in our way. If you’re not already using modules, now is the time. No need to spend unnecessary hours building asset pipelines, instead use simple tools that Just Work™. Webpack is the current juggernaut and you’ll be able to configure it to do almost anything. jspm is a great tool for all your bundling needs and works with a variety of formats and has a nice developer experience. Browserify is still a solid option, the grandfather of modern module bundlers—it’s ecosystem has grown to include some of Webpack’s much loved features (such as bundle splitting and hot reloading). Finally System.js is perfect for when you need to be able to load extra modules at runtime. You won’t want to use all of the above tools in one project but it’s important to have an understanding of these three popular options, as well as how you can use transpilers when the need arises. If you just want to use modules, then Browserify, jspm or Webpack with the default options will do the job. Keep the tooling simple and the configuration light. Happy Bundling. - Namo - yoshi - Slava Ganzin - Miron Machnicki - markbrown4 - Abbe - markbrown4 - Abbe - markbrown4 - Abbe - Josh Caplin - Priyank Mandavawala
https://www.sitepoint.com/javascript-modules-bundling-transpiling/
CC-MAIN-2017-13
refinedweb
2,788
64.41
sendfile - transfer data between file descriptors #include <sys/sendfile.h> ssize_t sendfile(int out_fd, int in_fd, off_t *offset, size_t count)(2) will start reading data. When sendfile(2) returns, this variable will be set to the offset of the byte following the last byte that was read. count is the number of bytes to copy between file descriptors. Because this copying is done within the kernel, sendfile(2) does not need to spend time transferring data to and from user space.(2)-like operations. If the transfer was successful, the number of bytes written to out_fd is returned. On error, -1 is returned, and errno is set appropriately. sendfile(2) is a new feature in Linux 2.2. The include file <sys/sendfile.h> is present since glibc 2.1. Other Unixes often implement sendfile(2) with different semantics and prototypes. It should not be used in portable programs. 6 pages link to sendfile(2): lib/main.php:944: Notice: PageInfo: Cannot find action page
http://wiki.wlug.org.nz/sendfile(2)?action=PageInfo
CC-MAIN-2015-18
refinedweb
165
60.82
I have seen this in several places and also in the GUI. What exactly is a UDF? I will be happy to explain a UDF. But first, I have a question, have you taken a look at our Help Online section? Also are you looking to solve a specific problem? If so, what problem would you like to solve? Thank you for your speedy response! Indeed I have read the Help, and I found the explanation short. I need to know if this is a python function and does it work exactly like a python function. That's a very good question. Conceptually UDFs or User Defined Functions is user code that executes in a separate VM and today we support Python VM for this. The basic concept here is that users do not need to worry about how Xcalar works on large distributed problems, supporting infrastructure, corresponding operationalization issues. All this is abstracted from the user, and the user simply focuses on how to process a tiny slice or datapoint(s) as input and return a response. There are really 3 kinds of UDFs. Import UDF: Used to import data from a datasource of your choice. You pick the source, you pick the format, you pick the size of the data. You write the UDF. The input is the name of the file path and the data stream. You use a yield statement to return records. One call to yield creates one dict object and returns this as a record. import io def fetchRecord(ins): facility = next(ins).strip() #print "facility: {}".format(facility) street = next(ins).strip() #print "street: {}".format(street) row = next(ins).strip() #print "row: {}".format(row) suite = None if 'Suite' in row or 'Hwy' in row or 'Ste' in row or 'Floor'.lower() in row.lower() or 'Unit' in row or 'Box' in row or "Building" in row or '100 B' in row or '100 A' in row: suite = row #print "suite: {}".format(suite) row = next(ins).strip() row = row.split(',') print("row = {}".format(row)) city = row[0] stateZip = row[1] row = stateZip.split(' ') state = row[0] zipcode = row[1] phones = next(ins).split(':') phone = phones[1].strip() next(ins) next(ins) return facility,street,suite,city,state,zipcode,phone def parse(path, ins): startRec = True for line in ins: if len(line) == 2: facility,street,suite,city,state,zipcode,phone = fetchRecord(ins) yield {'facility' : facility, 'street' : street, 'suite' : suite, 'city' : city, 'state' : state, 'zipcode' : zipcode, 'phone' : phone} In the above example, the parse function is the entry point to the UDF. Notice it takes two parameters. The first parameter is path which we do not need in this code. The second parameter is the ins stream. We read a line at a time from this stream and parse it using fetchRecord. The details of this parsing are very specific to the file we are parsing. In this case, the file happens to be unstructured data. However, you only need to worry about how to solve this problem. If you had a million or a billion files, Xcalar will handle the complexity of MPP (Massively parallel processing) based import for you, and you only need to make sure you point to a dataset containing such files and import using the GUI and specifying this UDF. Map UDF: The Map UDF typically performs an operation on a column and returns a python string. Note it always returns a string. So you will need to convert it to a different data type if you need a different data type. Each invocation of your Map UDF will operate on one row. So if you had 1 billion rows, your function will be called 1 billion times totally on the Xcalar cluster. This allows for MPP style load management and makes it very easy for you to focus on functionality, while Xcalar focus on parallelizing your Map operation. scoreHash = {"breathlessness" : 3, 'wheezing' : 2, 'headache' : 2, 'swelling' : 2, 'insomnia' : 1, 'indigestion' : 1, 'nasal congestion' : 1, 'changes in skin color' : 2, 'itching' : 2, 'skin rash' : 2, 'hives' : 2, 'stomach pain' : 2, 'coughing' : 1, 'fever' : 2, 'watery eyes' : 1, 'runny nose' : 1, 'sore throat' : 1, 'fatigue' : 1, 'blood pressure' : 2, 'weakness' : 2, 'dizziness' : 3, 'abdominal cramps' : 2, 'nausea' : 2} def patientScore(ins): pScore = 0 if len(ins) == 0: return pScore symptoms = ins.split('|') for symptom in symptoms: pScore += scoreHash[symptom] return pScore Export UDF: This allows you to write a user based export target. You can export a table to any data sink. This could include files or even our datastores such as Kafka, Cassandra etc. This gives me a very clear picture of UDFs. Thanks for the real code examples. Python is my pet language. Can't wait to take advantage of Xcalar's powerful scaling infra! Thanks for your help Manoj!
https://discourse.xcalar.com/t/white-check-mark-what-is-a-udf-can-you-please-give-an-example/326
CC-MAIN-2020-40
refinedweb
800
75.3
Why and how specifically is a Scala Future not a Monad; and would someone please compare it to something that is a Monad, like an Option? The reason I'm asking is Daniel Westheide's The Neophyte's Guide to Scala Part 8: Welcome to the Future where I asked whether or not a Scala Future was a Monad, and the author responded that it wasn't, which threw off base. I came here to ask for a clarification. Futures can be considered monads if you never construct them with effectful blocks (pure, in-memory computation), or if any effects generated are not considered as part of semantic equivalence (like logging messages). However, this isn't how most people use them in practice. For most people using effectful Futures (which includes most uses of Akka and various web frameworks), they simply aren't monads. Fortunately, a library called Scalaz provides an abstraction called Task that doesn't have any problems with or without effects. Let's review briefly what a monad is. A monad must be able to define at least these two functions: def unit[A](block: => A) : Future[A] def bind[A, B](fa: Future[A])(f: A => Future[B]) : Future[B] And these functions must statisfy three laws: bind(unit(a))(f) ≡ f(a) bind(m) { unit(_) } ≡ m bind(bind(m)(f))(g) ≡ bind(m) { x => bind(f(x))(g) } These laws must hold for all possible values by definition of a monad. If they don't, then we simply don't have a monad. There are other ways to define a monad that are more or less the same. This one is popular. Almost every usage of Future that I've seen uses it for asychronous effects, input/output with an external system like a web service or a database. When we do this, a Future isn't even a value, and mathematical terms like monads only describe values. This problem arises because Futures execute immediately upon data contruction. This messes up the ability to substitute expressions with their evaluated values (which some people call "referential transparency"). This is one way to understand why Scala's Futures are inadequate for functional programming with effects. Here's an illustration of the problem. If we have two effects: import scala.concurrent.Future import scala.concurrent.ExecutionContext.Implicits._ def twoEffects = ( Future { println("hello") }, Future { println("hello") } ) we will have two printings of "hello" upon calling twoEffects: scala> twoEffects hello hello scala> twoEffects hello hello But if Futures were values, we should be able to factor out the common expression: lazy val anEffect = Future { println("hello") } def twoEffects = (anEffect, anEffect) But this doesn't give us the same effect: scala> twoEffects hello scala> twoEffects The first call to twoEffects runs the effect and caches the result, so the effect isn't run the second time we call twoEffects. With Futures, we end up having to think about the evaluation policy of the language. For instance, in the example above, the fact I use a lazy value rather than a strict one makes a difference in the operational semantics. This is exactly the kind of twisted reasoning functional programming is designed to avoid -- and it does it by programming with values. In the presense of effects, monad laws break. Superficially, the laws appear to hold for simple cases, but the moment we begin to substitute expressions with their evaluated values, we end up with the same problems we illustrated above. We simply can't talk about mathematical concepts like monads when we don't have values in the first place. To put it bluntly, if you use effects with your Futures, saying they're monads is not even wrong because they aren't even values. To see how monad laws break, just factor out your effectful Future: import scala.concurrent.Future import scala.concurrent.ExecutionContext.Implicits._ def unit[A] (block: => A) : Future[A] = Future(block) def bind[A, B] (fa: Future[A]) (f: A => Future[B]) : Future[B] = fa flatMap f lazy val effect = Future { println("hello") } Again, it will only run one time, but you need it to run twice -- once for the right-hand side of the law, and another for the left. I'll illustrate the problem for the right identity law: scala> effect // RHS has effect hello scala> bind(effect) { unit(_) } // LHS doesn't Without putting an ExecutionContext in implicit scope, we can't define either unit or bind in our monad. This is because the Scala API for Futures has these signature: object Future { // what we need to define unit def apply[T] (body: ⇒ T) (implicit executor: ExecutionContext) : Future[T] } trait Future { // what we need to define bind flatMap[S] (f: T ⇒ Future[S]) (implicit executor: ExecutionContext) : Future[S] } As a "convenience" to the user, the standard library encourages users to define an execution context in implicit scope, but I think this is a huge hole in the API that just leads to defects. One scope of the computation may have one execution context defined while another scope can have another context defined. Perhaps you can ignore the problem if you define an instance of unit and bind that pins both operations to a single context and use this instance consistently. But this is not what people do most of the time. Most of the time, people use Futures with for-yield comprehensions that become map and flatMap calls. To make for-yield comprehensions work, an execution context must be defined at some non-global implicit scope (because for-yield doesn't provide a way to specify additional parameters to the map and flatMap calls). To be clear, Scala lets you use lots of things with for-yield comprehensions that aren't actually monads, so don't believe that you have a monad just because it works with for-yield syntax. There's a nice library for Scala called Scalaz that has an abstraction called scalaz.concurrent.Task. This abstraction doesn't run effects upon data construction as the standard library Future does. Furthermore, Task actually is a monad. We compose Task monadically (we can use for-yield comprehensions if we like), and no effects run while we're composing. We have our final program when we have composed a single expression evaluating to Task[Unit]. This ends up being our equivalent of a "main" function, and we can finally run it. Here's an example illustrating how we can substitute Task expressions with their respective evaluated values: import scalaz.concurrent.Task import scalaz.IList import scalaz.syntax.traverse._ def twoEffects = IList( Task delay { println("hello") }, Task delay { println("hello") }).sequence_ We will have two printings of "hello" upon calling twoEffects: scala> twoEffects.run hello hello And if we factor out the common effect, lazy val anEffect = Task delay { println("hello") } def twoEffects = IList(anEffect, anEffect).sequence_ we get what we'd expect: scala> twoEffects.run hello hello In fact, it doesn't really matter that whether we use a lazy value or a strict value with Task; we get hello printed out twice either way. If you want to functionally program, consider using Task everywhere you may use Futures. If an API forces Futures upon you, you can convert the Future to a Task: import concurrent. { ExecutionContext, Future, Promise } import util.Try import scalaz.\/ import scalaz.concurrent.Task def fromScalaDeferred[A] (future: => Future[A]) (ec: ExecutionContext) : Task[A] = Task .delay { unsafeFromScala(future)(ec) } .flatMap(identity) def unsafeToScala[A] (task: Task[A]) : Future[A] = { val p = Promise[A] task.runAsync { res => res.fold(p failure _, p success _) } p.future } private def unsafeFromScala[A] (future: Future[A]) (ec: ExecutionContext) : Task[A] = Task.async( handlerConversion .andThen { future.onComplete(_)(ec) }) private def handlerConversion[A] : ((Throwable \/ A) => Unit) => Try[A] => Unit = callback => { t: Try[A] => \/ fromTryCatch t.get } .andThen(callback) The "unsafe" functions run the Task, exposing any internal effects as side-effects. So try not to call any of these "unsafe" functions until you've composed one giant Task for your entire program.
https://codedump.io/share/0LEA1SM79wbZ/1/is-future-in-scala-a-monad
CC-MAIN-2018-13
refinedweb
1,342
52.7
Build a collaborative note app using Laravel A basic understanding of Laravel and Vue.js is needed to follow this tutorial. In this tutorial, we’ll build an online collaborative note app using Laravel and Pusher. We’ll be using Vue.js as our JavaScript framework. The app is going to be basic but will demonstrate the necessary features of a collaborative application since that’s the focus of this tutorial. What we’ll be building Before we get our hands busy, let’s go over what we’ll be building. The app will be a simple note taking app that is accessible only to authenticated users. With the app, a user can create new note, edit the note and/or share the link to the note to other users for editing. In the case of editing a note, the app will be able to keep track of the users editing a particular note, show other users realtime edits that are being made on the note and lastly notify the other users when a user saves the note. Let’s get started! Setting up Laravel Create a new Laravel project by opening your terminal and run the code below: laravel new laravel-notes Setting up Pusher If you don’t have one already, create a free Pusher account here then log in to your dashboard and create an app. Take note of your app credentials as we’ll be using them shortly. For the purpose of this tutorial, we’ll be triggering some client events in our online collaborative note app. By default, when you create a Pusher app, client events are not enabled. We have to enable this for our app. To enable client events in your Pusher app, select the app then click on the App Settings tab, then check the box next to Enable client events. Now, let’s fill in our Pusher app credentials.: xxxxxxxxxxxxxxxxxxxx, }); Remember to replace the xs with your Pusher app key. Authenticating users As mentioned earlier, our collaborative note app will be only accessible to authenticated users. So, we need an authentication system: set up our database. Open the .env file and enter your database details: // .env DB_CONNECTION=mysql DB_HOST=127.0.0.1 DB_PORT=3306 DB_DATABASE=laravel-notes DB_USERNAME=root DB_PASSWORD= Update with your own database details. Now, we can run our migration: php artisan migrate NOTE: There’s a bug in Laravel 5.4 if you’re running a version of MySQL older than 5.7.7 or MariaDB older than 10.2.2. This can be fixed by replacing the boot() of app/Providers/AppServiceProvider.php with: // app/Providers/AppServiceProvider.php // remember to use Illuminate\Support\Facades\Schema; /** * Bootstrap any application services. * * @return void */ public function boot() { Schema::defaultStringLength(191); } Note model and migration Create a Note model along with the migration file by running the command: php artisan make:model Note -m Open the Note model and add the code below to it: /** * Fields that can not be mass assigned * * @var array */ protected $guarded = ['id']; /** * Get the route key for the model. * * @return string */ public function getRouteKeyName() { return 'slug'; } Instead of manually specifying each field that can be mass assigned in the $fillable array, we simply use $guarded and add the id column as the field that can not be mass assigned, meaning every other field can be mass assigned. Laravel route model bind will by default use the id column on the model, but in this tutorial, we want to use the slug column instead, hence the getRouteKeyName method which will simply return the column we want to use for route model binding. Within the databases/migrations directory, open the notes table migration that was created when we ran the command above and update the up method with: Schema::create('notes', function (Blueprint $table) { $table->increments('id'); $table->unsignedInteger('user_id'); $table->string('title'); $table->string('slug')->unique(); $table->text('body'); $table->timestamps(); }); Run the migration: php artisan migrate Defining app routes Open routes/web.php and replace the routes with the code below: Auth::routes(); Route::get('/', 'NotesController@index'); Route::get('create', 'NotesController@create'); Route::post('create', 'NotesController@store'); Route::get('edit/{note}', 'NotesController@edit'); Route::patch('edit/{note}', 'NotesController@update'); The routes are straightforward: routes that will handle authentication, a homepage route to list all notes created a user, routes for creating a new note and lastly routes to update a specified note. NOTE: Since we have removed the /home route, you might want to update the redirectTo property of both app/Http/Controllers/Auth/LoginController.php and app/Http/Controllers/Auth/RegisterController.php to: protected $redirectTo = '/'; NotesController Let’s create the controller which will handle the logic of our chat app. Create a NotesController with the command below: php artisan make:controller NotesController Open the new create app/Http/Controllers/NotesController.php file and add the following code to it: // app/Http/Controllers/NotesController.php use App\Note; public function __construct() { $this->middleware('auth'); } /** * Display a listing of all notes. * * @return \Illuminate\Http\Response */ public function index() { $notes = Note::where('user_id', auth()->user()->id) ->orderBy('updated_at', 'DESC') ->get(); return view('notes.index', compact('notes')); } /** * Show the form for creating a new note. * * @return \Illuminate\Http\Response */ public function create() { return view('notes.create'); } /** * Store a newly created note in database. * * @param \Illuminate\Http\Request $request * @return \Illuminate\Http\Response */ public function store(Request $request) { $this->validate($request, [ 'title' => 'required', 'body' => 'required' ]); $note = Note::create([ 'user_id' => $request->user()->id, 'title' => $request->title, 'slug' => str_slug($request->title) . str_random(10), 'body' => $request->body ]); return redirect('/'); } /** * Show the form for editing the specified note. * * @param \App\Note $note * @return \Illuminate\Http\Response */ public function edit(Note $note) { return view('notes.edit', compact('note')); } /** * Update the specified note. * * @param \Illuminate\Http\Request $request * @param \App\Note $note * @return \Illuminate\Http\Response */ public function update(Request $request, Note $note) { $note->title = $request->title; $note->body = $request->body; $note->save(); return 'Saved!'; } Using the auth middleware in NotesController‘s __contruct() indicates that all the methods with the controller will only be accessible to authenticated users. The index method will fetch the notes created by the currently authenticated user and render a view with notes. The create method will display a form to create new note. The store method will do the actual persisting of the note to the database. Notice we’re appending a random string to the slug so as to make it unique for each note. The edit method shows the form for editing a specified note. Lastly, the update method handles the actual update and persist to database. Creating Our Note App Views When we ran make:auth, Laravel created a master layout called app.blade.php which we are going to leverage with some slight additions. So open resources/view/layouts/app.blade.php and update the left side of the navbar with: <!-- resources/view/layouts/app.blade.php --> <!-- Left Side Of Navbar --> <ul class="nav navbar-nav"> <li><a href="{{ url('create') }}">Create Note</a></li> </ul> All we did is add a link to create new note on the navbar. Create new note view Now, let’s create the view for creating a new note. Create a new directory named notes within the views directory. Within the newly created notes directory, create a new file named create.blade.php and paste the code below to it: <!-- resources/views/notes/create.blade.php --> @extends('layouts.app') @section('content') <div class="container"> <div class="row"> <div class="col-md-8 col-md-offset-2"> <div class="panel panel-default"> <div class="panel-heading">Create new note</div> <div class="panel-body"> <form action="{{ url('create') }}" method="POST" class="form" role="form"> {{ csrf_field() }} <div class="form-group{{ $errors->has('title') ? ' has-error' : '' }}"> <input type="text" class="form-control" name="title" value="{{ old('title') }}" placeholder="Give your note a title" required autofocus> @if ($errors->has('title')) <span class="help-block"> <strong>{{ $errors->first('title') }}</strong> </span> @endif </div> <div class="form-group{{ $errors->has('body') ? ' has-error' : '' }}"> <textarea class="form-control" name="body" rows="15" placeholder="...and here goes your note body" required>{{ old('body') }}</textarea> @if ($errors->has('body')) <span class="help-block"> <strong>{{ $errors->first('body') }}</strong> </span> @endif </div> <button class="btn btn-primary pull-right">Save</button> </form> </div> </div> </div> </div> </div> @endsection This creates a form with two input fields (for title and body of the note respectively) and a save button. List all notes view Let’s give our users a way to see all the notes they have created. Within the notes directory, create a new file named index.blade.php and paste the code below into it: <!-- resources/views/notes/index.blade.php --> @extends('layouts.app') @section('content') <div class="container"> <div class="row"> <div class="col-md-8 col-md-offset-2"> <div class="panel panel-default"> <div class="panel-heading">My notes</div> <div class="panel-body"> @if($notes->isEmpty()) <p> You have not created any notes! <a href="{{ url('create') }}">Create one</a> now. </p> @else <ul class="list-group"> @foreach($notes as $note) <li class="list-group-item"> <a href="{{ url('edit', [$note->slug]) }}"> {{ $note->title }} </a> <span class="pull-right">{{ $note->updated_at->diffForHumans() }}</span> </li> @endforeach </ul> @endif </div> </div> </div> </div> </div> @endsection The simply displays a message if the user has not created any notes and a link to create a new note. Otherwise it will display all the notes created by the user in a list. Edit note view Let’s create the edit view which will allow users to edit a note. Within the notes directory, create a new file named edit.blade.php and paste the code below into it: <!-- resources/views/notes/edit.blade.php --> @extends('layouts.app') @section('content') <div class="container"> <div class="row"> <div class="col-md-8 col-md-offset-2"> <edit-note :</edit-note> </div> </div> </div> @endsection You will notice we’re using a custom tag with the edit view, this is our view component which we’ll create shortly. Now let’s create a Vue component. Create a new file named EditNote.vue within resources/assets/js/components directory and paste the code below to it: // resources/assets/js/components/EditNote.vue <template> <div class="panel panel-default"> <div class="panel-heading">Edit note</div> <div class="panel-body"> <div class="form-group"> <input type="text" class="form-control" v- </div> <div class="form-group"> <textarea class="form-control" rows="15" v-</textarea> </div> <button class="btn btn-primary pull-right" @Save</button> <p> Users editing this note: <span class="badge">{{ usersEditing.length }}</span> <span class="label label-success" v-</span> </p> </div> </div> </template> <script> export default { props: [ 'note', ], data() { return { title: this.note.title, body: this.note.body, usersEditing: [], status: '' } }, mounted() { Echo.join(`note.${this.note.slug}`) .here(users => { this.usersEditing = users; }) .joining(user => { this.usersEditing.push(user); }) .leaving(user => { this.usersEditing = this.usersEditing.filter(u => u != user); }) .listenForWhisper('editing', (e) => { this.title = e.title; this.body = e.body; }) .listenForWhisper('saved', (e) => { this.status = e.status; // clear is status after 1s setTimeout(() => { this.status = ''; }, 1000); }); }, methods: { editingNote() { let channel = Echo.join(`note.${this.note.slug}`); // show changes after 1s setTimeout(() => { channel.whisper('editing', { title: this.title, body: this.body }); }, 1000); }, updateNote() { let note = { title: this.title, body: this.body }; // persist to database axios.patch(`/edit/${this.note.slug}`, note) .then(response => { // show saved status this.status = response.data; // clear is status after 1s setTimeout(() => { this.status = ''; }, 1000); // show saved status to others Echo.join(`note.${this.note.slug}`) .whisper('saved', { status: response.data }); }); } } } </script> Let’s explain each piece of the code. Just like we have in the ‘create new note’ form, the template section has two input fields: title and body. Each field is bound to data (title and body respectively). Once a user starts typing (that is, a keydown event) in any of the input fields, the editingNote method will be triggered. Also, when the save button is clicked, the updateNote method will be triggered. (We’ll take a close look at these methods soon) Lastly on the template section, we display the number of users who are currently editing the specified note and also display a status message once the save button is clicked. Moving to the script section of the component, first we define a property for the component called note. This note property will be the note that is currently being edited. Recall from the edit view where we used the EditNote component, you will notice we passed the whole note object as the component’s note property. Next we define some data, the title and the body data are bound to respective input fields, the usersEditing will be an array of users editing the note and status will serve as an indicator for when a note’s edits have been persisted to the database. The mount method will be triggered immediately the component is mounted, so it’s a nice place to subscribe and listen to a channel. In our case, because we to be able to keep track of users editing a note, we’ll make use of Pusher’s presence channel. Using Laravel Echo, we can subscribe to a presence channel using Echo.join('channel-name'). As you can see our channel name is note.note-slug. Once we subscribe to a presence channel, we can get all the users that are subscribed to the channel with the here method where we simply assign the subscribed users to the usersEditing array. When a user joins the channel, we simply add that user to the usersEditing array. Similarly, when a user leaves the channel, we remove that user from the usersEditing array. To display edits in realtime to other users, we listen for client events that are triggered as a user types using listenForWhisper and update the form data accordingly. In the same vein, we listen for when edits are saved and display the “Saved!” status to other users, then after a second we clear the status message. Next, we define the methods we talked about earlier. The editingNote method simply triggers a client event to all users currently subscribed to the channel after a specified time (1 second). The updateNote method on the other hand sends a PATCH request with the edits made to persist the edits to the database. Once the request is successful, we display the message saved status to the user that made the save and clear the status message after 1 second. Lastly, we trigger a client event so other users can also see the message saved status. Since we created a presence channel, only authenticated users will be able to subscribe and listen on the note channel. So, we need a way to authorize that the currently authenticated user can actually subscribe and listen on the channel. This can be done in the routes/channels.php file: // routes/channels.php Broadcast::channel('note.{slug}', function ($user, $slug) { return [ 'id' => $user->id, 'name' => $user->name ]; }); We pass to the channel(), the name of our channel and a callback function that will return the details of the user if the current user is authenticated. Now let’s register our new created component with our Vue instance, open resources/assets/js/app.js and add the line below just before Vue instantiation: // resources/assets/js/app.js Vue.component('edit-note', require('./components/EditNote.vue')); Before testing out our online collaborative note app, we need to compile the JavaScript files using Laravel Mix using: npm run dev Now we can start our note app by running: php artisan serve Conclusion We have seen how to build a simple online collaborative note app using Laravel and Pusher. Sure there are other ways of accomplishing what we did in this tutorial, but we have seen how to build a collaborative app with Pusher’s realtime features. Also you will notice our note app doesn’t account for cases like concurrent editing of notes; to achieve that you’d want to look into Operational Transformation. April 5, 2017 by Chimezie Enyinnaya
https://pusher.com/tutorials/collaborative-note-app-laravel
CC-MAIN-2021-25
refinedweb
2,687
54.93
Xcode calendar access error - TheRealBret I am trying to add an appointment to the native iOS calendar. I have the following code which I took from this forum last year. from objc_util import * from ctypes import POINTER import threading import calendar, time store = ObjCClass('EKEventStore').alloc().init()WithTimeIntervalSince1970_(time.mktime(time.strptime('2017-06-18 12:00:00', '%Y-%m-%d %H:%M:%S'))) event.endDate = ObjCClass('NSDate').dateWithTimeIntervalSince1970_(time.mktime(time.strptime('2017-06-18 13:00:00', '%Y-%m-%d %H:%M:%S'))) event.setCalendar_(store.defaultCalendarForNewEvents()) LP_c_void_p = POINTER(c_void_p) err = LP_c_void_p() store.saveEvent_span_error_(event, 0, err) The code works just fine in Pythonista. A new calendar event is created, after optionally asking for permission to the calendar. I've run this code in both Pythonista 2.7 and 3. When I copy/paste this exact code into my Xcode project, it crashes when trying to set a value to the event.title parameter - I inserted console alerts between each line and it doesn't appear after this line. I'm not an expert in the objective C interface, so I'm perplexed why it works in the app but not in Xcode. Any help is appreciated. Thanks! You need to change the Info.plistfile to indicate how your app is using calendar data (the text that's shown in the permission dialog). From Apple's documentation: An iOS app linked on or after iOS 10.0 must include in its Info.plist file the usage description keys for the types of data it needs to access or it will crash. To access Reminders and Calendar data specifically, it must include NSRemindersUsageDescription and NSCalendarsUsageDescription, respectively. - TheRealBret I already have the calendar and reminder descriptions set to "${PRODUCT_NAME} needs X access"), and I am able to add a reminder, using the built-in Pythonista reminders library. It also passes the initial code which forces a check. I've confirmed in the settings and I do have the access to calendar in the 'on' state.
https://forum.omz-software.com/topic/4142/xcode-calendar-access-error
CC-MAIN-2021-10
refinedweb
335
50.53
Issues img "scale" option is broken for HTML output BUG: When trying to scale HTML images, The _images outdir is *NOT* populated at the time BaseTranslator.visit_image() is called. HTML image scaling used to work about a year ago. Looking at the repository, I don't see what broke things. Here's the details, in Sphinx-1.1.2-py2.7.egg/sphinx/writers/html.py: def visit_image(self, node): olduri = node['uri'] # rewrite the URI if the environment knows about it if olduri in self.builder.images: node['uri'] = posixpath.join(self.builder.imgpath, self.builder.images[olduri]) ... if node.has_key('scale'): if Image and not (node.has_key('width') and node.has_key('height')): try: im = Image.open(os.path.join(self.builder.srcdir, olduri)) except (IOError, # Source image can't be found or opened UnicodeError): # PIL doesn't like Unicode paths. pass else: if not node.has_key('width'): node['width'] = str(im.size[0]) if not node.has_key('height'): node['height'] = str(im.size[1]) del im BaseTranslator.visit_image(self, node) Just before the BaseTranslator.visit_image() call, using the debugger we see: >>> self.builder.srcdir 'C:\\Sphinx' >>>self.builder.outdir 'C:\\Sphinx\\_build\\html' >>> olduri u'images/dependency_walker.png' >>> node['uri'] u'_images/dependency_walker.png' >>> os.getcwd() 'C:\\Sphinx' And in docutils-0.8.1-py2.7.egg/docutils/writers/html4css1/init.py: def visit_image(self, node): atts = {} uri = node['uri'] ... if 'scale' in node: if Image and not ('width' in node and 'height' in node): try: im = Image.open(str(uri)) except (IOError, # Source image can't be found or opened UnicodeError): # PIL doesn't like Unicode paths. pass else: if 'width' not in atts: atts['width'] = str(im.size[0]) if 'height' not in atts: atts['height'] = str(im.size[1]) del im for att_name in 'width', 'height': if att_name in atts: match = re.match(r'([0-9.]+)(\S*)$', atts[att_name]) assert match atts[att_name] = '%s%s' % ( float(match.group(1)) * (float(node['scale']) / 100), match.group(2)) When this statement happens: im = Image.open(str(uri)) the debugger shows; >>> uri u'_images/dependency_walker.png' >>> os.getcwd() 'C:\\Sphinx' Which is what the correct uri WILL be, but if you look at 'C:\Sphinx\_build\html' at this point you see .html files and a _sources directory but NO _images directory. So, the Image.open() call fails, and the scaling doesn't occur. Since Sphinx can't change the docutils source, the solution is the _images outdir has to be populated before calling BaseTranslator.visit_image(). But I don't know how to do that? The workaround is to use the :height: or :width: img options instead of :scale:, since using those options doesn't require opening the image first. But the code in Sphinx' visit_image is exactly there for this reason: it figures out height and width on its own (with the correct file name) so that docutils doesn't have to. So this cannot be the reason why scaling doesn't work for you (btw, it does for me here.) Can you make a reproducible test case? Ah, the problem is the following: When I look at this in the debugger, Image is None. It looks like when easy-install is used to install PIL then the following doesn't work: but does work. So a fix is to change Sphinx-1.1.2-py2.7.egg/sphinx/writers/html.py: to: which is what docutils-0.8.1-py2.7.egg/docutils/writers/html4css1/init.py does. Then the :scale: option works as expected. As noted elsewhere (and confirmed by looking at Sphinx-1.1.3-py2.7.egg\EGG-INFO\requires.txt), easy-install doesn't automatically add PIL when it installs Sphinx (and neither does docutils). Maybe this is a good thing but then the documentation should say near the discussion of the image directive that you need to manually install PIL if you want the :scale: option to work. This is a slightly subtle bug since images seem to work fine until you try the :scale: option. I ran into the following error trying to build the scikit-learn sphinx docu. How can I check what the origin of the issue is? There are :scale:commands in the rst source, but I'm not sure if this is the problem. Is it covered by this ticket or should I file a separate docutils or sphinx ticket? Ran into this building scipy-lecture-notes. I confirm that the fix from Anonymous works for me. If you install PIL with easy_install, then from PIL import Image doesn't work, but import Image does. See for why. However, the right fix is not to replace one import with the other, but support both. Because which one works is system dependent. More robust fix: try: from PIL import Image # check for the Python Imaging Library except ImportError: try: import Image # other way of importing PIL except ImportError: Image = None Christoph, this fixes the scikit-learn doc for me. Apologies for the messed up formatting. hmm, i added the import Image thing, i alos added the workaround from issue #1004but i am still getting the same traceback as the one from the comment above. I just got bit by this too. :scale: doesn't work. I scratched my head for a while trying to get it to work before finding this bug report. I ended up just using :width: This issue is tangled enough. ticket #1004 no attribute 'file_insertion_enabled'error has been fixed and has been released in Sphinx-1.2. ticket #920 can't import PILerror has been fixed and has been released in Sphinx-1.2. Unfortunately, Sphinx-1.1.3 might have this issue and we don't have a release plan for Sphinx-1.1.4. Please use Sphinx-1.2 or later and newest Pillow the successor of PIL. If you still have a issue or you need some help, please create another ticket with your using versions and environment information. Many thanks for your reporting and your kind cooperation!
https://bitbucket.org/birkenfeld/sphinx/issue/883/img-scale-option-is-broken-for-html-output
CC-MAIN-2014-23
refinedweb
1,008
68.87
On Sun, Jun 27, 2010 at 8:25 PM, Nick Coghlan <ncoghlan at gmail.com> wrote: > The availability of "nonlocal" binding semantics also makes the > semantics much easier to define than they were in those previous > discussions (the lack of clear semantics for name binding statements > with an attached local namespace was the major factor blocking > creation of a reference implementation for this proposal back then). > > For example: > > c = sqrt(a*a + b*b) where: > a = retrieve_a() > b = retrieve_b() > > could translate to something like: > > def _anon(): # *(see below) > nonlocal c > a = retrieve_a() > b = retrieve_b() > c = sqrt(a*a + b*b) > _anon() > > *(unlike Python code, the compiler can make truly anonymous functions > by storing them solely on the VM stack. It already does this when > executing class definitions): I like this idea, but I would tweak it slightly. Maybe we should say EXPRESSION where: BLOCK is equivalent to def _(): BLOCK return EXPRESSION _() That way, c = a where: a = 7 would be equivalent to def _(): a = 7 return a c = _() One advantage of this equivalence is it would make it easier to work around a longstanding scoping gotcha. A naïve coder might expect this code to print out numbers 0 to 4: >>> fs = [] >>> for n in range(5): ... def f(): ... print(item) ... fs.append(f) ... >>> [f() for f in fs] 4 4 4 4 4 [None, None, None, None, None] I think we all have enough experience to know this isn’t a totally unrealistic scenario. I personally stumbled into when I was trying to create a class by looping through a set of method names. To get around it, one could use a where clause like so: fs = [] for n in range(5): fs.append(f) where: shadow = n def f(): print(shadow) This would print out 0 to 4 as expected and be equivalent to >>> fs = [] >>> for n in range(5): ... def _(): ... shadow = n ... def f(): ... print(shadow) ... fs.append(f) ... _() ... >>> [f() for f in fs] 0 1 2 3 4 [None, None, None, None, None] I think a where-clause with def-like namespace semantics would be a positive addition to Python, once the moratorium is up. -- Carl Johnson
https://mail.python.org/pipermail/python-ideas/2010-July/007566.html
CC-MAIN-2021-49
refinedweb
366
56.73
They're relatively easy to manage. The __init__.py file (to make a module into a package) is very elegant. And stuff can be put into the __init__.py file to create a kind of top-level or header module in a larger package of modules. To a limit. It took hours, but I found the edge of the envelope. The hard way. We have a package with about 10 distinct Django apps. Each Django app is -- itself -- a package. Nothing surprising or difficult here. At first, just one of those apps used a couple of fancy security-related functions to assure that only certain people could see certain things in the view. It turns out that merely being logged in (and a member of the right group) isn't enough. We have some additional context choices that you must make. The view functions wind up with a structure that looks like this. @login_required def someView( request, object_id, context_from_URL ): no_good = check_other_context( context_from_URL ) if no_good is not None: return no_good still_no_good = check_session() if still_no_good is not None: return still_no_good # you get the idea At first, just one app had this feature. Then, it grew. Now several apps need to use check_session and check_other_context. Where to Put The Common Code? So, now we have the standard architectural problem of refactoring upwards. We need to move these functions somewhere accessible. It's above the original app, and into the package of apps. The dumb, obvious choice is the package-level __init__.py file. Why this is dumb isn't obvious -- at first. This file is implicitly imported. Doesn't seem like a bad thing. With one exception. The settings. If the settings file is in a package, and the package-level __init__.py file has any Django stuff in it -- any at all -- that stuff will be imported before your settings have finished being imported. Settings are loaded lazily -- as late as possible. However, in the process of loading settings, there are defaults, and Django may have to use those defaults in order to finish the import of your settings. This leads to the weird situation that Django is clearly ignoring fundamental things like DATABASE_ENGINE and similar settings. You get the dummy database engine, Yet, a basic from django.conf import settings; print settings.DATABASE_ENGINE shows that you should have your expected database. Moral Of the Story Nothing with any Django imports can go into the package-level __init__.py files that may get brought in while importing settings.
http://slott-softwarearchitect.blogspot.com/2009/10/painful-python-import-lessons.html
CC-MAIN-2018-26
refinedweb
416
68.67
>Number: 2113 >Category: config >Synopsis: HTTP Server Rebuild Line Needs Changing for the better >Confidential: no >Severity: non-critical >Priority: medium >Responsible: apache >State: open >Class: change-request >Submitter-Id: apache >Arrival-Date: Wed Apr 22 06:10:01 PDT 1998 >Last-Modified: >Originator: mdpc@netcom.com >Organization: apache >Release: 1.3b6 >Environment: UNIX >Description: When you perform a recompilation, the current buildmark stuff inserts only the C compiler date and time. One problem with this is that this is not the "standard" UNIX date/time format....The other problem is that one is unable to more easily assign a number to the build like most UNIX system or software generations. There should be a counter that increments each time a httpd is rebuilt. The attached set of context diffs into the .../src/ area addresses the above problems. Basically a simple /bin/sh program is called creating an include file containing the string. A few minor changes are required to do this (see the patch below). The string for the server built date looks like Tue Apr 21 08:48:41 PDT 1998 - Build #14 Which is nicer. >How-To-Repeat: All the time.... >Fix: *** Makefile.tmpl.dist Mon Apr 20 17:27:36 1998 --- Makefile.tmpl Mon Apr 20 17:34:30 1998 *************** *** 26,31 **** --- 26,32 ---- $(TARGET): $(SUBTARGET) target_static: subdirs modules.o + /bin/sh buildmark.sh $(CC) -c $(INCLUDES) $(CFLAGS) $(SPACER) buildmark.c $(CC) $(CFLAGS) $(LDFLAGS) $(LDFLAGS_SHLIB_EXPORT) \ -o $(TARGET) buildmark.o $(OBJS) $(REGLIB) $(LIBS) *** buildmark.c.dist Mon Apr 20 17:22:17 1998 --- buildmark.c Mon Apr 20 17:25:06 1998 *************** *** 57,68 **** #include "conf.h" #include "httpd.h" - #if defined(__DATE__) && defined(__TIME__) - static const char server_built[] = __DATE__ " " __TIME__; - #else - static const char server_built[] = "unknown"; - #endif static const char server_version[] = SERVER_VERSION; API_EXPORT(const char *) ap_get_server_built() --- 57,64 ---- #include "conf.h" #include "httpd.h" + #include "buildmark.h" static const char server_version[] = SERVER_VERSION; API_EXPORT(const char *) ap_get_server_built() *** buildmark.count.dist Wed Apr 22 05:54:40 1998 --- buildmark.count Tue Apr 21 16:21:36 1998 *************** *** 0 **** --- 1 ---- + 1 *** buildmark.sh.dist Wed Apr 22 05:55:15 1998 --- buildmark.sh Mon Apr 20 17:24:31 1998 *************** *** 0 **** --- 1,9 ---- + #! /bin/sh + set -uh + datx=`date` + versx=`cat buildmark.count` + versx=`expr $versx + 1` + echo .....Making Build Number $versx 1>&2 + echo $versx >buildmark.count + echo 'const char server_built[] = "'"$datx"' - Build #'"$versx"'";' >buildmark.h + exit 0 . ]
http://mail-archives.apache.org/mod_mbox/www-apache-bugdb/199804.mbox/%3C19980422130932.9807.qmail@hyperreal.org%3E
CC-MAIN-2018-09
refinedweb
400
61.73
i18n-key and local-storage hard-coded in xenapi Bug Description In order to install/spawn instances, the XenAPI plugin looks for a specific SR on a XenServer/XCP host. More precisely, the plugin assumes the presence of a local SR, as per default XenServer install; this SR has the "other-config" parameter set to "i18n-key= If we look at the code (available in /virt/xenapi/ @classmethod def find_sr(cls, session): """Return the storage repository to hold VM images""" host = session. for sr_ref, sr_rec in cls.get_ if not ('i18n-key' in sr_rec[ for pbd_ref in sr_rec['PBDs']: if pbd_rec and pbd_rec['host'] == host: return None Many people stumbled upon this, recently and in the past (eg. https:/ Reviewed: https:/ Committed: http:// Submitter: Jenkins Branch: master commit c56d677a7313b8b Author: Armando Migliaccio <email address hidden> Date: Wed Jan 25 00:35:34 2012 +0000 bug 921087: i18n-key and local-storage hard-coded in xenapi This fix introduces a new flag 'sr_matching_ 'other- SR on which to install guest instances. The default value is the Local Storage in default XenServer/XCP installations, and it is what was hard-coded so far. To select an SR with a different matching criteria, this flag can be set to 'other- the Default SR, as displayed by XenCenter and as returned by xenapi. This changeset also makes a small code simplification along the way. Change-Id: Ia5ee438389c59a Fix proposed to branch: master /review. openstack. org/3380 Review: https:/
https://bugs.launchpad.net/nova/+bug/921087
CC-MAIN-2016-26
refinedweb
244
57.4
ScottGu just mentioned it: With the release of Visual Studio 2008 Microsoft is going to release the source for the major namespaces of the .net Framework. This is a huge annoucement for the .net community, it will make developing so much easier. I don't know how many times I had to open refactor to checkout the source for certain namespaces. Now if we could just update the source :) I'm with Frans Bouma on this one - weblogs.asp.net/.../don-t-look-at-the-sourcecode-of-net-licensed-under-the-reference-license.aspx. I am not sure about that. I think its still a huge benefit for the community to have the source - even if are not able to modify it. But who knows maybe one day MS will allow us to do so. I would have never thought that MS would release the source for the .net framework and now its here. Link to us All material is copyrighted by its respective authors. Site design and layout is copyrighted by DotNetSlackers. Advertising Software by Ban Man Pro
http://dotnetslackers.com/Community/blogs/sonukapoor/archive/2007/10/03/microsoft-releases-the-source-for-the-net-library.aspx
CC-MAIN-2014-15
refinedweb
179
75.81
This appendix describes the overall structure of cvs commands, and describes some commands in detail (others are described elsewhere; for a quick reference to cvs commands, see node Invoking CVS\(a\(aq in the CVS manual). For example the following line in .cvsrc causes cvs to use compression level 6. The available cvs_options (that are given to the left of cvs_command) are:, rdiff, rtag, and update commands. (The history command uses this option in a slightly different way; see node history options\(aq in the CVS manual). A wide variety of date formats are supported by cvs. The most standard ones are ISO8601 (from the International Standards Organization) and the Internet RFC1123). ISO8601 dates have many variants but a few examples are: There are a lot more ISO8601 date formats, and cvs accepts many of them, but you probably don\(aqt\(aqt interpret spaces as argument separators. A command using the -D flag can look like this:\(aq in the CVS manual, and see node Removing files\(aq in the CVS manual. The -k option is available with the add, update commands. Available with the following commands: annotate, checkout, commit, diff, edit, editors, export, log, rdiff, remove, rtag, status, tag, unedit, update, watch, and watchers. Available with the following commands: add, commit and import. This is not the same as the cvs -n program option, which you can specify to the left of a cvs command! Available with the checkout, export, and rtag commands. Available with the following commands: annotate, checkout, commit, diff, edit, editors, export, rdiff, remove,\(aq in the CVS manual). The tag can be either a symbolic or numeric tag, as described in see node Tags\(aq in the CVS manual, or the name of a branch, as described in see node Branching and merging\(aq in the CVS manual., checkout, commit, diff, history, export, rdiff, rtag, and update commands.). Note that this command can be quite dangerous unless you know exactly what you are doing (for example see the warnings below about how the rev1:rev2 syntax is confusing). If you are short on disc this option might help you. But think twice before using itthere\(aqt very useful, in the future it may change to be like the :: case. Due to the way cvs handles branches rev cannot be specified symbolically if it is a branch. see node Magic branch numbers\(aq\(a\(aq\(aqt tell you anything about lines which have been deleted or replaced; you need to use cvs diff for that (see node diff\(aq in the CVS manual). The options to cvs annotate are listed in see node Invoking CVS\(aq in the CVS manual, and can be used to select the files and revisions to annotate. The options are described in more detail there and in see node Common options\(aq in the CVS manual. node modules\(aq checkout are created read-write, unless the -r option to cvs (see node Global options\(aq in the CVS manual) is specified, the CVSREAD environment variable is specified (see node Environment variables\(aq in the CVS manual), or a watch is in effect for that file (see node Watches\(aq forget to change your directory to the top level directory. For the output produced by the checkout command see see node update output\(aq in the CVS manual. These standard options are supported by checkout (see node Common options\(aq\(aqt contain empty intermediate directories. In this case only, cvs tries to ``shorten\(aq\(aq node Branching and merging\(aq in the CVS manual. Get a copy of the module tc: Get a copy of the module tc as it looked one day ago: Use commit when you want to incorporate changes from your working source files into the source repository. If you don\(aqt node update\(aq\(aq in the CVS manual, and see node loginfo\(aq in the CVS manual) and placed in the rcs file inside the repository. This log message can be retrieved with the log command; see see node log\(aq in the CVS manual. You can specify the log message on the command line with the -m message option, and thus avoid the editor invocation, or use the -F file option to specify that the argument file contains the log message. These standard options are supported by commit (see node Common options\(aq in the CVS manual, for a complete description of them): commit also supports these options: Force cvs to commit a new revision even if you haven\(aqt. You can commit to a branch revision (one that has an even number of dots) with the -r option. To create a branch revision, use the -b option of the rtag or tag commands (see node Branching and merging\(aq in the CVS manual). Then, either checkout or\(aq in the CVS manual. These standard options are supported by diff (see node Common options\(aq in the CVS manual, for a complete description of them):\(aq\(aq style. To specify a line group format, use. In a line format, ordinary characters represent themselves; conversion specifications start with % and have one of the following forms.\(aqs normal format. You can tailor this command to get fine control over diff\(aqs\(aqt handle an export containing binary files correctly. Also be aware that after having used -kv, one can no longer use the ident command (which is part of the rcs suitesee ident(1)) which looks for keyword strings. If you want to be able to use ident you must not use -kv. These standard options are supported by export (see node Common options\(aq. history uses -f, -l, -n, and -p in ways that conflict with the normal use inside cvs (see node Common options\(aq\(aq in the CVS manual,\(aq in the CVS manual), it does not import it and prints I followed by the filename (see node import output\(aq\(aq node Getting the source\(aq in the CVS manual). This standard option is supported by import (see node Common options\(aq in the CVS manual, for a complete description): There are the following additional special options. name can be a file name pattern of the same type that you can specify in the .cvsignore file. see node cvsignore\(aq in the CVS manual. spec can be a file name pattern of the same type that you can specify in the .cvswrappers file. see node Wrappers\(aq in the CVS manual. import keeps you informed of its progress by printing a line for each file, preceded by one character indicating the status of the file: See see node Tracking sources\(aq in the CVS manual, and see node From files\(aq in the CVS manual. node Common options\(aq in the CVS manual).. \(aqt node Common options\(aq in the CVS manual, for a complete description of them): In addition to the above, these options are available:\(aqt lock files, it isn\(aqt strictly necessary to use this command. You can always simply delete your working directory, if you like; but you risk losing changes you may have forgotten, and you leave no trace in the cvs history file (see node history file\(aq in the CVS manual) that you\(aqve abandoned your: WARNING: The release command deletes all directories and files recursively. This has the very serious side-effect that any directory that you have created inside your checked-out sources, and not added to the repository (using the add command; see node Adding files\(aq in the CVS manual) will be silently deletedeven if it is non-empty! Before release releases your sources it will print a one-line message for any file that is not up-to-date. Release the tc directory, and delete your local working copy of the files. After you\(aqve node Common options\(aq\(aq in the CVS manual, for more. update and checkout keep you informed of their progress by printing a line for each file, preceded by one character indicating the status of the file: M can indicate one of two states for a file you\(aq). Advertisements
http://www.tutorialspoint.com/unix_commands/cvs.htm
CC-MAIN-2014-35
refinedweb
1,353
57.91
Add VIF net-id in virtual interfaces list API Response¶ There is difference in virtual interfaces API response between v2 and v2.1. VIF net_id is not included in v2.1 response. This spec proposes to add VIF net_id as microversion in v2.1 API. Problem description¶ V2 API has extension for virtual interface ‘OS-EXT-VIF-NET’ which adds OS-EXT-VIF-NET:net_id” in virtual interfaces list response. But during porting the v2 extensions to v2.1, this extension was missed. Because of this there is difference between v2 and v2.1 response of virtual interface API. v2 List virtual interface Response (with all extension enable): { "virtual_interfaces": [ { "id": "%(id)s", "mac_address": "%(mac_addr)s", "OS-EXT-VIF-NET:net_id": "%(id)s" } ] } v2.1 List virtual interface Response: { "virtual_interfaces": [ { "id": "%(id)s", "mac_address": "%(mac_addr)s" } ] } Attribute “OS-EXT-VIF-NET:net_id” is missing in v2.1. Users who need VIFs’ net-id, would not be able to get it from v2.1. This is bug 1 in v2.1 base API but cannot be fixed as bug because v2.1 is released in kilo and as per API contract its too late to fix this as bug in v2.1 base API. Another problem is that v2.1 extension-list also returns ‘OS-EXT-VIF-NET’ extension, which gives false message to users that this extension is also loaded in v2.1 which is actually not true due to problem described above. Removal of this extension from v2.1 extension list should be done in v2.1 base API and back-ported to stable kilo branch as proposed in 2. Use Cases¶ User who need VIFs’ net-id information and getting the same from v2 APIs, should be able to get from v2.1 API also. By adding this information, users can determine in which network a vif is plugged into. Proposed change¶ This spec propose to fix this bug as microverion by adding VIF net-id information in virtual interfaces list Response. v2.1 List virtual interface Response: Current: { "virtual_interfaces": [ { "id": "%(id)s", "mac_address": "%(mac_addr)s" } ] } After: { "virtual_interfaces": [ { "id": "%(id)s", "mac_address": "%(mac_addr)s", "net_id": "%(id)s" } ] } Attribute “net_id” will be added in Response. NOTE- Attribute name “OS-EXT-VIF-NET:net_id” (in v2) has been changed to “net_id”. Because this attribute is being added as microversion and as per guidlines 3, we should not add namespace to new attribute name unlike v2 where it was added as extension. Alternatives¶ As alternate we can fix this as bug in v2.1 base without microversion so that v2.1 will be exactly same as v2. But that breaks API contract as v2.1 is already released. REST API impact¶ New attribute VIF net-id will be added as microversion. Specification for the method Description API Virtual Interface List Method type GET Normal http response code 200, no change in response code Expected error http response code(s) No change in error codes URL for the resource ‘servers/<server_uuid>/os-virtual-interfaces’ JSON schema definition for the body data if allowed A request body is not allowed. JSON schema definition for the response data if any { 'status_code': [200], 'response_body': { 'type': 'object', 'properties': { 'virtual_interfaces': { 'type': 'array', 'items': { 'type': 'object', 'properties': { 'id': {'type': 'string'}, 'mac_address': {'type': 'string'}, 'net_id': {'type': 'string'} } 'required': ['id', 'mac_address', 'net_id'] } } } 'required': ['virtual_interfaces'] } } Other end user impact¶ python-novaclient needs to be updated in order to show VIF ‘net_id’ in corresponding command for v2.1 + microversion. Implementation¶ Testing¶ Currently Nova functional test will cover these changes testing. After discussion of micro version testing in Tempest, these changes can be tested accordingly.
https://specs.openstack.org/openstack/nova-specs/specs/liberty/approved/add-vif-net-id-in-vif-list.html
CC-MAIN-2021-17
refinedweb
601
57.16
is it possible to use an if statement using sleep e.g if Thread.Sleep > 300000 if not how can i overcome this Printable View is it possible to use an if statement using sleep e.g if Thread.Sleep > 300000 if not how can i overcome this No it is not possible to do that. But I can see what your trying to do and you can get the desired result like this: Code : public class Dave { /** * JavaProgrammingForums.com */ public static void main(String[] args) throws Exception { int sleepTime = 3500; //Thread.sleep(sleepTime); if(sleepTime > 3000){ // Do something } } } I think he wants to know how long a certain thread has been sleeping for. You'll at least need 2 static/object variables I think. One to hold the Thread and another to hold the time sleep started. The Thread variable isn't entirely necessary, but is if you want to do something with that Thread (like wake it). Code : import java.util.Calendar; public class Dave implements Runnable { Thread running; long start; public static void main (String[] args) throws InterruptedException { new Dave().myInit(); } public void myInit () throws InterruptedException { // run a thread to be interrupted running = new Thread(this); running.start(); // wait for thread to go into timed waiting while (running.getState() != Thread.State.TIMED_WAITING) { } // a little delay try { Thread.sleep(1000); } catch (InterruptedException e) { } System.out.println("interrupting another thread"); running.interrupt(); // run a thread that will not be interrupted new Thread(this).start(); } public void run () { start = Calendar.getInstance().getTimeInMillis(); try { System.out.println("Sleeping"); Thread.sleep(3000); System.out.println("I was not interrupted"); System.out.println("I sleept for " + (Calendar.getInstance().getTimeInMillis() - start)); } catch (InterruptedException e) { System.out.println("I was interrupted!"); System.out.println("I sleept for " + (Calendar.getInstance().getTimeInMillis() - start)); } } }
http://www.javaprogrammingforums.com/%20loops-control-statements/851-if-statement-printingthethread.html
CC-MAIN-2016-30
refinedweb
296
53.47
metadata tracked by the Sencha Cmd compiler has a variety of uses, some of which we will examine in this guide. To support these uses, the compiler can export and format this metadata in several different ways, which we will cover here as well. The following guides are recommended reading before proceeding further: meta One of the major new dimensions provided by the compiler is its ability to export metadata in various formats. This feature is used to produce the ext.js "bootstrap" file that contains various classes and a block of metadata about all of the files in the framework. There are several forms of metadata that the compiler can export using the meta command: Note. This process is handled automatically for applications generated by Sencha Cmd. If you are not using Sencha Cmd generated application, this section describes how to achieve the same results manually. The primary use for the meta command is to create your own "bootstrap" file. This file gives the framework the same level of awareness of your application code that it has of the framework code itself. The simplest way to manage your bootstrap file is to store it alongside your markup file. If that won't work for you, read on to see how to manage relative paths. If you have your markup file in a source folder in your classpath, you need to tell the compiler to ignore the bootstrap file. Do this using the -ignore switch: sencha compile -classpath=sdk/src,app -ignore bootstrap.js \ ... requires The end of "ext-debug.js" contains these two function calls: Ext.ClassManager.addNameAlternateMappings({ "Ext.draw.engine.ImageExporter": [], "Ext.layout.component.Auto": [], ... }); Ext.ClassManager.addNameAliasMappings({ "Ext.draw.engine.ImageExporter": [], "Ext.layout.component.Auto": [ "layout.autocomponent" ], ... }); It is the presence of these two pieces of metadata that allow wildcards to be used in requires statements. That is: Ext.define('MyApp.App', { requires: [ 'Ext.grid.*' ], ... }); All that is required to use wildcards in your own code is to provide the same bootstrap data for your app. This command will produce a file that does just that: sencha compile -classpath=app \ meta -alias -out bootstrap.js and \ meta -alt -append -out bootstrap.js The above command line tells the compiler to read in the source in the app folder and generate two pieces of metadata. The second piece of metadata is written to the same output file as the first, but using the -append option to append to the file and not replace it. Once you have the "bootstrap.js" file, change your page like so to add it to the x-bootstrap section: <html> <head> <!-- <x-compile> --> <!-- <x-bootstrap> --> <script src="../sdk/ext-dev.js" type="text/javascript"></script> <script src="bootstrap.js" type="text/javascript"></script> <!-- </x-bootstrap> --> <script src="app/app.js" type="text/javascript"></script> <!-- </x-compile> --> </head> <body></body> </html> The "bootstrap.js" file needs to be regenerated if you do any of the following: This rebuild of the bootstrap data can be handled in a variety of ways, but the fundamental question is whether to keep these files in source control or require developers to generate them locally. Both approaches work and can be automated to some degree. Note. For applications generated by Sencha Cmd, this is handled as part of the build process of sencha app build. Alternatively, refreshing just the bootstrap instead of performing a full build is accomplished by the sencha app refresh command. In large applications it can be helpful to organize your namespace using multiple source trees. In fact, Ext JS itself uses three source trees. This approach, however, has always presented problems for the dynamic loader requiring loader paths to be configured by hand to work around the issue. The compiler, however, has complete knowledge of class-to-file relationships given all of the source in the classpath. And the meta command can export that data for use in your application. If you are already sold on the above to create a "bootstrap.js", this data can be added by adding one more meta command (of course, the classpath will contain multiple folders in this case): sencha compile -classpath=src1,src2,src3 \ meta -alias -out bootstrap.js and \ meta -alt -append -out bootstrap.js and \ meta -loader -append -out bootstrap.js Now the "bootstrap.js" file solves both problems. With this approach, the following things will also require you to rebuild "bootstrap.js": Note. This part is also handled automatically for applications generated by Sencha Cmd. -base-path For many good reasons, paths need to be relative. Whenever you deal with relative paths, however, you need to solve the problem of where those relative paths are based. In the above examples we cheated a bit and placed the "bootstrap.js" file next to the markup file. This leverages the fact that the meta command defaults the base folder to the location of the output file. When this is not the case, you need to tell the meta command the base for determining relative paths. Let's say we want to move the "bootstrap.js" file in to the "build" folder (perhaps because we are not keeping it in source control). Since our page is in the current folder and our source is in the "app" folder, this will generate the proper relative paths: sencha compile -classpath=src1,src2,src3 \ meta -alias -out build/bootstrap.js and \ meta -alt -append -out build/bootstrap.js and \ meta -loader -append -base-path . -out build/bootstrap.js Since the -alias and -alt modes do not deal in paths, the -base-path option is only needed on the -loader use of the meta command. By default, the meta command exports metadata in JSONP format using a function call wrapper appropriate for the type of metadata requested. If a different function call is desired or you want the data in JSON format, you can request this in the meta command. In the example below, the aliases.json file will contain the alias data in JSON format. You cannot use -append in this case because JSON format requires a single, top-level object or array. sencha compile -classpath=src1,src2,src3 \ meta -alias -json -out aliases.json In this next example, we customize the JSONP wrapping by supplying the function to call: sencha compile -classpath=src1,src2,src3 \ meta -alias -jsonp Foo.bar.doAliases -out aliases.js This form can work with -append because it produces JavaScript code. The output of the above looks roughly like this: Foo.bar.doAliases( // ... the JSON data ... ); An occasionally useful form of metadata supported by the meta command is filename data. That is, the list of a files in the proper, dependency order. In many ways this is the same as the other meta data forms in that this data can be exported in JSON or JSONP format, and can be combined using -append. The first difference with -filenames is that the default format is text. To produce JSON or JSONP, you must specify one of the -json or -jsonp options. In the default mode of text, the filenames are written as lines of text, one filename per line. The following command will create "filenames.txt": sencha compile -classpath=src1,src2,src3 \ meta -filenames -out filenames.txt Each line of the file can be decorated using the -tpl option. Because of the special characters needed for this example, we use a response file to hold the template. We put this in "template.txt", like this: <script src="{0}" type="text/javascript"></script> Then run the following command. sencha compile -classpath=src1,src2,src3 \ meta -filenames -tpl @template.txt -out filenames.txt We now have a chunk of markup that will "script-tag in" all of the files in their correct order. For example: <script src="ext/src/ShadowPool.js" type="text/javascript"></script> <script src="ext/src/Shadow.js" type="text/javascript"></script> The compiler normally reads metadata such as classes, namespaces and dependencies by parsing source code. In situations where this is hidden, for example, when obfuscating a library, the compiler will be unaware of any defined classes or their dependencies. This form of metadata export can be used to provide the "symbols" for such libraries so that users can still compile their application using Sencha Cmd. sencha compile -classpath=src1,src2,src3 \ meta -definitions -out symbols.js The above creates a file that contains directives like this: // @define Foo.bar.Thing // @require Ext.panel.Panel // @uses Ext.layout.container.HBox These directives are recognized by the compiler and introduce the symbolic names needed for user code to compile. These symbols should be added to the obfuscated library file to ensure that the library code is concatenated in the right order.
https://docs.sencha.com/cmd/7.0.0/guides/advanced_cmd/cmd_metadata.html
CC-MAIN-2019-43
refinedweb
1,455
57.57
Displaying Inheritance Hierarchy in the Graph Window From RAD Studio Go Up to C++ Class Explorer Topics You can select several types in the Type List and view a graphical representation of their inheritance. The Graph window in the C++ Class Explorer can display the inheritance hierarchy of types regardless of whether the types are in the managed unit (paired .cpp/.h files) of your application. For example, you can select types from the VCL, and the Graph window displays their inheritance hierarchy. To display the inheritance hierarchy of types: - In C++Builder, open a C++ project that you want to examine. - Select View > C++ Class Explorer . - In the Type List, select several types whose inheritance you want to view. To select multiple noncontiguous types, use Ctrl+Click. - Click the Graph tab of the Source/References/Graph window. The inheritance hierarchy in the Graph window uses shapes as follows: - Elliptical shapes (ovals) enclose the symbols. - Rectangular shapes enclose groups (that is, symbols defined in the same file, such as .h or .hpp). Use the context menu commands on the Graph window to control the graphical representation as follows: - Arrows (controls direction of inheritance arrows: You can choose either Base to Derived or Derived to Base) - Group (controls whether source file groups are indicated: You can choose By File or None) - Orientation (You can choose from Top -> Bottom, Bottom -> Top, Left -> Right, or Right -> Left) - Ratio (controls the size of the graph in the window: You can choose either Window or Best Fit) - Zoom (enlarges or reduces the size of the graph: You can choose an increment between 25% and 400%) Screenshot: See C++ Class Explorer Overview. Tips - If you group the Type List by File ( ), Namespace ( ) or Custom Group ( ), you can select the node of the file, namespace or custom group that you want in the Type List. Then the Graph view displays the hierarchical relationship of the classes declared in the corresponding file, namespace or custom group. - The Type List allows you to multi-select only nodes of the same level, except when you have selected Group Types by Inheritance Hierarchy ( ) in the toolbar. In hierarchy mode, you can multi-select nodes at any level. This means that certain graphs are easier to generate in one grouping than another. For example, if you want to create a graph of just classes in your project, you would want to activate Custom grouping ( ). And if you want to create a graph of classes from two files or two namespaces, it is best to activate Group by Files ( ) and Group by Namespaces ( ), respectively. See Also - C++ Class Explorer Overview - C++ Class Explorer Window - Class Explorer (C++) options dialog - Viewing Members Declared in Classes and Interfaces with the C++ Class Explorer - - Saving View Settings on the C++ Class Explorer
http://docwiki.embarcadero.com/RADStudio/2010/en/Displaying_Inheritance_Hierarchy_in_the_Graph_Window
CC-MAIN-2013-20
refinedweb
466
56.59
python - i want to swap the first half and the second half of the array and put them in different arrays Replacement of # array of the program g → 0 not included g2 → 0 included unit Line 111 I want to insert the first 500 pieces of an array (iso0) with 1001 pieces after the second half 500 pieces. (Put the function set at -500 ^ -120500 ^ -12 in time as a periodic function in the array of 0500 ^ -121000 ^ -12) (Before conversion -500 ^ -12 ~ -1 ^ -12, 0, 1 ^ -12 ~ 500 ^ -12 (After conversion 0, 1 ^ -12 ~ 500 ^ -12, -500 ^ -12 ~ -1 ^ -12 Traceback (most recent call last): File "C: \ Users \ oreka \ python \ 2jigen3.py", line 115, in iso0g [i] = iso0 [i + 501] TypeError: can only concatenate tuple (not "int") to tuple What I triedWhat I tried import numpy as np import math import matplotlib.pyplot as plt #Constant declaration a = np.arange (-500.0e-12,600.0e-12,1.0e-12) # 500.0e-12 ~ 600.0e-12 time t (1100 elements) a2 = np.arange (1,1000,1) # 1 to 1000 integers (used to assign lines with 3 elements in one line of k) a3 = np.arange (-500.0e-12500.0e-12,1.0e-12) # -500.0e-12 to 500.0e-12 time t (1001 elements) a4 = np.arange (0,1000.0e-12,1.0e-12) #Time for graph t iso0g = np.arange (0,1001,1.0) # Insertion after sequence replacement isog = np.arange (0,1001,1.0) # Insertion after array replacement iso2g = np.arange (0,1001,1.0) # Insertion after sequence replacement g = np.arange (0,500,1.0) # 0 not included g2 = np.arange (501,1001,1.0) # 0 included k = np.eye (1001) #k base (identity matrix (1001 × 1001)) A = 100 B = 217.0e-28 z = 30.0e + 3 T = 18016836.0e-18 dz = 15000 Creating #p (t, z) and dp/dz p = pow (A * T, 2)/np.sqrt (pow (T, 4) + pow (B * z, 2)) * np.exp ((-1 * pow (T * a, 2))/(pow (T, 4) + pow (B * z, 2))) #p (a3 size) pa3 = pow (A * T, 2)/np.sqrt (pow (T, 4) + pow (B * z, 2)) * np.exp ((-1 * pow (T * a3, 2))/(pow (T, 4) + pow (B * z, 2))) # p1a3 (t + dz) p1a3 = pow (A * T, 2)/np.sqrt (pow (T, 4) + pow (B * (z + dz), 2)) * np.exp ((-1 * pow (T * a3,)) 2))/(pow (T, 4) + pow (B * (z + dz), 2))) # p2a3 (t-dz) p2a3 = pow (A * T, 2)/np.sqrt (pow (T, 4) + pow (B * (z --dz), 2)) * np.exp ((-1 * pow (T * a3,)) 2))/(pow (T, 4) + pow (B * (z --dz), 2))) dpdz = pow (A * T * B, 2) * z * ((2 * pow (T * a3,2))/(pow (T, 4) + pow (B * z, 2)) ―― 1) * pow (pow (T, 4) + pow (B * z, 2), -1.5) * np.exp (-1 * pow (T * a3,2)/(pow (T, 4) + pow (B * z, 2))) pp = (p1a3 --pa3)/dz # forward difference pp2 = (p1a3 --p2a3)/2/dz # center difference Transform the # dp/dz matrix into a vertical column dpdz = dpdz.reshape ([1001,1]) pp = pp.reshape ([1001,1]) pp2 = pp2.reshape ([1001,1]) Repeatedly assign p to rows 2 to 1000 of #k (identity matrix (1001 x 1001)) * Rows with 3 elements in one rowfor i in a2: k [i] [i-1] = p [i + 1] + p [i + 2] k [i] [i] =-(p [i + 1] + 2 * p [i + 2] + p [i + 3]) k [i] [i + 1] = p [i + 2] + p [i + 3] Substitute p in the 1st and 1001st lines of #k * A line with 2 elements in one line k [1000] [1000] =-(p [1001] + 2 * p [1002] + p [1003]) k [1000] [999] = p [1001] + p [1002] k [0] [0] =-(p [1] + 2 * p [2] + p [3]) k [0] [1] = p [2] + p [3] kin = np.linalg.inv (k) Inverse matrix kin of #k b = dpdz/dz/B # creation and b (dpdz) creation b2 = pp/dz/B # b2 (forward difference) b3 = pp2/dz/B # b3 (center difference) #For checking how many elements are contained in each array (shape) "" " print ("a") print (a.shape) print ("a2") print (a2.shape) print ("a3") print (a3.shape) print ("k") print (k.shape) print ("p") print (p.shape) print ("dpdz") print (dpdz.shape) print ("kin") print (kin.shape) "" " # For confirmation of each matrix (shape) np.set_printoptions (linewidth = 2000, edgeitems = 7, precision = 4, floatmode ='maxprec') print ("k") print (k) print ("p") print (p) print ("dpdz") print (dpdz) print ("b") print (b) print ("kin") print (kin) Calculate φ from the formula # 3.13 iso0 = kin @ b # iso0 b = dpdz iso0 = iso0.reshape ([1001,]) print ("iso0") print (iso0.shape)print (iso0) iso = kin @ b2 # iso b = forward difference iso = iso.reshape ([1001,]) print ("iso") print (iso.shape) print (iso) iso2 = kin @ b3 # iso2 b = center difference iso2 = iso2.reshape ([1001,]) print ("iso2") print (iso2.shape) print (iso2) #Swap array g → 0 not included g2 → 0 included for i in enumerate (g): iso0g [i] = iso0 [i + 501] isog [i] = iso [i + 501] iso2g [i] = iso2 [i + 501] for i in g2: iso0g [i] = iso0 [i-501] isog [i] = iso [i-501] iso2g [i] = iso2 [i-501] #iso for confirmation print ("iso0") print (iso0g) print (iso0g.shape) #PLOt of each matrix plt.figure (0) plt.xlabel ("t") plt.ylabel ("p (t, z)") plt.plot (a, p) plt.figure (1) plt.xlabel ("t") plt.ylabel ("b and pp") plt.plot (a3, dpdz) plt.plot (a3, pp) plt.plot (a3, pp2) plt.figure (2) plt.xlabel ("t") plt.ylabel ("φ") plt.plot (a4, iso0g) plt.plot (a4, isog) plt.plot (a4, iso2g) plt.show () I fixed it when I got an error before (make it float type), but after that it does not workSupplementary information (FW/tool version, etc.) Please provide more detailed information here. - Answer # 1 - Answer # 2 Two corrections are required. First of all, in the following, g and g2 need to be ints used in the loop, so adding .0 is not good for float. g = np.arange (0,500,1.0) # 0 not included g2 = np.arange (501,1001,1.0) # 0 included Revised g = np.arange (0,500,1) # 0 not included g2 = np.arange (501,1001,1) # 0 included Next, you don't need enumerate () in the following places. i becomes a tuple. #Swap array g → 0 not included g2 → 0 included for i in enumerate (g): Revised #Swap array g → 0 not included g2 → 0 included for i in g: Related articles - python - i want to swap the first half and the second half of the array and put them in different arrays - it's the second day of programming for the first time tell us about javascript - i want to make a python array one-dimensional, but it doesn't work - python - regarding the problem that the length of a list with a different name is also affected by the code that searches while - python - i want to store an object in an array - how to use python 2d array - is it possible to determine if today is the first weekday of the week starting on monday? : python - about for syntax i want to know the reason why the result is different depending on where the initial value is defined python - python - handling of 1d array data when using the predict function in keras - python: an error occurs that the type is different in the output using dict key values - python 3x - the program only works on the first line - i want to display screens with different window sizes when i press the [python/tkinter] tab - python - typeerror when calculating the array of images called by imageopen () - python - array elements cannot be used as function arguments ?? - i want to get an array of directory names and file names in python and create a csv - python - i want to get only the value of the first stage (after roipool) of faster r-cnn of torchvision - python - i want to store a 3d array in an array using numpy - python - i want to get the very first value of the corresponding condition in the time condition of pandas - i want to convert "array string" to an array in python - python : In the Keras CNN, the input array and the target (teacher) array does not match - python : How to create a second one on a column that will reflect the content of the first? - python : There are numbers in order, but with skipping. How to get intervals? - python : Sort positive and negative values within the variable - Trust intervals for regression model parameters in Python - python : How to find points functions - python : Randomly retrieve data from the list and make a new data list - python : ITERATOR GLOBAL FLAGS MUST BE A LIST OR TUPLE OF STRINGS ERROR - Python Numpy Sort - Scikit Learn Python If you just need to prepare a box to put the calculated result numpy.empty ()Is recommended. You should be aware of what type will be included. You can make one loop by switching the first half and the second half as follows. (Boundary handling is probably wrong in the originally written code) In addition, numpy has a handy function to do this, numpy.roll ()With functions, you don't need to loop or prepare a box in advance.
https://www.tutorialfor.com/questions-323227.htm
CC-MAIN-2021-25
refinedweb
1,539
80.31
It's a kind of training task, because nowadays these methods (I guess) don't work anymore. Win XP and MinGW compiler are used. No special compiler options are involved (just gcc with stating one source file). First of all, saving an address to exit from the program and jumping to the some Hook function: // Our system uses 4 bytes for addresses. typedef unsigned long int DWORD; // To save an address of the exit from the program. DWORD addr_ret; // An entry point. int main() { // To make a direct access to next instructions. DWORD m[1]; // Saving an address of the exit from the program. addr_ret = (DWORD) m[4]; // Replacing the exit from the program with a jump to some Hook function. m[4] = (DWORD) Hook; // Status code of the program's execution. return 0; } // Label's declaration to make a jump. jmp_buf label; void Hook() { printf ("Test\n"); // Trying to restore the stack using direct launch (without stack's preparation) of the function (we'll wee it later). longjmp(label, 1); // Just to make sure that we won't return here after jump's (from above) finish, because we are not getting stuck in the infinite loop. while(1) {} } void FixStack() { // A label to make a jump to here. setjmp(label); // A replacement of the exit from this function with an exit from the whole program. DWORD m[1]; m[2] = addr_ret; } #include <stdio.h> #include <setjmp.h> DWORD m[1]; m[2] = addr_ret; echo %ErrorLevel% Ok, I could make my program work, as it was intended, finally. Now we can launch compiled (I use MinGW in Win XP) program without any errors and with correct return code. Maybe will be helpful for someone: #include <stdio.h> #include <setjmp.h> typedef unsigned long int DWORD; DWORD addr_ret; int FixStack() { DWORD m[1]; m[2] = addr_ret; // This line is very necessary for correct running! return 0; } void Hook() { printf("Test\n"); FixStack(); } int main() { DWORD m[1]; addr_ret = (DWORD) m[4]; m[4] = (DWORD) Hook; }
https://codedump.io/share/87iEwpntPD27/1/how-to-fix-a-hook-in-a-c-program-stack39s-restoration
CC-MAIN-2016-50
refinedweb
334
75
CodePlexProject Hosting for Open Source Software Hello, I'm running version 4.6.3.0 I have followed all the advise I have found on this site regarding failing to load custom rules assemblies into StyleCop. I have also tried downloading other peoples custom rules assemblies and they do not work either. Whats happening is that the custom rule does not appear in the tree view to the left in StyleCop settings GUI. Things I've tried: - .xml file is embedded resource - the xml file has the same name as the class - I target .NET Framework 3.5 (Any cpu) - The custom rule assembly is placed into StyleCops default installation directory (same directory as the original rule assembly) /Jonas Hi Jonas, I could help you if you just shared your custom rules source code anyhow. Best regards, Oleg Shuruev Thank you Oleg, I just figured out what was the problem. The embedded XML resource had the wrong namespace (i think... at least now it works). Best regards Jonas Are you sure you want to delete this post? You will not be able to recover it later. Are you sure you want to delete this thread? You will not be able to recover it later.
http://stylecop.codeplex.com/discussions/279487
CC-MAIN-2017-17
refinedweb
204
75
mapping between URL.urls.path()and/or django.urls.re_path()instances. - Django runs through each URL pattern, in order, and stops at the first one that matches the requested URL. - Once one of the URL patterns matches, Django imports and calls the given view, which is a simple Python function (or a class-based view). The view gets passed the following arguments: - An instance of HttpRequest. -. Example¶ Here’s a sample URLconf: from django.urls import path from . import views urlpatterns = [ path('articles/2003/', views.special_case_2003), path('articles/<int:year>/', views.year_archive), path('articles/<int:year>/<int:month>/', views.month_archive), path('articles/<int:year>/<int:month>/<slug:slug>/', views.article_detail), ] Notes: -. Example requests: - A request to /articles/2005/03/would match the third entry in the list. Django would call the function views.month_archive(request, year=2005, month=3). /building-a-django-site/would match the final pattern. Django would call the function views.article_detail(request, year=2003, month=3, slug="building-a-django-site"). Path converters¶ The following path converters are available by default:. Registering custom path converters¶ to_url(self, value)method, which handles converting the Python type into a string to be used in the URL. For¶. Here’s the example URLconf from earlier, rewritten using regular expressions: from django.urls import path, re_path from . import views urlpatterns = [ path('articles/2003/', views.special_case_2003), re_path(r'^articles/(?P As well as the named group syntax, e.g. (?P<year>[0-9]{4}), you can also use the shorter unnamed group, e.g. ([0-9]{4}). This usage isn’t particularly recommended as it makes it easier to accidentally introduce errors between the intended meaning of a match and the arguments of the view. In either case, using only one style within a given regex is recommended. When both styles are mixed, any unnamed groups are ignored and only named groups are passed to the view function..urls import re_path urlpatterns = [ re_path(r'^blog/(page-(\d+)/)?$', blog_articles), # bad re_path. Specifying defaults for view arguments¶ A convenient trick is to specify default parameters for your views’ arguments. Here’s an example URLconf and view: # URLconf from django.urls import path from . import views urlpatterns = [ path('blog/', views.page), path('blog/page<int. Performance¶ Each regular expression in a urlpatterns is compiled the first time it’s accessed. This makes the system blazingly fast. Syntax of the urlpatterns variable¶ urlpatterns should be a Python list of path() and/or re_path() instances. Error handling¶ When Django can’t find a match for the requested URL, or when an exception is raised, Django invokes.urls import include, path urlpatterns = [ # ... snip ... path('community/', include('aggregator.urls')), path('contact/', include('contact.urls')), # ... snip ... ] Whenever Django encounters include(), it chops off whatever part of the URL matched up to that point and sends the remaining string to the included URLconf for further processing. Another possibility is to include additional URL patterns by using a list of path() instances. For example, consider this URLconf: from django.urls import include, path from apps.main import views as main_views from credit import views as credit_views extra_patterns = [ path('reports/', credit_views.report), path('reports/<int:id>/', credit_views.report), path('charge/', credit_views.charge), ] urlpatterns = [ path('', main_views.homepage), path('help/', include('apps.help.urls')), path(.urls import path from . import views urlpatterns = [ path('<page_slug>-<page_id>/history/', views.history), path('<page_slug>-<page_id>/edit/', views.edit), path('<page_slug>-<page_id>/discuss/', views.discuss), path('<page_slug>-<page_id>/permissions/', views.permissions), ] We can improve this by stating the common path prefix only once and grouping the suffixes that differ: from django.urls import include, path from . import views urlpatterns = [ path('<page_slug>-<page_id>/', include([ path('history/', views.history), path('edit/', views.edit), path('discuss/', views.discuss), path('permissions/', views.permissions), ])), ] Captured parameters¶ An included URLconf receives any captured parameters from parent URLconfs, so the following example is valid: #/<int() and each line in the included URLconf will be passed the extra options. For example, these two URLconf sets are functionally identical: Set one: # main.py from django.urls import include, path urlpatterns = [ path('blog/', include('inner'), {'blog_id': 3}), ] # inner.py from django.urls import path from mysite import views urlpatterns = [ path('archive/', views.archive), path('about/', views.about), ] Set two: # main.py from django.urls import include, path from mysite import views urlpatterns = [ path('blog/', include('inner')), ] # inner.py from django.urls import path urlpatterns = [ path('archive/', views.archive, {'blog_id': 3}), path('about/', views.about, {'blog_id':.urls import path from . import views urlpatterns = [ #... path('articles/<int.http import HttpResponseRedirect from django.urls import reverse naming URL patterns, choose names that are unlikely to clash with other applications’ choice of names. If you call your URL pattern comment and another application does the same thing, the URL that reverse() finds depends on whichever pattern is last in your project’s urlpatterns list. Putting a prefix on your URL names, perhaps derived from the application name (such as myapp-comment instead of comment), decreases the chance of collision. You can deliberately choose the same URL name as another application if you want to override a view. For example, a common use case is to override the LoginView. Parts of Django and most third-party apps assume that this view has a URL pattern with the name login. If you have a custom login view and give its URL the name reverse() will find your custom view as long as it’s in urlpatterns after django.contrib.auth.urls is included (if that’s included at all). You may also use the same name for multiple URL patterns if they differ in their arguments. In addition to the URL name, reverse() matches the number of arguments and the names of the keyword arguments.>/',.urls import path from . import views app_name = 'polls' urlpatterns = [ path('', views.IndexView.as_view(), name='index'), path('<int:pk>/', views.DetailView.as_view(), name='detail'), ... ] from django.urls import include, path urlpatterns = [ path('polls/', include('polls.urls')), ] The URLs defined in polls.urls will have an application namespace polls. Secondly, you can include an object that contains embedded namespace data. If you include() a list of path() or re_path() instances, the URLs contained in that object will be added to the global namespace. However, you can also include() a 2-tuple containing: (<list of path()/re_path() instances>, <application namespace>) For example: from django.urls import include, path from . import views polls_patterns = ([ path('', views.IndexView.as_view(), name='index'), path('<int:pk>/', views.DetailView.as_view(), name='detail'), ], 'polls') urlpatterns = [ path(.
https://docs.djangoproject.com/en/2.0/topics/http/urls/
CC-MAIN-2020-45
refinedweb
1,076
53.78
Attached is an updated version of the LSPP requirements documentation following the recent concall. If anyone has any updates, please send them to me and I can update the document. - James -- James Morris <jmorris redhat com> ----------------------------------------------------------------------------- LSPP Requirements - Version 001 Red Hat Confidential ----------------------------------------------------------------------------- (Note: 'updated' here means 'updated to support MLS'). 1) Standard/reference MLS policy (for Fedora initially). - FM: Object labeling a challenge: for applications which need read and write access to system files. - TCS: can supply developmental policy to get the system up and running. - Server based policy. - Iterative development of ST and Policy. AAs: - TCS working policy - LSPP policy - Fedora policy 2) Updated SELinux system tools (e.g. runcon). - Probably quite a lot of work. AAs: - Detailed list of requirements, map to certification, e.g. which ones are security enforcing. 3) Updated libraries (label handling APIs, LEF glue, etc). - Dummy translator. [TCS working on one]. - PAM - NSA: Adjudication interface out of libsepol? TCS: not required but nice to have. AAs: - Need detailed list. 4) Updated OS utilities (e.g. cron). - TCS: multi-level cron, nice to have. - Can send it out. - Depends on their polyinstantiation code, but should be simple to change to namespaces. - TCS: xinetd? AAs: - Need detailed list. 5) Updated applications (e.g. MTA?). - FDP_ETC.2 would apply to MTA. - TCS: keep it simple, single level, use some other method to move it to other levels. - SSH ? TCS: can run multiple instances (also useful as general solution). - IBM: Postfix is in CAPP. - Needed for some systems management. - TCS: how to deliver messages? - IBM: open question. AAs: - Analyze & resolve MTA issues. 6) Directory polyinstantiation (via namespaces?). - TCS: may not be required in new LSPP but life difficult without it. - Definitely need & have prototype with upstream buy-in for unshare(2). - RH: will be doing anyway. AAs: - Who owns this? 7) Labeled networking (via IPsec). - TCS: need to clarify what we claim - desired trusted networking over the wire - don't support routing - support trusted apps across the network. - a lot of the machines don't talk to each other anyway. - IBM working on IPsec based solution. - A lot depends on how we define the environment. AAs: - Upstream & integrate IPsec stuff. - Develop certification strategy. 8) Polyintstantiated ports (via redirection). Still not entirely sure how important this is. - Not LSPP requirement. 9) Improved SAK support. - Not LSPP requirement. - TSOL has this. AAs: - TCS/NSA elaborate on something simple + useful. 10) Labeled printing. - Explicitly required by LSPP. - Running headers and footers. - Postscript can be problematic (forbid ps, modify ps driver, modify ps etc.) - Needs work. - RH not experts. AAs: - Who owns this? 11) Device allocation support. - TCS: have a device allocation command framework, can be sent out. - Allocation command changes DAC & MAC label on device. - Audit shows allocation & deallocation, e.g. for a floppy. - Can't happen automatically, e.g. no automount of CD, must be manual. - Need to define what is considered removable devices. - PAM? AAs: - TCS can elaborate requirements/design. - Trace LSPP requirements. 12) Network filesystem support: not needed for LSPP but SMB probably useful, less complicated than NFS. - Not needed for LSPP. - TCS may use ssh or scp for file transfer. 13) More user customizable object labeling support, e.g. for network interfaces. - TCS: may not be needed? - NSA: may want to be able to customize network labels without tweaking policy. Not sure if work is happening. AAs: - Need to determine requirements for LSPP/procurement. 14) Updated audit support. AAs: - Needs detailed elaboration. 15) Better revocation (e.g. for mmap'd files). - FM: good idea but difficult and not needed for certification, where we can probably assume static policy and relabeling only by trusted applications. - IBM: immediate termination of user's session when account is revoked. (FMT_REV.1) - May be needed for time-based roles (e.g. role valid for 90 days, SE rules modified & policy reload). - Not an LSPP issue. AAs: - Determine what is needed for procurement. 16) Extendsion of RBAC support has been discussed. - IBM: wants competitive certifcation against RBACPP. - TCS: differentiate between sysadmin and secadmin. - IBM: self-test utility required, amtu. AAs: - What is the scope of this? Do we aim for LSPP first and do this as a further phase, or do both at the same time? 17) TCS: May need explicit labeling of pseudo filesystems. AAs: - Trace to LSPP requirements. 18) IBM: SELinux and MLS testing (test case development) at the EAL4 level - Quite a lot of work - Joy Latten has been working on functional coverage of SELinux. 19) IBM: Usability of the final solution - More info needed. 20) IBM Evidence creation (HLD, LLD, FSP, Correspondence, VA, admin & user guides, test plan) - A lot of documentation needed. - must include new selinux commands etc. -----------------------------------------------------------------------------
https://www.redhat.com/archives/redhat-lspp/2005-May/msg00055.html
CC-MAIN-2018-09
refinedweb
781
63.56
Measuring Application Performance & Business Metrics using StatsD Note:This post was written by Michael Forhan, one of our friends at AppFirst. StatsD is a wonderful tool created by the developers at Etsy for keeping track of application metrics with minimal overhead. To stay small, StatsD was written to deliver metrics over UDP to a Graphite server for processing and visualization. At AppFirst we realized that the power of StatsD was enormous: what could be better than seeing application performance, resource bottlenecks and real-time business metrics. The AppFirst collector already picks up system, process, log and nagios-plugin data: extending application awareness with StatsD was only natural. The developers at AppFirst took StatsD one step further. While we work with Etsy's deployment of StatsD, we streamlined application metrics by: - Doing away with maintaining your own graphite server - Reducing network overhead - Ensuring delivery of content using TCP - Providing assistance for using StatsD in your application with a support site, blogs and videos. If you are just learning about StatsD, this post should help you understand the basics of StatsD, its ease of use and the power you can add to your application so you can create high performance applications. StatsD implements only a few types of metrics: counters, gauges and timers. These metric types should cover the basis of anything you want to measure in your application. Try AppFirst free on Engine Yard and get full-stack visibility into your systems, apps and metrics. Namespaces Namespaces, also known as buckets, are where your metrics are collected, aggregated and displayed. Namespaces are dot delineated, allowing you to build a natural grouping for searches, correlations and dashboard display. Namespaces are completely user defined but I recommend the format <Application>.<Server>.<Function> as a starter. The reason I utilize this format is with Cloud instances your <Server> field can change quickly and your application name is an easily recognizable way to group metrics. For example, EcommerceApp.AWS-12542383.Cart.Add. The other nice part about this format is if you decide to build your own tools using our REST API, you can group your Cart metrics together so you can have total transactions, ratios of add to buy or add to delete. Counters Designed to measure the number of events in a minute, a counter can be incremented or decremented in code. Counters are great for measuring items like number of incoming connections, number of resource requests, or even the frequency of a method or function call. The call to push a StatsD metric is easy. AppFirst has developed collectors for quite a few languages, but in this example I'll use Ruby: require ‘afstatsd' # instantiate a metric connector statsd = Statsd.new # define our namespace statsd.namespace = “myApplication.#{Socket.gethostname}” # Your Code statsd.increment "ProductRequest.#{product.sub(/\s+/,"")}" This example generates a metric named MyApplication.<SERVER>.ProductRequest.<ProductName> and increments the count by one each time it is executed. In a shopping cart method, you could see product velocity (your product managers and marketing team will love you for this) . Using StatsD you can increment or decrement counters - giving you flexibility for displaying information on a per minute basis. Gauges Counters are great for minute by minute instrumentation, but if you are looking to monitor values that shouldn't change often, a gauge is more appropriate. Gauges are designed for metrics of volume: memory pools, process/thread counts or how many database connections are open. The idea is that on a minute by minute basis, the values either don't change much or your StatsD metric isn't called frequently. If you are using a polled data script, or even a cron job to monitor the status of a service, then a gauge is a great choice. For our example, lets talk about an iptables script. This script runs every so often to check your log files for failed login attempts. If the script locates a failed login with multiple attempts it creates a temporary iptables rule and writes the information to a file. The temporary rules have an exponential blocking period: 5 minutes for the first event, 25 minutes for the second event, 2 hours and 5 minutes for the third event, and so on upto 1 month. The information you want to keep track of is the number of iptables rules in effect. With normal operations this number should be consistent. With StatsD you could add the following line to your bash script: # Code from Etsy statsd-client.sh example @ github # Setup UDP Socket exec 3<> /dev/udp/${STATSD-HOST:-127.0.0.1}/${STATSD_PORT:-8125} # Send data over Socket printf “iptables.totalrules:$numberOfRules|g” >&3 # Close Socket exec 3< &- exec 3>&- With this script addition, if you have a variable numberOfRules, you can push that value to StatsD and have it collected on the AppFirst big data store for aggregation and correlation. Timers The last item I'll discuss is timers. Timers are easy enough to understand: they report a time value so you can adequately measure where performance bottlenecks may be in your application. Utilizing timers to find bottlenecks, you can determine where to spend your time to refactor code. The easiest way to measure this metric is to call your function directly to get the timing. A ruby example: require ‘afstatsd' # instantiate a metric connector statsd = Statsd.new # define our namespace statsd.namespace = “myApplication.#{Socket.gethostname}” # Your Code statsd.time ‘Cart.ProcessPayment' { cart_processing } This script will execute your cart_processing method and store the execution time in a StatsD metric named MyApplication.<SERVER>.Cart.ProcessPayment. These are simple examples that go to show that information collected by StatsD is nearly limitless. With StatsD libraries available for a large number of languages, you can easily utilize StatsD metrics in your application and then visualize that information on the AppFirst dashboard. Try AppFirst free on Engine Yard and get full-stack visibility into your systems, apps and metrics. Share your thoughts with @engineyard on Twitter
https://blog.engineyard.com/2013/measuring-application-performance-business-metrics-using-statsd
CC-MAIN-2016-07
refinedweb
994
53.81
Vuex is a double-edged sword. If used properly, it can make your life a lot easier when working with Vue. It can also make a mess of your codebase if you’re not careful. There are four main concepts you shouldunderstand before you use Vuex: state, getters, mutations, and actions. A simple Vuex state manipulates data within these concepts in the store. Mapping in Vuex provides a nice, clean way of retrieving data from them. In this tutorial, we’ll demonstrate how to map data from the Vuex store. If you’re a Vue developer who’s familiar with the basics of Vuex, these best practices will help you write cleaner and more maintainable code. This tutorial is targeted an average Vue.js developer who knows the basics of Vue.js and Vuex. who is on a quest to write cleaner and maintainable code using best practices. What is mapping in Vuex? Mapping in Vuex enables you to bind any of the state’s properties (getters, mutations, actions, state) to a computed property in a component and use data directly from the state. Below is an example of a simple Vuex store with test data in state. import Vue from 'vue' import Vuex from 'vuex' Vue.use(Vuex) const store = new Vuex.Store({ state: { data: "test data" } } }) If you want to access the value of data from the State, you can do the following in your Vue.js component. computed: { getData(){ return this.$store.state.data } } The above code works but quickly gets ugly as data in the state starts to grow. For example: import Vue from 'vue' import Vuex from 'vuex' Vue.use(Vuex) const store = new Vuex.Store({ state: { user: { id:1, age:23, role:user data:{ name:"user name", address:"user address" } }, services: {}, medical_requests: {}, appointments: {}, } } }) To get the username from the user object in state: computed: { getUserName(){ return this.$store.state.user.data.name } } This will get the job done, but there is a better way. Mapping the state To map the state to a computed property in the Vue.js component, run the following. import { mapGetters } from 'vuex'; export default{ computed: { ...mapState([ 'user', ]) } } You now have access to the entire user object within the component. You can do even more, such as add objects from the state to the mapState method. import { mapGetters } from 'vuex'; export default{ computed: { ...mapState([ 'user', 'services' ]) } } As you can see, this is a lot cleaner. You can easily access the username with: {{user.data.name}} The same goes for the services object and many other values mapped. Did you notice how we passed in an array to the mapState()? If you need to give the value a different name, you can pass in an object instead. import { mapGetters } from 'vuex'; export default{ computed: { ...mapState({ userDetails:'user', userServices:'services' }) } } Now you can reference the user by simply calling userDetails. When to map the entire state As a rule of thumb, you should map only when you have lots of data in the state and need them all in the component. In the example above, it would not make much sense to map the entire user object if all we need from it is just one value — the username, for example. When you map, the entire object is loaded into memory. You don’t want to keep loading data to memory that you don’t because it would be redundant and lead to performance implications in the long run. Mapping getters I’ll assume you know the basics of getters in Vuex and proceed to mapping getters. Check out the Vuex documentation for more information. Mapping getters is similar in syntax to the mapState function. import { mapGetters } from 'vuex' export default { computed: { ...mapGetters([ 'firstCount', 'anotherGetter', ]) } } Similar to the mapped state, you can pass in an object to the mapGetters function if you intend to use a different name. import { mapGetters } from 'vuex' export default { computed: { ...mapGetters([ first:'firstCount', another:'anotherGetter', ]) } } Mapping mutations Unlike mapState and mapGetters, which are mapped in the computed property of Vue to make data on state reactive with data in the component, Mutations are mapped in the method. You can commit your mutation using the following syntax in Vue when you map your mutations. this.$store.commit('mutationName`) For example: import { mapMutations } from 'vuex' export default { methods: { ...mapMutations([ 'search', // map `this.increment()` to `this.$store.commit('search')` // `mapMutations` also supports payloads: 'searchBy' // map `this.incrementBy(amount)` to `this.$store.commit('searchBy', amount)` ]), ...mapMutations({ find: 'search' // map `this.add()` to `this.$store.commit('search')` }) } } Mapping actions Mapping actions is a lot like mapping mutations because it is also done in the method. Using a mapper binds the this.$store.dispatch('actionName') to the name in the array of the mapper or the key of the object.')` }) } } Conclusion By now, you should: - Have a firm understanding of how mapping in Vuex works and why you should use it - Be able to map all the components in the Vuex store (state, getters, mutations, actions) - Know when to map the store and when not to These best practices will help you immensely if you decide to use Vuex in your next project. “A complete guide to mapping in Vuex” Good guide :)) Thank you so much, this is just the beginning of learning VUE of the rest, but this entry is useful.
http://blog.logrocket.com/a-complete-guide-to-mapping-in-vuex/
CC-MAIN-2020-40
refinedweb
889
66.13
Abstract: In the current upsurge in Web2.0, AJAX are attracting worldwide attention, becoming the most talked-about technical terms. AJAX technology has improved very much WEB application user experience, jQuery simplifies the creation and use of the AJAX development. This article first AJAX technology with the traditional Web development were compared to explain the advantages of AJAX technology asynchronous interaction, followed by an example of jQuery in specific applications embody the characteristics of, and finally, concluding remarks and discusses the prospects. Keywords: jQuery; AJAX; asynchronous interaction; XMLHttpRequest CLC number: TP312 1. Introduction At present, the network is in a Web2.0 [1] age, appeared at this time a large number of related technologies, Ajax is one of them representative. AJAX is attracting worldwide attention, becoming the most talked-about technical terms, a large number of scholars began to study or are studying AJAX technology. If developers use a simple Javascript Ajax approach to development, it would make very difficult to debug the application, thus reducing the efficiency of production, and a lot of Javascript code that makes the process complex, the relative increase in processing time, affect the user experience. However, Ajax technology has hastened the development of a large number of technology based on its own JavaScript libraries. The use of JavaScript libraries help to minimize use of JavaScript and Ajax to bring many of the frequently asked questions. jQuery [3] is one of the very best of a JavaScript library simple and fast. Many scholars have termed the Ajax framework, but I believe that it has not reached the level of the framework. jQuery encapsulates the B / S development process of a large number of technical details, so developers can focus more on business logic development action and user interface development process. One example of this realization, through case analysis, embodied in the specific application characteristics of jQuery. 2. AJAX Introduction What is 2.1 AJAX AJAX (Asynchronous JavaScript And XML) translated into Chinese, it means asynchronous JavaScript and XML technology. In fact, AJAX is not a single technology or a computer language, it is an integrated multi-molding technology, which includes the following five aspects: (1) the use of XHTML and CSS standards-based presentation technology. (2) the use of DOM for dynamic display and interaction. (3) the use of XML and XSLT for data exchange and processing. (4) Asynchronous data retrieval using XMLHttpRequest. (5) the use of JavaScript to the above techniques together. I believe that Ajax technology is the core JavaScript object XMLHttpRequest [3], the object is an asynchronous technology to send the request, AJAX technology using asynchronous HTTP request, in the Browser and Web server to transfer data between, so that only updates part of the Browser page content without re-loading the new page. Asynchronous interaction, and not transferred to the new page view is Ajax greatly improve the user experience areas. WEB 2.2 AJAX effects and the development of the traditional advantages of a traditional Web application allows users to fill out the form, when the form is submitted when a request is sent to the Web server. Server to receive and process the form came, and then returns a new page. This approach to waste a lot of bandwidth, because the majority of around two pages in HTML code are the same. Since each application needs to interact with the request to the server, application response time depends on the server's response time. This has resulted in response to the user interface is much slower than native applications. Ajax application is different, Ajax applications can only send to the server and retrieve the necessary data, and handle the client using JavaScript from the server response. Because the server and browser exchange of data between the significant reduction, the results we can see a more responsive application. Meanwhile, many of the processing of a request can be done on the client machine, so Web servers also reduces the processing time. Figure 2-1 interact traditional WEB Figure 2-2AJAX technology interactively comparing Figure 2-1 and Figure 2-2 shows, Ajax engine on behalf of the user to communicate with the server and update the user interface seen. The interaction is conducted asynchronously in the background and does not interrupt the user the current operation. In addition, because only with the server-side Ajax level data exchange, and some page display, calibration data and other functions will be given to Ajax engine for themselves. Ajax will be the work of some server-side transfer to the client, the client full use of idle capacity so as to reduce the burden on the server, to speed up the browser's response time, reduce the user waiting time [4]. 3. JQuery Introduction What is 3.1 jQuery jQuery John Resig created by the early 2006, for any programmer to use JavaScript code, it is a very useful JavaScript libraries. Whether new to JavaScript language and would like to get a document object model to resolve (Document Object Model, DOM) scripting and Ajax development issues in the library complex, or as a DOM scripting and Ajax are tired of boring repetition of development work Senior JavaScript expert, jQuery will be the first choice. It can operate in a simple Web page documents, handle events, run the animation or add Ajax interactions [5]. 3.2 jQuery can do jQuery's purpose is to WRITE LESS, DO MORE, write less code, do more. WEB jQuery library to provide a common scripting abstraction layer, making it suitable for almost all situations scripting, speaking only in respect of its core features, jQuery can satisfy my needs: (1) get the page elements. (2) modify the appearance of the page. (3) change the page content. (4) respond to user's page operation. (5) for the page to add dynamic effects. (7) No need to refresh the data can be obtained from the server. (8) JavaScript simplify common tasks. These strategies not only ensure that the jQuery Package miniaturization - after compression is about 20KB, the same time, we use this library for custom code to ensure us to provide the technical support [6]. Here is a simple example, it illustrates the impact of jQuery code. To perform some really simple and common tasks, for example, an area for the page attached to each link a click (click) event, you can use pure JavaScript code and DOM scripting to implement the code as follows: var external_links = document.getElementById ('external_links'); var links = external_links.getElementsByTagName ('a'); for (var i = 0; i <links.length; i + +) ( var link = links.item (i); link.onclick = function () ( return confirm ('You are going to visit:' + this.href); ); ) The following code shows the use of jQuery to achieve the same functionality. $ ('# External_links a'). Click (function () ( return confirm ('You are going to visit:' + this.href); )); Use jQuery, you can grasp the crux of the matter, only to realize we want the function code, while eliminating the cumbersome process. Cycle of the elements do not, click () function will complete these operations. Similarly, the script does not need to call more than DOM. We only need to use a short string to define the required elements can be.. # External_links used to retrieve the id for the external_links elements. JQuery after a space that needs to retrieve all the elements external_links <a> elements. $ () Function returns a CSS selector that contains all the elements of the jQuery object matching. jQuery object similar to the array, but it comes with a large number of special jQuery functions. This example is just a brief overview of jQuery's operations, more detailed operation on the jQuery functions jQuery can see the API, the jQuery jQuery official website and various communities can be downloaded by jQuery's API. jQuery's applicability on the one hand due to its design, on the other hand benefit from the emergence of open source projects around the active role of the community. jQuery engineers for engineers in addition to providing a flexible and stable outside the system, jQuery's final product is free for everybody. 4. JQuery implementation of the AJAX During the development process using the jQuery 4.1 jQuery's AJAX-based development, we must be prepared to JavaScript development environment and jquery.js the library. The examples use the development environment is IntelliJ [7] IDEA 7.0.3, use IntelliJ because Intellij IDEA is a comprehensive Java programming environment, it can be said to support JavaScript, the best of all IDEA, its strong Accessibility write JavaScript code on the ability of weak or very fear of JavaScript code, but who is the best choice. Also, jQuery v.1.3.2 release the latest version is available from jQuery's official website () to download the js file to get a jquery.js we develop in this paper is to use the js file . During development so long as this jquery.js in a public place, in the corresponding page into the can. This example is a simple check whether there are examples of user name, in this case the jQuery application includes four parts: Html documents, for the acts of the page to add JavaScript file, another is the web.xml configuration file and use the Java language simple scenes, the background operation is the most basic background operation and does not include a complex approach, we focus our effort on the implementation of jQuery, but we can back-needed expansion of operations. Code section contains information about comments, these comments very good explanation of the information is almost end of each code operation role. First, open IntelliJ, create a new Project (IntelliJ in the Project equal to Eclips in WorkSpace, the work space), followed by a new Module, is a project, then in the web directory under the Module to create a new Html document, name jquery.html, the code is as follows: <! DOCTYPE HTML PUBLIC "- / / W3C / / DTD HTML 4.01 Transitional / / EN" ""> <html> <head> <meta http- <title> verify the existence of the user name </ title> <! - Here should pay attention to the introduction of jQuery library file <script> label must be placed into the custom script file before <script> tag -> <script type="text/javascript" src="jslib/jquery.js"> </ script> <script type="text/javascript" src="jslib/verify.js"> </ script> </ Head> <body> Check the jQuery instance of the user name, please enter your user name: <br/> <! - Ajax mode data does not need to use form to submit, so do not write the form tag -> <! - Ajax way without name attribute, need a id attribute -> <input type="text" /> <input type="button" value=" Validation "> <! - Div returned for storage of information server, start the air, div is the CSS in the block-level element -> <! - Id attribute definition is to find the node for operation -> <! - This div is to receive and display the data returned by the server side -> <div> </ div> </ Body> </ Html> Then in the Module directory under the web folder create a new jslib this jslib js folder for storing files, create a new directory this jslib jquery.js, open the empty js file to download in advance good jquery. js to open, copy the code inside the jslib directory jquery.js file, this is mainly due to IntelliJ does not support direct copy the following files in the project. And then create a new directory in the jslib verify.js, verify.js code as follows: function verify () ( var url = "JqueryServer? name =" + $ ("# userName"). val (); url = converURL (url); $. Get (url, null, function (data) ( $ ("# Result"). Html (data); )); ) / /; ) Then in the src directory under the Module to create a new JqueryServer.java file, this file is a simple background operation. Code is as follows:LEncoder; import java.net.URLDecoder; public class JqueryServer extends HttpServlet ( protected void doGet (HttpServletRequest httpServletRequest, HttpServletResponse httpServletResponse) throws ServletException, IOException ( try ( httpServletResponse.setContentType ("text / html; charset = utf-8"); PrintWriter out = httpServletResponse.getWriter (); / / Here to join the operation code Session is mainly to explain how to resolve browser cache problems / / Use of time stamps in verify.js, fool the browser, does not read the cache. Integer inte = (Integer) httpServletRequest.getSession (). GetAttribute ("total"); int temp = 0; if (inte == null) ( temp = 1; httpServletRequest.getSession (). setAttribute ("total", temp); ) Else ( temp = inte.intValue () +1; httpServletRequest.getSession (). setAttribute ("total", temp); ) / / 1. Get the page parameter information are sent by client String old = httpServletRequest.getParameter ("name"); / / 2. Check whether the parameters of transmission over the issue if (old == null | | old.length () == 0) ( out.println ("User name can not be null!"); ) Else ( / / 3. To verify operation / / Here can be extended depending on the circumstances, such as the preparation of complex business layer, from the database / / read data. String name = old; if (name.equals ("qq")) ( / / 4. And the difference between the traditional way: the user data of interest (data) to return to, but / / not re-turned to a new view layer. But as with the traditional formulation, but not the same in real terms out.println ("User name" + name + "already exists, another user name" + temp); ) Else ( out.println ("User Name [" + name + "] does not exist, you can register!" + temp); ) ) ) Catch (Exception e) ( e.printStackTrace (); ) ) protected void doPost (HttpServletRequest httpServletRequest, HttpServletResponse httpServletResponse) throws ServletException, IOException ( doGet (httpServletRequest, httpServletResponse); ) ) Just after the preparation of the necessary configuration Servlet, in the WEB-INF directory of the web.xml configuration file to configure, the code is as follows: <? Xml version = "1.0" encoding = "UTF-8"?> <Web-app xmlns = "" xmlns: xsi = "" xsi: <servlet> <servlet-name> JqueryServer </ servlet-name> <servlet-class> JqueryServer </ servlet-class> </ Servlet> <servlet-mapping> <servlet-name> JqueryServer </ servlet-name> <url-pattern> / JqueryServer </ url-pattern> </ Servlet-mapping> </ Web-app> These are complete, you can put this simple application deployment on the server, and in this instance the server is Tomcat6.0, in the browser address bar can. Chinese garbled jQuery 4.2 solution and browser cache problem over the instance of a defect in Chinese garbage problem, then introduce the two programs to solve the Chinese garbled. First: the data is issued in verify.js do a encodeURI operation, code changes part: var url = "AJAXServer? name =" + encodeURI ($("# userName "). val ()); Also on the server side JqueryServer.java also to make changes, that is, after obtaining the data, adding such a code String name = new String (old.getBytes ("iso8859-1"), "UTF-8"); then the following of a code: String name = old; delete. Second: the data sent in verify.js done in two encodeURI operation, code changes part: var url = "AJAXServer? name =" + encodeURI (encodeURI ($("# userName "). val ())); Also on the server side JqueryServer.java also to make changes, that is, after obtaining the data, adding such a code String name = URLDecoder.decode (old, "UTF-8"); then following a code: String name = old ; delete. In both methods the author recommends using the second method. In addition, JqueryServer.java this file to do the deal by adding the corresponding session aims to introduce the browser cache problem, the code is as follows: Integer inte = (Integer) httpServletRequest.getSession (). GetAttribute ("total"); int temp = 0; if (inte == null) ( temp = 1; httpServletRequest.getSession (). setAttribute ("total", temp); ) Else ( temp = inte.intValue () +1; httpServletRequest.getSession (). setAttribute ("total", temp); ) In verify.js document, use the time stamp fool the browser cache does not read the main approach is to use the timestamp to change URL addresses, the code is as follows: / /; ) Call $. Get () before, simply use converURL () function to convert the URL with the timestamp can be. Code: url = converURL (url); In this way, we solved the jQuery Chinese garbled and browser cache problem. While this is a simple use case, but one solution jQuery Chinese garbled and browser cache problem in the practical application of skills among the good. 5. Conclusions In this paper, realization of the jQuery AJAX applications studied, briefly introduced the AJAX, the AJAX framework for analysis and research technology and its best to resolve the substantive issues. Can be concluded, Ajax framework through asynchronous data request, the fine effect of UI is changing people's views of Web applications [8]. The application of AJAX technology and traditional WEB a comparative study shows that the AJAX applications with asynchronous interaction and increase the user experience and so on. JQuery AJAX applications for development in the role of the analysis, and to achieve a complete example of this instance of a detailed analysis, by this example, use time stamps to resolve browser cache problems and offered two ways to solve jQuery The Chinese disorderly, these two problems the solution of actual application development has a good guide. Ajax technology in J2EE Web applications in research and development of J2EE will become the mainstream, other, jQuery has been recognized as ASP.NET MVC and Visul Studio in future versions of the formal components of the mobile phone platform Nokia Web Run-Time is also will be included, such a development environment, jQuery will certainly have an even broader space for development. References [1] Tim O'Reilly. Web2.0., 2005-11-22. [2] ,2009-11-19. [3] Anne van Kesteren. W3C Working Draft. ,2009-11-19. [4] JESSE JAMES GARRETT. Ajax: A new approach to Web applications [EB / OL].. [5] Jesse Skinner. Simplify Ajax development using jQuery.. [6] Jonathan Chaffer, Karl Swedbeger. jQuery Essentials [M]. Willington Lee Li Wei. Beijing: People's Education Press, 2008. [7]. [8] Dong-Hua Zhang. Ajax framework in J2EE architecture and application of [D]. Qingdao: China Ocean University, 2008. The Research of AJAX application based on jQuery Abstract In the current boom of the Web2.0, AJAX is attracting the world's eyes, becoming the most talked-about technical terms by people of the word. AJAX has improved experience of the users in WEB application. The creation and use of jQuery has simplified the development of AJAX greatly. In this paper, first, the author have a compare and analysis between the AJAX and traditional development of the Web, and expound advantage of the asynchronous, second, show the characteristics of jQuery in the specific applications through an example , at last, have a summary on the paper, then look into the future about AJAX and jQuery. Keywords: jQuery, AJAX, Asynchronous, XMLHttpRequest
http://www.codeweblog.com/jquery-s-ajax-based-applications/
CC-MAIN-2017-09
refinedweb
2,995
55.64
We’re on the home stretch for this mini-project! It’s finally time to modify the Game Controller so that input is dependent on the players joined in our match, and to make sure that moves made by one player are seen by the other. Game Controller A finite state machine can be a very nice way to chop the logic of a game into bite-sized bits. Even for a game as simple as Tic Tac Toe, we can easily identify a variety of important states: - Loading – while waiting for both players to join the match - Active – while it is the local player’s turn - Passive – while it is the remote player’s turn - End – when the game has ended due to a victory or stalemate Open the GameController script for editing. Modify the class definition so that it inherits from StateMachine instead of MonoBehaviour: public class GameController : StateMachine I frequently cache important references in my game, including the game elements, UI, other controllers, etc. for convenience. This way I am not required to either find them or have singletons all over. To finish our little project, we need to provide links to our three UI labels (note that this requires you to add the UnityEngine.UI namespace), and to the Match Controller: public Text localPlayerLabel; public Text remotePlayerLabel; public Text gameStateLabel; public MatchController matchController; Go ahead and connect the references to these in the editor. While we are there, we can go ahead and manually connect the board reference as well. Remove the implementation of the Start method. Then add this Awake method in its place. The CheckState method will cause the GameController to enter its Loading phase, but we will get to that in a bit. void Awake () { CheckState(); } We’re also going listen to all the rest of the notifications we added. There were a few we didn’t use from the Game class, and now we can also observe the PlayerController and MatchController notifications as well. You can remove the Board click notification though, because I will want each state to be able to respond differently. Here are the full OnEnable and OnDisable methods: void OnEnable () { this.AddObserver(OnMatchReady, MatchController.MatchReady); this.AddObserver(OnDidBeginGame, Game.DidBeginGameNotification); this.AddObserver(OnDidMarkSquare, Game.DidMarkSquareNotification); this.AddObserver(OnDidChangeControl, Game.DidChangeControlNotification); this.AddObserver(OnDidEndGame, Game.DidEndGameNotification); this.AddObserver(OnCoinToss, PlayerController.CoinToss); this.AddObserver(OnRequestMarkSquare, PlayerController.RequestMarkSquare); } void OnDisable () { this.RemoveObserver(OnMatchReady, MatchController.MatchReady); this.RemoveObserver(OnDidBeginGame, Game.DidBeginGameNotification); this.RemoveObserver(OnDidMarkSquare, Game.DidMarkSquareNotification); this.RemoveObserver(OnDidChangeControl, Game.DidChangeControlNotification); this.RemoveObserver(OnDidEndGame, Game.DidEndGameNotification); this.RemoveObserver(OnCoinToss, PlayerController.CoinToss); this.RemoveObserver(OnRequestMarkSquare, PlayerController.RequestMarkSquare); } Let’s take a look at the handler methods for each: void OnMatchReady (object sender, object args) { if (matchController.clientPlayer.isLocalPlayer) matchController.clientPlayer.CmdCoinToss(); } When a player joins a match, it registers with the host (server) before finalizing itself on the client which created it. So to make sure that the match was fully configured on both instances of the game before proceeding, I waited for the “Ready” notification where the “client” player was also the “local” player. void OnCoinToss (object sender, object args) { bool coinToss = (bool)args; matchController.hostPlayer.mark = coinToss ? TicTacToe.Mark.X : TicTacToe.Mark.O; matchController.clientPlayer.mark = coinToss ? TicTacToe.Mark.O : TicTacToe.Mark.X; game.Reset(); } The coin flip is used to decide who goes first. In my implementation the ‘X’ mark is also tied to the player which goes first, much like the white pieces always move first in Chess. It is important to make sure that both instances are synched in their knowledge of which player should be in control. It was convenient to think of the players in terms of “host” and “client” here because those identifiers remain consistent regardless of the game instance they run on, but the “remote” and “local” identifiers are subject to each instance’s perspective. void OnRequestMarkSquare (object sender, object args) { game.Place((int)args); } Here we have listened to a notification which should be posted on each game instance thanks to an ClientRpc method call. This makes sure that the move is attempted on each game. void OnDidBeginGame (object sender, object args) { board.Clear(); CheckState (); } We had this handler already, but I added another call to the CheckState method. This will cause the game to either go into an Active or Passive state, depending on whether it is the local player’s turn. void OnDidChangeControl (object sender, object args) { CheckState (); } void OnDidEndGame (object sender, object args) { CheckState (); } We will also call the CheckState method at other important times such as each time that the game changes control to a different player, or when a game ends. void CheckState () { if (!matchController.IsReady) ChangeState<LoadGameState>(); else if (game.control == TicTacToe.Mark.None) ChangeState<EndGameState>(); else if (game.control == matchController.localPlayer.mark) ChangeState<ActiveGameState>(); else ChangeState<PassiveGameState>(); } Finally we have the implementation which determines what state we are in. The first and most important check is to make sure that we have both players in the match – until that point we should stay in the “Load” state. Next we want to see if the game is over, and if so, trigger the “EndGame” state. Otherwise, when a game is still in play we will toggle between the “Active” and “Passive” states depending on whether or not it is the local player’s turn. Base Game State Create a new sub-folder in the Scripts/Controller named Game Controller States. Then, create a new script in that folder called BaseGameState. Replace the template code with the following: using UnityEngine; using UnityEngine.UI; using System.Collections; using System.Collections.Generic; using TicTacToe; public abstract class BaseGameState : State { public GameController owner; public Board Board { get { return owner.board; }} public Text LocalPlayerLabel { get { return owner.localPlayerLabel; }} public Text RemotePlayerLabel { get { return owner.remotePlayerLabel; }} public Text GameStateLabel { get { return owner.gameStateLabel; }} public Game Game { get { return owner.game; }} public PlayerController LocalPlayer { get { return owner.matchController.localPlayer; }} public PlayerController RemotePlayer { get { return owner.matchController.remotePlayer; }} protected virtual void Awake () { owner = GetComponent<GameController>(); } protected void RefreshPlayerLabels () { LocalPlayerLabel.text = string.Format("You: {0}\nWins: {1}", LocalPlayer.mark, LocalPlayer.score); RemotePlayerLabel.text = string.Format("Opponent: {0}\nWins: {1}", RemotePlayer.mark, RemotePlayer.score); } } It is a simple abstract base class from which we will derive the four states I mentioned earlier. I want each state to cache a reference to its owner, so I handle that in the Awake method, and I also want to wrap a bunch of the properties found on the owner so I don’t have to fully specify the references. Those convenience properties weren’t necessary, but I feel like they make the other code more readable. I also added a method to update the player labels since several of the subclassed states will use the same formatting. Load Game State Create another script named LoadGameState in the same folder as the last one. Replace the default code with the following: using UnityEngine; using System.Collections; public class LoadGameState : BaseGameState { public override void Enter () { base.Enter (); GameStateLabel.text = "Waiting For Players"; LocalPlayerLabel.text = ""; RemotePlayerLabel.text = ""; } public override void Exit () { base.Exit (); LocalPlayer.score = 0; RemotePlayer.score = 0; RefreshPlayerLabels(); } } When this state enters, I update the GameStateLabel with an informative message so that users will realize why they are waiting. I clear the player labels since they wont have been assigned a mark yet. When the state exits, it will be because the players have been assigned their marks and the game has begun. I can go ahead and reset their scores and call the RefreshPlayerLabels method in the base class to update the player labels. Active Game State Create another script named ActiveGameState in the same folder as the last one. Replace the default code with the following: using UnityEngine; using System.Collections; public class ActiveGameState : BaseGameState { public override void Enter () { base.Enter (); GameStateLabel.text = "Your Turn!"; RefreshPlayerLabels(); } protected override void AddListeners () { base.AddListeners (); this.AddObserver(OnBoardSquareClicked, Board.SquareClickedNotification); } protected override void RemoveListeners () { base.RemoveListeners (); this.RemoveObserver(OnBoardSquareClicked, Board.SquareClickedNotification); } void OnBoardSquareClicked (object sender, object args) { LocalPlayer.CmdMarkSquare((int)args); } } Like before I provide a handy message when this state enters so that a user will realize it is time for them to make a move. We use the Add and Remove Listeners methods defined in the base State class in order to listen to the board square clicked notification. The handler method triggers an attempted move for the game. Note that we wont have to worry about the player entering moves on the opponents turn because this handler method will be unregistered when the “Active” state exits. Passive Game State Create another script named PassiveGameState in the same folder as the last one. Replace the default code with the following: using UnityEngine; using System.Collections; public class PassiveGameState : BaseGameState { public override void Enter () { base.Enter (); GameStateLabel.text = "Opponent's Turn!"; RefreshPlayerLabels(); } } All we need to do here is update the labels so the player knows it isn’t their turn yet. I could register to listen for the board clicked events and use that opportunity to play some sort of sound effect to reinforce that its not time that player’s turn. Doing nothing is fine for now. End Game State Create another script named EndGameState in the same folder as the last one. Replace the default code with the following: using UnityEngine; using System.Collections; using TicTacToe; public class EndGameState : BaseGameState { public override void Enter () { base.Enter (); if (Game.winner == Mark.None) { GameStateLabel.text = "Tie Game!"; } else if (Game.winner == LocalPlayer.mark) { GameStateLabel.text = "You Win!"; LocalPlayer.score++; } else { GameStateLabel.text = "You Lose!"; RemotePlayer.score++; } RefreshPlayerLabels(); if (!LocalPlayer.isServer) StartCoroutine(Restart()); } IEnumerator Restart () { yield return new WaitForSeconds(5); LocalPlayer.CmdCoinToss(); } } This “Enter” method looks a little more complicated, but really I am simply showing a message relevant to the result of the game. Also, I increment the score of whichever player won. I chose a player which would be unique among the two game instances to serve as a marker so that I would only trigger a new game one time. Only the host game instance will run the “Restart” method. Summary In this lesson we made great use of a Finite State Machine to help group the abstract concept of a game state into logical chunks. These states help simplify what would otherwise probably have been implemented as relatively complicated branches of “if” statements and organize them into easily readable code. In other words, we showed things like how to only allow game input when it is your turn, and how to make sure that help labels are showing something appropriate. We have completed a networked multiplayer turn based game, but it’s only the first step on a long journey. There is a lot more to learn, such as implementing your own HUD to drive the Network Manager. Don’t forget about managing other events such as what happens if a player gets disconnected from a match, is it possible to rejoin? I am hoping to see more examples and tutorials by Unity on these types of topics, but if anyone finds other tutorials please share. Also, as I mentioned before, I have very little experience with Networking and no experience with Unity’s Networking outside of this first attempt, so if anyone has any suggestions or critiques I would love to hear them! Don’t forget that if you get stuck on something, you can always check the repository for a working version here. 20 thoughts on “Turn Based Multiplayer – Part 5” Can you give me the gitHub limk for this game? Sure thing: After I completed the tutorial, I can no longer click the board to add a piece of the game. Any possibility you know why off-hand? Also this: NullReferenceException: Object reference not set to an instance of an object GameController.CheckState () (at Assets/Scripts/Controller/GameController.cs:93) GameController.Awake () (at Assets/Scripts/Controller/GameController.cs:20) The error shows that there is a null reference found on line 93 of the Game Controller script. Did you remember to connect the reference to the “Match Controller” in the inspector? The game only accepts input when it is that player’s turn. Did you try making a move on the other game instance? Hello! I’ve completed this project following every step and I got one question for you. How can I make the Board look different on each client? I want for example to simulate that the board is between both players so that they are in front of each other. The handling of the clicks is easy, but I don’t know how to change the scene on each client. Thanks in advance. Javi. You could try something when a match begins such as: if (!LocalPlayer.isServer) { // TODO: Rotate Camera 180 degrees } I just downloaded and tried your game. Thank you very much. this was exactly what i was looking for. And your tutorial gave me a very clear image about how networking works. Glad to have helped! Thanks so much for the tutorial, it’s fun, clear and easy to follow. I apologize if this is a obvious question but I’m very new to networking: how can I set up the game to actually run on separate machines? You’re welcome, I’m glad you enjoyed it. The short answer is that Unity sets up the code to allow the game to run on separate machines. The code that worked on the Simulator vs Executable is the same code that will run on distributed executables. How you handle matchmaking is another topic (multiplayer lobbies etc) that I haven’t dabbled in, but hopefully the Unity manual can give you a general idea. Great, I’ll look into it. Thanks! Hi, I have another question. I’d like to adapt the game to be asynchronous, as in the two players don’t have to both be connected to the server simultaneously, but instead could take their turns at different times. Basically like Words with Friends. How can I go about doing that? Thanks again 🙂 I am not sure if Unity is designed to handle turn based network games like this. It might be, but I have no experience with it, and the only examples I saw were real-time games. Because of a lack of experience and (IMHO) poor documentation on Unity’s side, I would probably resort to another option depending on my target platform. For example if I were going mobile I could use “Google Play Games Services” (Android) and/or GameKit (iPhone). I might also try things like Firebase or Photon, there are actually a ton of options outside of Unity if you look for them. Unfortunately I have almost no experience with any of those either. Thanks for your great tutorial.Really appreciate it. I planing to modify the game a little bit . Say, if i wanted to have a separate camera for each player and and their rotation will be 0 and 180 deg along the ‘y’ axis (like a card card game where each player sit opposite to each other).I already achieved this by making the camera as a prefab of the player and by using ‘Network Start Position’ component for spawn pos. The problem I have now is that the click hit position is always flipped for one player.Can you give me solution for that.. Thanks again for the great tutorial.. It sounds like you have modified things, so I don’t know how much will apply to your configuration. The code in my project is using the world position from a raycast, which means that clicking the same place on a surface would result in the same coordinate regardless of your camera’s position and rotation. This is also true even if you moved or rotated the board itself. If you wanted to work in a space local to the board there are other methods that might help you such as “Transform.InverseTransformPoint” which can convert a point from world space to local space. This is really great. How would you integrate the default HLAPI Network lobby with this example? Thanks so much for this great demo project. I have a somewhat successful boardgame app out and users have been clamoring for on-line play for ages. This is exactly the type of intro I was looking for to dip my toes in. It’s hard to find examples for turn based games. Glad to have helped! If you end up implementing this for the full game, I would love to know how it goes. Also, share a link for your game 🙂
http://theliquidfire.com/2016/05/05/turn-based-multiplayer-part-5/?replytocom=1571
CC-MAIN-2020-24
refinedweb
2,772
56.15
This is an excerpt from the Scala Cookbook (partially modified for the internet). This is Recipe 2.1, “Parsing a Number from a String.” Problem You want to convert a String to one of Scala’s numeric types ( Byte, Double, Int, Float, Long, Short). Solution Use the to* methods that are available on a String (courtesy of the StringLike trait): scala> "100".toInt res0: Int = 100 scala> "100".toDouble res1: Double = 100.0 scala> "100".toFloat res2: Float = 100.0 scala> "1".toLong res3: Long = 1 scala> "1".toShort res4: Short = 1 scala> "1".toByte res5: Byte = 1 Be careful because these methods can throw the usual Java NumberFormatException: scala> "foo".toInt java.lang.NumberFormatException: For input string: "foo" at java.lang.NumberFormatException.forInputString(NumberFormatException.java:48) at java.lang.Integer.parseInt(Integer.java:449) ... more output here ... BigInt and BigDecimal instances can also be created directly from strings (and can also throw a NumberFormatException): scala> val b = BigInt("1") b: scala.math.BigInt = 1 scala> val b = BigDecimal("3.14159") b: scala.math.BigDecimal = 3.14159 Handling a base and radix If you need to perform calculations using bases other than 10, you’ll find the toInt method in the Scala Int class doesn’t have a method that lets you pass in a base and radix. To solve this problem, use the parseInt method in the java.lang.Integer class, as shown in these examples: scala> Integer.parseInt("1", 2) res0: Int = 1 scala> Integer.parseInt("10", 2) res1: Int = 2 scala> Integer.parseInt("100", 2) res2: Int = 4 scala> Integer.parseInt("1", 8) res3: Int = 1 scala> Integer.parseInt("10", 8) res4: Int = 8 If you’re a fan of implicit conversions, you can create an implicit class and method to help solve the problem. As described in Recipe 1.11, “Add Your Own Methods to the String Class,” create the implicit conversion as follows: implicit class StringToInt(s: String) { def toInt(radix: Int) = Integer.parseInt(s, radix) } Defining this implicit class (and bringing it into scope) adds a toInt method that takes a radix argument to the String class, which you can now call instead of calling Integer.parseInt: scala> implicit class StringToInt(s: String) { | def toInt(radix: Int) = Integer.parseInt(s, radix) | } defined class StringToInt scala> "1".toInt(2) res0: Int = 1 scala> "10".toInt(2) res1: Int = 2 scala> "100".toInt(2) res2: Int = 4 scala> "100".toInt(8) res3: Int = 64 scala> "100".toInt(16) res4: Int = 256 See Recipe 1.11 for more details on how to implement this solution outside of the REPL. Discussion If you’ve used Java to convert a String to a numeric data type, the NumberFormatException is familiar. However, Scala doesn’t have checked exceptions, so you’ll probably want to handle this situation differently. First, you don’t have to declare that Scala methods can throw an exception, so it’s perfectly legal to declare a Scala method like this: // not required to declare "throws NumberFormatException" def toInt(s: String) = s.toInt If you’re going to allow an exception to be thrown like this, callers of your method might appreciate knowing that this can happen. Consider adding a Scaladoc comment to your method in this case. If you prefer to declare that your method can throw an exception, mark it with the @throws annotation, as shown here: @throws(classOf[NumberFormatException]) def toInt(s: String) = s.toInt This approach is required if the method will be called from Java code, as described in Recipe 19.2., “Add Exception Annotations to Scala Methods to Work with Java.” However, in Scala, situations like this are often handled with the “Option/Some/None” pattern, as described in Recipe 20.6. With this approach, define the toInt method like this: def toInt(s: String):Option[Int] = { try { Some(s.toInt) } catch { case e: NumberFormatException => None } } Now you can call the toInt method in several different ways, depending on your needs. The preferred approach is to use a match expression. You can write a match expression to print the toInt result like this: toInt(aString) match { case Some(n) => println(n) case None => println("Boom! That wasn't a number.") } You can also write it as follows to assign the result to a variable: val result = toInt(aString) match { case Some(x) => x case None => 0 // however you want to handle this } If these examples haven’t yet sold you on the Option/Some/None approach, you’ll see in the Collections chapter that this pattern is incredibly helpful and convenient when working with collections. You can also use getOrElse: println(toInt("1").getOrElse(0)) // 1 println(toInt("a").getOrElse(0)) // 0 // assign the result to x val x = toInt(aString).getOrElse(0) Alternatives to Scala’s Option If you like the Option/Some/None concept, but need access to the exception information, there are several additional possibilities: - Try, Success, and Failure (introduced in Scala 2.10) - Either, Left, and Right These alternate approaches are discussed in Recipe 20.6, “Using the Option/Some/None Pattern.” (The new Try/Success/Failure approach is especially appealing.) See Also - Recipe 20.6, “Using the Option/Some/None Pattern” - The Scala StringLike trait The Scala Cookbook This tutorial is sponsored by the Scala Cookbook, which I wrote for O’Reilly: You can find the Scala Cookbook at these locations:
http://alvinalexander.com/index.php/scala/how-parse-number-from-string-int-long-float-double
CC-MAIN-2019-51
refinedweb
895
67.35
RE: To Troll or Not To Troll - From: "Warren DeLano" <warren@xxxxxxxxxx> - Date: Thu, 4 Dec 2008 12:11:15 -0800 I still would have to call your management of the problem considerably into question - your expertise at writing mathematical software may not be in question, but your skills and producing and managing a software product are. You have nobody at your organization, which sells a product that relies on Python, who follows python-dev? Or who even reads the changelogs for new python versions? You should have known about the "as" keyword change *over a year ago*, even if the import bug was masking the deprecation warning. Everything else aside, I can't get past that issue with your complaints. I *have* gone back now and read all the posts in all the threads and I still have not seen a single post from you even hinting that you might have any responsibility in the matter. Well then, let me set the record straight on that one point: I admit that it was entirely my mistake (and mine alone) to implicitly assume, by adopting such a logging & persistence architecture (dating back to 1.5.2, mind you!), that new keywords would not be introduced into the Python language so as to potentially break all existing Python code. Silly me! How unreasonable. . - Prev by Date: Checking if an int fits in 32 bits? - Next by Date: Re: "as" keyword woes - Previous by thread: Re: To Troll or Not To Troll - Next by thread: Re: To Troll or Not To Troll - Index(es):
http://coding.derkeiler.com/Archive/Python/comp.lang.python/2008-12/msg00543.html
CC-MAIN-2014-52
refinedweb
262
64.95
$ cnpm install rollup-endpoint Easily serve a JavaScript bundle – bundled with rollup.js – from an express.js endpoint. No grunt/gulp, no build files, no required configuration – just pure data. $ npm install rollup-endpoint --save Assuming you have the following directory structure: client/ └── main.js server.js package.json Then you can write the following as your server.js: // server.js var rollup = require('rollup-endpoint'); var app = require('express')(); app.get('/assets/app-bundle.js', rollup.serve({ entry: __dirname + '/client/main.js' })); console.log("Listening on port 5555..."); app.listen(5555); Then run node server.js. Now any GET request to localhost:5555/assets/app-bundle.js will compile and rollup the JS file located at ./client/main.js. Any import statements within main.js will be included in the final output, too. rollup-endpoint passes all your options along to rollup itself, so you can specify any option as described in the rollup JavaScript API. When the NODE_ENV environment variable is set to production, rollup-endpoint will automatically cache and gzip your bundle output. Plugins are configured in the same way as rollup's JavaScript API. Here's a useful example. In production, you might want to transpile your code to ES5, as well as minify it. However, you probably don't want waste CPU cycles doing the same in development. Here's how you can do that: var rollupOptions = { entry: 'my-file.js' }; if ( process.env.NODE_ENV === 'production' ) { rollupOptions.plugins = [ require('rollup-plugin-buble')(), require('rollup-plugin-uglify')(), ] } app.get('/app-bundle.js', rollup.serve(rollupOptions)) If you need to configure the rollup generate options, you can pass them as generateOptions: app.get('/assets/app-bundle.js', rollup.serve({ entry: __dirname + '/client/main.js', generateOptions: { format: 'amd', sourceMap: true, // defaults to `false` in production } }));
https://developer.aliyun.com/mirror/npm/package/rollup-endpoint
CC-MAIN-2020-24
refinedweb
297
53.98
<p>TL;DR Looking for a way too share state over cluster without database </p> <p>Sup. Im trying to build a highload service (its kinda pet ) and it should take about 300-500 requests per second. Its should keep in RAM about 500-700Mb stack of text lines (in bytes i guess), return one line by request and iterate to a next one. (gonna add namespaces and, interface and some other funcs, but its not very important here). I guess valyala/fasthttp could give me enough speed for one instance too keep it, but I have about 8 windows servers behind load balancer and able too use it. I think its an overkill to use big database like potgre or mysql too save line number so I very interested (not even for this project but for an my erudition) is there good way to keep state over cluster. I have only one idea. Connect incenses to a chain and send changes to a neighbors. But it seems not very reactive. Also I can request state for each iteration from neighbor and take newest but generates overhead. </p> <p>And yeah, I know, If I gonna build cluster from scratch I should handle concurentions and locks. </p> <hr/>**评论:**<br/><br/>justinisrael: <pre><p>Its not clear to me what you consider to be a database or not in this case. Are you against any type of broker service storing the data for your cluster? Are you just against a formal database like postgres? You could use something like Kafka as an event source that keeps your cluster in sync. Or something like redis if you need to treat the data like a queue. Nats.io can provide queue semantics if you have 8 servers that need to work on FIFO data. </p></pre>recurrency: <pre><p>I'm a huge advocate of the Kappa architecture: <a href="" rel="nofollow"></a>. Tl;dr: Just use Kafka as a WAL for all your state.</p></pre>jerf: <pre><p>The right answer depends on a lot of parameters you don't give here, like how important it is for it to never give out identical lines, how distributed you need this to be, how catastrophic it is if you lose one node, etc. </p> <p>If it doesn't matter to you much what is going on, and you don't mind manual intervention on node failure, a simple socket on a centralized server that hands out numbers on demand will work just fine. But those are big "ifs". If you need clusterability and automatic failover and so on and so on, you end up still coming out ahead to just use a database or something, because that will still be easier than writing something <em>correct</em> with all those properties.</p> <p>Edit: There are servers that have as a feature this "counter" ability... Keep an eye out for those, and <em>read the fine print</em> on exactly what they provide.</p></pre>user3961: <pre><p>Etcd</p></pre>cittatva: <pre><p>Raft</p></pre>014a: <pre><p>Keep state in the client. Send an offset or something with every request.</p> <p>If that's not an option: </p> <p>Go isn't really designed to be like Erlang/Elixir, where you can do IPC between instances of your program to communicate that state. For better or worse. There might be ways to set up some sort of consensus algorithm between your apps, but its not worth it.</p> <p>Just use a database. Etcd is probably the easiest one to set up and its blazing fast. Consul or Redis would also work.</p></pre>
https://studygolang.com/resources/13654
CC-MAIN-2022-33
refinedweb
618
71.34
Downloading?. - Send the HTTP request and get the HTTP response from the web server. - Save the contents in the HTTP response to a local file. Construct the HTTP get request The following code segment prepares an instance of the System.Net.HttpWebRequest class to download my website logo. // Construct HTTP request to get the logo HttpWebRequest httpRequest = (HttpWebRequest) WebRequest.Create(""); httpRequest.Method = WebRequestMethods.Http.Get; Send the HTTP request and get the HTTP response from the web server. // Get back the HTTP response for web server HttpWebResponse httpResponse = (HttpWebResponse)httpRequest.GetResponse(); Stream httpResponseStream = httpResponse.GetResponseStream(); Save the file to your local disk // Define buffer and buffer size int bufferSize = 1024; byte[] buffer = new byte[bufferSize]; int bytesRead = 0; // Read from response and write to file FileStream fileStream = File.Create("techcoil-logo.png"); while ((bytesRead = httpResponseStream.Read(buffer, 0, bufferSize)) != 0) { fileStream.Write(buffer, 0, bytesRead); } // end while Download a file via HTTP post Sometimes, there is a need for the client to supply some information to the web server in order to download a file. This can be a case when we want to control how files are downloaded. We can refer to every file in our web server with a unique id and write a server script to serve the respective file based on the id received from the client. For the sake of demonstration, I had published a server script, "/poc/downloadPng.php", that will read a post variable labelled as id. If id is 1, it will write my website logo to the client, else it will write my 404 icon image to the client. As with HTTP get, downloading of a file from the web server via HTTP post in C# consists of three main steps: - Construct the HTTP post request to send to the web server. - Send the HTTP request and get the HTTP response from the web server. - Save the contents in the HTTP response to a local file. Since step 2 and 3 are identical, I will just discuss step 1. Construct the HTTP post request to send to the web server. // Construct HTTP request to get the file HttpWebRequest httpRequest = (HttpWebRequest) WebRequest.Create(""); httpRequest.Method = WebRequestMethods.Http.Post; // Include post data in the HTTP request string postData = "id=1"; httpRequest.ContentLength = postData.Length; httpRequest.ContentType = "application/x-www-form-urlencoded"; // Write the post data to the HTTP request StreamWriter requestWriter = new StreamWriter( httpRequest.GetRequestStream(), System.Text.Encoding.ASCII); requestWriter.Write(postData); requestWriter.Close(); Namespaces to include The following namespaces need to be included in order for the above mentioned codes to compile. Related posts - Handling web server communication feedback with System.Net.WebException in C# - Sending a file and some form data via HTTP post in C# - Uploading large HTTP multipart request with System.Net.HttpWebRequest in C# - How to build a web based user interaction layer in C# - How to send HTTP post requests and HTTP get requests using jQuery - PHP codes to tell browsers to open the download dialog box for users to download a file
https://www.techcoil.com/blog/downloading-a-file-from-via-http-post-and-http-get-in-c/
CC-MAIN-2017-17
refinedweb
506
55.24
Opening Filesystems¶ Generally, when you want to work with the files and directories of any of the supported filesystems, you create an instance of the appropriate class. For example, the following opens the directory /foo/bar: from fs.osfs import OSFS my_fs = OSFS('/foo/bar') This is fine if you know beforehand where the directory you want to work with is located, and on what medium. However, there are occasions where the location of the files may change at runtime or should be specified in a config file or from the command line. In these situations you can use an opener, which is a generic way of specifying a filesystem. For example, the following is equivalent to the code above: from fs.opener import fsopendir my_fs = fsopendir('/foo/bar') The fsopendir callable takes a string that identifies the filesystem with a URI syntax, but if called with a regular path will return an OSFS instance. To open a different kind of filesystem, precede the path with the required protocol. For example, the following code opens an FTP filesystem rather than a directory on your hard-drive: from fs.opener import fsopendir my_fs = fsopendir('') For further information regarding filesystem openers see fs.opener.
http://pyfilesystem.readthedocs.io/en/latest/opening.html
CC-MAIN-2017-47
refinedweb
202
60.45
What is the best Run Time you people are getting. I got a run time of 532 ms with the following solution in JAVA. Does anyone have a more optimized solution in java or any other language. I'd love to see it. Thanks! public class Solution { public int evalRPN(String[] tokens) { Stack<Integer> operands = new Stack<Integer>(); int answer = 0; String operators = "+-*/"; for(String op : tokens){ int index = operators.indexOf(op); if(index == -1){ operands.push(Integer.valueOf(op)); } else{ if(operands.size() < 2){ return 0; //raise exception? } int op2 = operands.pop(); int op1 = operands.pop(); switch (index){ case 0: answer = op1 + op2; break; case 1: answer = op1 - op2; break; case 2: answer = op1 * op2; break; case 3: if(op2 == 0){ return 0; } answer = op1 / op2; break; default: return 0; } operands.push(answer); } } return operands.pop(); } } My CPP solution which is very similar to yours takes 12ms on OJ. class Solution { public: int evalRPN(vector<string> &tokens) { stack<int> s; for (unsigned i = 0; i < tokens.size(); ++i) { { s.push(atoi(tokens[i].c_str())); } } return s.top(); } }; Wow, thanks for sharing. That's a huge difference. Any thoughts on why the difference, or how my java code can be optimized? One of the reasons is obviously that Stack in java uses Integer object instead of primitive int, so theres the overhead of Boxing and Unboxing involved. But that still doesn't explain the big difference. I have tried three implementations: - with a Stack (524ms) - with an array simulating a stack and avoid boxing(544ms) - with a HashMap (488ms) I think your have the same complexity. Slowness is due to overhead somewhere. Thanks, this is really interesting. I mean the difference is just too big to not care. my code in cpp is very similar to that of bigOh. i have just avoided the redundant statements of popping, and used switch case instead. my code takes 56 ms. class Solution { public: int evalRPN(vector<string> &tokens) { stack<int> S; for(int i = 0; i<tokens.size();++i) { if(tokens[i] == "+" || tokens[i] == "-" || tokens[i] == "*" || tokens[i] == "/" ) { int b = S.top(); S.pop(); int a = S.top(); S.pop(); char c = *(tokens[i].c_str()); // dereferencing the pointer to string to get char value. switch(c) { case '+' : S.push(a + b); break; case '-' : S.push(a - b); break; case '*' : S.push(a * b); break; case '/' : S.push(a / b); break; } } else S.push(atoi(tokens[i].c_str())); } return S.top(); } }; Looks like your connection to LeetCode Discuss was lost, please wait while we try to reconnect.
https://discuss.leetcode.com/topic/2868/best-run-time-for-reverse-polish-notation
CC-MAIN-2017-43
refinedweb
424
70.8
Search for a block special device in the filesystem table (/etc/fstab) file #include <fstab.h> struct fstab * getfsspec(const char *spec); libc Use the -l c option to qcc to link against this library. This library is usually included automatically. The getfsfile() function searches the filesystem table (/etc/fstab) for an entry for the specified block special device. A pointer to the fstab structure for the block special device (see the entry for getfsent()), or NULL if the entry couldn't be found. NetBSD The functions that work with /etc/fstab use static data storage; if you need the data for future use, copy it before any subsequent calls overwrite it. endfsent(), getfsent(), getfsfile(), mount(), setfsent() /etc/fstab in the Utilities Reference
https://www.qnx.com/developers/docs/6.4.1/neutrino/lib_ref/g/getfsspec.html
CC-MAIN-2018-13
refinedweb
123
63.09
Python Certification Training for Data Scienc ... - 48k Enrolled Learners - Weekend/Weekday - Live Class Python programming language is one of the most popular language nowadays. It has numerous applications and developers are switching over to python for the implementation it provides us with. The modular programming approach where the code is broken down into separate parts is where python modules comes into picture. This article will help you understand the above topic in detail. Following are the topics that will be covered in this blog: Modules are simply a ‘program logic’ or a ‘python script’ that can be used for variety of applications or functions. We can declare functions, classes etc in a module. The focus is to break down the code into different modules so that there will be no or minimum dependencies on one another. Using modules in a code helps to write lesser line of codes, a single procedure developed for reuse of the code as well. It also eliminates the need to write the same logic again and again. One more advantage of using modules is that the programs can be designed easily, since a whole team works only on a part or module of the entire code. Lets try to understand this with an example: Suppose you want to make a program for a calculator. There will be operations like addition, subtraction, multiplication, division etc. We will break the code into separate parts, we can simply create one module for all these operations or separate modules for each of the operations. And then we can call these modules in our main program logic. The idea is to minimize the code, and if we create modules, it doesn’t mean we can only use it for this program, we can even call these modules for other programs as well. Now that we have understood the concept of modules, lets try to understand how we can create a module in python. Creating a module in python is similar to writing a simple python script using the .py extension. For the above example lets try to make a module for the various operations. def add(x,y): return x + y def sub(x, y): return x - y def prod(x, y): return x * y def div(x, y): return x / y Save the above code in a file Calc.py. This is how we create a module in python. We have created different functions in this module. We can use these modules in our main file, lets take a look at how we are going to use them in a program. We will use the import keyword to incorporate the module into our program, from keyword is used to get only a few or specific methods or functions from a module. Lets see what are different methods to use a module in your program. Lets say we have our file with a name main.py. import calc as a a = 10 b = 20 addition = a.add(a,b) print(addition) In the above code, we have created an alias using the as keyword. The output of the above code will be the addition of the two numbers a and b using the logic specified in the add function in the calc.py module. Lets take a look at another approach. from calc import * a = 20 b = 30 print(add(a,b)) In the above code, we have imported all the functions using the asterisk and we can simply mention the function name to get the results. When we import a module, the interpreter looks for the module in the build-in modules directories in sys.path and if not found, it will look for the module in the following order: import sys print(sys.path) When you run the above code, you will get the list of directories. You can make changes in the list to create your own path. Built-in modules are written in C and integrated with python interpreter. Each built-in module contains resources for certain specific functionalities like Operating system management, disk input/output etc. The standard library also has many python scripts containing useful utilities. There are several built-in modules in python at our disposal that we can use whenever we want. To get the list of all the modules in python, you can write the following command in the python console. help('modules') You will get a list of all the modules in python. Below are a few modules in python. data-src=/blog/content/ver.1556540029/uploads/2019/05/2019-05-10-17_16_33-Window.png dir( ) Built-in Function It returns a sorted list of strings containing the names defined in a module. The list contains the names of all the variables, functions, classes etc. import calc print(dir(calc)) You will get the list output like this: data-src=/blog/content/ver.1556540029/uploads/2019/05/2019-05-10-17_20_44-Window-1.png Similarly, you can get the names defined in any module using the dir( ) function. In this blog, we have learnt about modules in python, how we can create a module and use it in the program. We have also learnt about the built in modules in python. Python programming language has enormous applications and with the use of modules, the task becomes easier, maintainable and efficient. If you wish to master your skills in python programming language you can enroll for the python certification course to kick-start your learning and become a python developer. If you have any questions? mention them in the comments, we will get back to you.
https://www.edureka.co/blog/python-modules/
CC-MAIN-2019-39
refinedweb
935
62.88
Mission1_Blink As you get a new board, if you don't have some previous knowledge, you might not be able to get it to work out of the box. It is so discouraging. So this first project would like to get everyone started with electronic stuff and the Swift language. You will start with the hello world project - blink the LED. You will make the LED on and off alternatively to get it to blink. Let's break all stuff down to see how it works. What you need - SwiftIO board You can notice there is an onboard LED (marked with a red box above). You will only deal with it in this project, and there is no need for other components. Circuit Just connect the SwiftIO board to your computer through the download port using a USB cable. There are two ports on the board. The one beside the golden ring is the download port. Example code // Import the SwiftIO library to use everything in it. import SwiftIO // Import the board library to use the Id of the specific board. import MadBoard // Initialize the blue LED let led = DigitalOut(Id.BLUE) // The code here will run all the time. while true { // Set Blue LED off. led.write(true) // Interval of LED blink (milliseconds). sleep(ms: 1000) // Set Blue LED on. led.write(false) sleep(ms: 1000) } Background Digital signal The digital signal usually has two states. Its value is either 1 or 0. For the SwiftIO board, 1 represents 3.3V, and 0 represents 0V. There are also other ways to express the same meaning: high or low, true or false. In this project, can only flow in one direction, from positive to negative. You need to connect the positive leg to the current source. Only when you connect it in the right direction, the current can flow. There are two ways to connect the LED: - Connect the LED to the power and a digital pin. Since the current always flows from high to low voltage, if the pin outputs a high voltage, there is no voltage difference between the two ends of the LED, so the LED is off. When the pin outputs a low voltage, the current can flow from the power to that pin, and the LED will be on. This is how the onboard LED works. - Connect the LED to the digital pin and ground. If the pin outputs a high voltage, the current flows from that pin to the ground, and the LED will be on. If it outputs a low voltage, the LED is off. You can find an RGB LED on your board. It is a different type from the images above for easier soldering. It has three colors: red, green and blue. As you download the code, it serves as a status indicator. Besides, you can also control its color and state by setting the output voltage. You can light any, so it will appear red, green, or blue. You can also turn on two of them. If you turn on red and blue, you can notice it appears magenta. If all three are on, the LED seems to be white. The onboard LED is connected to 3.3V internally. If you set it to high voltage, there will be no current. So it will be lighted when you apply low voltage. Code analysis Let's look into the code in detail: import SwiftIO import MadBoard SwiftIO consists of all the functionalities to control your board. All programs must first reference it so you can use everything in it, like classes and functions. SwiftIOBoard defines the corresponding pin id of the SwiftIO board. The pins of different boards are different. So this library tells the IDE you are dealing with the SwiftIO board, not any others. Then you can use the id in it. let led = DigitalOut(Id.BLUE) Before you set a specific pin, you need to initialize it. - First, declare a constant: use the keyword letfollowed by a constant name led. - Then make it an instance of DigitalOutclass and initialize that pin. - To initialize the pin, you need to indicate its id. All ids are in an enumeration, and the built-in RGB LEDs use the id RED, GREEN, or BLUE. Thus the id of blue LED here is written as Id.BLUEusing dot syntax. while true { led.write(true) sleep(ms: 1000) led.write(false) sleep(ms: 1000) } In the dead loop while true, all code in the brackets will run over and over again unless you power off the board. The method write(_:) is used to set the pin to output high or low voltage. Its parameter is a boolean type: true corresponds to a high level, and false corresponds to a low level. And as mentioned above, you need to set a low voltage to turn on the LED. The function sleep(ms:) will stop the microcontroller's work for a specified period. It needs a period in milliseconds as its parameter. It is a global function in the library, so you can directly use it in your code. So the code above makes LED alternate between off and on every second. Reference DigitalOut - set whether the pin output a high or low voltage. sleep(ms:) - suspend the microcontroller's work and thus make the current state last for a specified time, measured in milliseconds. SwiftIOBoard - find the corresponding pin id of SwiftIO board.
https://docs.madmachine.io/tutorials/swiftio-maker-kit/mission1
CC-MAIN-2022-21
refinedweb
909
84.47
Append /start to a ping URL and use it to signal when a job starts. After receiving a start signal, Healthchecks.io will show the check as "Started." It will store the "start" events and display the job execution times. Healthchecks.io calculates the job execution times as the time gaps between adjacent "start" and "success" events. Healthchecks.io applies an additional alerting rule for jobs that use the /start signal. If a job sends a "start" signal, but then does not send a "success" signal within its configured grace time, Healthchecks.io will assume the job has failed. It will mark the job as "down" and send out alerts. Below is a code example in Python: import requests URL = "" # "/start" kicks off a timer: if the job takes longer than # the configured grace time, the check will be marked as "down" try: requests.get(URL + "/start", timeout=5) except requests.exceptions.RequestException: # If the network request fails for any reason, we don't want # it to prevent the main job from running pass # TODO: run the job here fib = lambda n: n if n < 2 else fib(n - 1) + fib(n - 2) print("F(42) = %d" % fib(42)) # Signal success: requests.get(URL) When Healthchecks.io receives a "start" signal followed by a regular ping or a "fail" signal, and the two events are less than 24 hours apart, you will see the time delta displayed in the list of checks. If the two events are more than 24 hours apart, they are assumed to be unrelated, and the time delta is not displayed. You can also see durations of the previous runs when viewing an individual check:
https://healthchecks.io/docs/measuring_script_run_time/
CC-MAIN-2021-43
refinedweb
279
71.85
◕ A new product. Popular Google Pages: This article is regarding How to Compile and run a Java Program? Last updated on: 15th January 2017. ◕ How to Compile and run a Java Program? - Java source code is always stored in a file with the extension .java. Generally we use text editor like notepad or notepad++ to write our source code. For an example, we can store our source code in a file named as myJavaprogram.java. After we create a source code file for our program, first we need to compile our source code using the java compiler. Then we can run that Java Program. Java compiler comes by-default with the JDK ( Java Development Kit ). ◕ To compile and run the myJavaprogram.java file please follow the following process: - Make a folder in the c:\ drive and give it a name. Say we named the folder as test. - Put the myJavaprogram.java file in this folder. And put the following Java code on it and save it. public class apples{ public static void main (String args[]){ system.out.println("I love APPLE!"); } } - Go to run box ( Window button + R ) - Type cmd and press enter. - Console window will open - Type cd\ and press enter. This will bring us on the C:\ drive. - Type dir and press enter. This will give us the all the directory list of the C:\ drive. - Type cd test and press enter. This means, we want to Chang Directory to test. In simple, we want to go on the test folder. - Type dir and press enter. Again this will open all the file names which are there on the test folder. - Type javac myJavaprogram.java and press enter. This will compile the Java file myJavaprogram.java A new compiled file will appear with the name of the class (here it is apples) with an extension of .class. So here the new compiled file will be apples.class. - Type java apples and press enter. We will get the result I love APPLE! Congrats! You have run your 1st Java Program successfully. Please note: Now-a-days we can test Java programs through different testing tools. But for learning the above process is the best.
http://riyabutu.com/java-articles/compile-run-java-program.php
CC-MAIN-2017-51
refinedweb
365
78.96
Problem with MouseArea in delegate Hi everybody! I have a repeater which inserts rectangles in a GridLayout (see below), each rectangle holding a MouseArea. I want to catch the following mouse events: 1.) The user clicks one of the rectangles and keeps the mouse button pressed. --> catch onPressed 2.) The user moves the mouse around, still pressing the button --> catch onEntered from every rectangle the mouse pointer enters 3.) The user releases the mouse button --> catch onReleased Somehow I am not able to implement this. If I set mouse.accepted = false in my onPressed-handler, I get the following onEntered-events, but never a onReleased. If I do mouse.accepted = true in onPressed, I get a final onRelaesed-event, but never a onEntered. I tried to put another MouseArea in front or behind the GridLayout, but nothing seems to work. Google was no help, either. Could anybody point me in the right direction? Thanks P.S.: My code: GridLayout { id: fieldGrid anchors.fill:parent //... Repeater { id: repeater model: game.board // My delegate Rectangle { id: del // coord comes from model Layout.column: coord.x Layout.row: coord.y Layout.fillWidth: true Layout.fillHeight: true //... MouseArea { id: delMouseArea anchors.fill: parent acceptedButtons: Qt.LeftButton hoverEnabled: true onPressed: { console.log("onPressed") mouse.accepted = false; } onReleased: { console.log("onReleased") } onEntered: { console.log("onEntered") } } } } } Hi @nnnnn and Welcome, Putting GridLayoutinside MouseAreashould work. In that case add onReleasedevent handler for that MouseArea. Now once this event is triggered you just have to find the child of GridLayoutunder that mouse location. Like this: MouseArea { anchors.fill:parent onReleased: console.log(fieldGrid.childAt(mouseX,mouseY).objectName) GridLayout { id: fieldGrid ... ... This will print the object name of the Rectangleunder that mouse location. The problem why we need to do this is because that event do not work if area moves under the mouse. This is also working for me: GridView { id: grid clip:true anchors.fill: parent cellWidth: Units.dp(80) cellHeight: Units.dp(80) interactive: false delegate: Rectangle { id: rect width: grid.cellWidth - Units.dp(5) height: grid.cellHeight - Units.dp(5) color: backcolor MouseArea { hoverEnabled: true anchors.fill: parent onPressed: { console.log("pressed") } onReleased: { console.log("released") } onEntered: { console.log("entered"); } } } Component.onCompleted: { for(var i=0;i<36;i++) { grid.model.append({backcolor: "#E0E0E0"}); } } model: ListModel { } } Thanks for your answer! Unfortunately, this did not solve my problem. With an additional MouseArea (as parent of the GridLayout) I can capture the Released-events (form the outer MouseArea), the Pressed-event (from the inner MouseAreas), however I do not see any Entered-events anymore :-( @nnnnn That's strange. Is there any other component present apart from the code that you posted earlier ? I tried your example here - I do not get entered-Events when I move the mouse pointer around while the button is pressed; same problem as in my code. Once again my scenario - to avoid misunderstandings: Go to a cell, press the button, keep button pressed, move the pointer to other cells, release the button. In this case, I would like a pressed-event from the first cell, entered-events from all others, and a released-event from the last one. Seems to be tricky... Cheers Hi! Here is a complete example (which is unfortunately not doing what I want). Anything wrong here? import QtQuick 2.4 import QtQuick.Window 2.2 import QtQuick.Layouts 1.1 Window { visible: true color: "black" MouseArea { id: outer anchors.fill: parent onReleased: { console.log("outer - onReleased") } GridLayout { id: fieldGrid anchors.fill:parent rows: 2 columns: 2 Repeater { id: repeater model: myModel Rectangle { id: del Layout.column: col Layout.row: row Layout.fillWidth: true Layout.fillHeight: true Text { anchors.centerIn: parent text: name } MouseArea { id: inner anchors.fill: parent hoverEnabled: true onPressed: { console.log("inner - onPressed") mouse.accepted = false; } onEntered: { console.log("inner - onEntered") } } } } } } ListModel { id: myModel ListElement { row: 0; col: 0; name: "a"} ListElement { row: 0; col: 1; name: "b"} ListElement { row: 1; col: 0; name: "c"} ListElement { row: 1; col: 1; name: "d"} } } Really? You get Entered-events while you keep the mouse button pressed? Ok, I will check with Qt 5.5 here, I wanted to update anyway. Thanks for your help, I will post how things are going... Hmm, it's not working for me under Qt5.5/Kubuntu 14.04 either. No entered-Events when I move the mouse around while pressing the mouse button. Ah...got your point. The events are not triggered when you press a button, hold this button and move to another button. THAN the events for the "new" entered button are not triggered. I did not know a solution so far... I really find no solution for this using MouseArea. My only hope now is that I can find a workaround using a MultiPointTouchArea, but maybe this is something completely different (have to check to documentation first...) I have a simple solution now, using only one MouseArea on top and looking for onPositionChanged. I don't think it is very beautifuly (I have to call childAt() all the time), but at least it is working. import QtQuick 2.4 import QtQuick.Window 2.2 import QtQuick.Layouts 1.1 Window { visible: true color: "black" GridLayout { id: fieldGrid anchors.fill: parent rows: 2 columns: 2 Repeater { id: repeater model: myModel Rectangle { id: del Layout.column: col Layout.row: row Layout.fillWidth: true Layout.fillHeight: true Text { anchors.centerIn: parent text: name } } } } MouseArea { anchors.fill: parent property point pos: Qt.point(-1, -1) onPressed: { var obj = fieldGrid.childAt(mouse.x, mouse.y) if ( !obj ) return; console.log("onPressed") pos = Qt.point(obj.Layout.row, obj.Layout.column) console.log("newField: " + pos) } onReleased: { console.log("onReleased") } onPositionChanged: { var obj = fieldGrid.childAt(mouse.x, mouse.y) if ( !obj ) return; var newPos = Qt.point(obj.Layout.row, obj.Layout.column) if ( newPos === pos ) return; console.log("newField: " + newPos) pos = newPos; } } ListModel { id: myModel ListElement {row: 0; col: 0; name: "a" } ListElement {row: 0; col: 1; name: "b" } ListElement {row: 1; col: 0; name: "c" } ListElement {row: 1; col: 1; name: "d" } } } @nnnnn Hmm. I overlooked your onEnteredrequirement. It doesnot work in my example. The problem is that once you set mouse.accepted=falsethe release event is not trapped.
https://forum.qt.io/topic/57859/problem-with-mousearea-in-delegate
CC-MAIN-2017-39
refinedweb
1,033
54.29
I read the c14n spec this morning. They only talk about DTD and do not mention schemas at all. This probably considers some clarification from w3c regarding the spec in regards to XML Schema. One could assume that only if a DTDs are present then the rules should be followed or one could try to duplicate the functionality with XML Schema. I don't know which is best or even appropriate. Noah On Thu, 1 Jul 2004 21:26:33 -0700, Eric Vasilik <ericvas@bea.com> wrote: > > Are you saying that c14n says that the canonicalized form for a document > must include defaulted attributes that were specified by an XmlSchema? > That is to say, if an element does not have attribute x, but the > XmlSchema it is bound has a default attribute value for x, the canonical > form must include that attribute? The spec seems to suggest that the > default attrs specified in a DTD must be included, but, that's a parsing > issue for XmlBeans, and is irrelevant when producing the canonical form > for a loaded document in Xmlbeans. > > Right now, the saver does is not influenced by the schema associated > with the instance being saved. > > - Eric > > -----Original Message----- > From: David Waite [mailto:mass@akuma.org] > Sent: Thursday, July 01, 2004 2:48 PM > To: xmlbeans-dev@xml.apache.org > Subject: Re: xmlbeans xml security > > On Jul 1, 2004, at 3:30 PM, Noah Campbell wrote: > > > I'll assume that BEA's impl is not available for general consumption. > > I dunno what BEA's impl looks like :) > > > > > In regards to the current xmlstore, aren't the namespace names > > synthetic anyway? I mean, you don't need to rely on the name except > > for its ability to link an element, etc to a namespace. If someone is > > passing information through the namespace name then this might be > > considered a potential leak if full infoset is preserved. This is > > probably contrieved and sorta silly but it is still something to > > consider. > > In normal XML, sure. When you start canonicalizing xml, namespace > prefixes matter, because qname types require schema awareness on some > level to identify. If your canonicalized form allowed arbitrary > prefixes to be chosen, someone could conceivably change the meaning of > a document by putting a qname value in a different namespace. > > The only real awareness of a schema or dtd used by canonicalization is > expansion of attribute default types. I believe all the 'secure' > messages (ws-security, saml) avoid default namespaces so they don't > encounter this really ugly side-effect. > > The issue is that when you parse in xml, it uses a qname and stores the > namespace URI and the local name, but not the prefix. If you get a > signed document in which declares a namespace both with a prefix and as > the default namespace (perfectly valid) this breakage of infoset > fidelity will cause the canonical form to differ where the saver chose > a different namespace than was originally used, thus you will get a > different hash out, and have very little idea what happened. > > > > > (a silly side channel attack for example) > > > > <element xmlns: > > <thePassphraseIsCheese:passphraseProtectedElement> > > 09832jkfadilafj#$@#rkfdali9fdalksdjf93aldkfja093ajfd > > <thePassphraseIsCheese:passphraseProtectedElement> > > </element> > > > > Yes, fire the developer who did this ;-) > > The issue is that the W3C allowed content to become dependent on > namespaces, as if namespaces weren't tricky enough as is. XPath > expressions and QName values in attributes and text nodes makes the > namespaces important, as a transformation that changes the namespaces > now needs to understand the context of the document which they are > placed on. So while you could make a filter which replaced the prefix > above with 'x0', it would need to know the schema, and would have to > filter _after_ canonicalization and any validation associated (such as > xml-dsig) > > :
http://mail-archives.apache.org/mod_mbox/xml-xmlbeans-dev/200407.mbox/%3Cde70b39d0407021434b375695@mail.gmail.com%3E
CC-MAIN-2015-32
refinedweb
623
57.61
Modules more efficient and reduce redundancy. Modules bind the code and reduce the repetitions of functions frequently used in the code. Thus, it makes the code much clear and easy to understand. Examples: - OS - Time - Math - MatPlotlib Mechanism of Python Modules The moment a module is imported through a program, Python Interpreter fetches the module from either of the following locations: - Program Directory - The directory in the PYTHONPATH variable - Default directory Listing of Modules The list of available modules in Python can be found out by executing the following command in the command prompt (interpreter shell). >>> help(“module”) Importing modules from Python Standard path Syntax: import module_name Example: import math Importing Modules from other Sources To fetch and use modules from other and new sources, we need to install Python PIP. Python pip is a software that installs python modules from index or using a manager like Anaconda. Run the following command to install modules from new sources using python pip: python -m pip3 install module_name Run the following command to install modules from new sources using Ananconda: conda install module_name Example: Steps to install numpy python -m pip3 install numpy conda install numpy sudo apt install python3-numpy Example: Built-in Modules import math print (math.sqrt(121)) print (math.pi) print (dir(math)) Output: 11.0 3.141592653589793 [‘doc’, ‘loader’, ‘name’, ‘package’, ‘spec’, ‘acos’, ‘acosh’, ‘asin’, ‘asinh’, ‘atan’, ‘atan2’, ‘atanh’, ‘ceil’, ‘copysign ‘, ‘cos’, ‘cosh’, ‘degrees’, ‘e’, ‘erf’, ‘erfc’, ‘exp’, ‘expm1’, ‘fabs’, ‘factorial’, ‘floor’, ‘fmod’, ‘frexp’, ‘fsum’, ‘gamma’, ‘hypot’, ‘isf inite’, ‘isinf’, ‘isnan’, ‘ldexp’, ‘lgamma’, ‘log’, ‘log10’, ‘log1p’, ‘log2’, ‘modf’, ‘pi’, ‘pow’, ‘radians’, ‘sin’, ‘sinh’, ‘sqrt’, ‘tan’, ‘t anh’, ‘trunc’] In the above example, dir() method gives the function name, variables, etc in the math module. Variable in a Module Apart from methods and classes, A module can also contain variables. Example: Fruit = { "name": "Apple", "color": "Green" } Save the above snippet of code in the file Module1.py import Module1 x = Module1.Fruit["name"] print(x) Output: Apple In the above piece of code, Module1 is imported and functionality is performed on it. Difference between a module and a package in Python Python Module: These are set of pre-defined files that contain the python codes which depict the basic functionalities of class, methods, variables, etc. Python Package: It is a directory that holds and contains modules and sub-packages.
https://www.askpython.com/python-modules/python-modules
CC-MAIN-2020-10
refinedweb
391
56.08
Thank you Mike for those explanations.Thank you Mike for those explanations.Why Saxon doesn't allow to use variables in template declarations ? Because the spec doesn't allow them. Why doesn't the spec allow them? Because they can cause circular definitions. For example <xsl:variable <xsl:apply-templates </xsl:variable> <xsl:template ... </xsl:template> <xsl:template ... </xsl:template> However, the spec doesn't succeed in preventing circularity, because you can still write: <xsl:variable <xsl:apply-templates </xsl:variable> <xsl:template <xsl:if ... </xsl:template> So in my view the restriction was probably a mistake. But Saxon will conform to the spec, whether I like th e spec or not. Mike Kay <xsl:template ... And now, what about matching a parameter passed to the stylesheet ? : <xsl:template ... I mean, I don't know how to implement it in an other way because the namespace used is known by another stylesheet. (I get a lot of problems here because of this §5.3 spec chapter) I attach 2 stylesheets of the Cocoon 2 project (the author : Sylvain is a colleague of mine). In this case, Xalan (actual release) works perfectly well (but don't respect the spec !) and Saxon (6.4.4) does something very strange : instead of warning about the use of a variable in the template match attribute, it doesn't match the template at all and run without warning. Now, I ask to myself two questions : - Why Sylvain doesn't obtain the Saxon warning I obtained with my stylesheet (see the first ex ample I wrote yesterday) ? - How to implement it with this spec restriction to make it work also with Saxon ? Thank you .Thibs. Anyware Technologies
https://sourceforge.net/p/saxon/mailman/attachment/3BA9B6E6.8020201@anyware-tech.com/1/
CC-MAIN-2017-39
refinedweb
281
67.15
Uche Ogbuji <uche.ogbuji at fourthought.com> wrote: > I disagree, and I use CDATA sections a lot. Try writing an article > about XML *in* XML (e.g. XHTML). You might also become a fan :-) I think that's the toolchain's job. In an ideal world there'd be an XML editor that wasn't awful (!) but it's easy enough with a decent text editor to write some XML, select it and encode/decode the offending characters. S'what I do, anyway. :-) > As long as people understand that they're a simple lexical convenience, > I'm not sure what their harm is. You're right: at an XML-parsing level they're not too bad, but still only a rather minor convenience. The problem is that they add complexity without completely solving the problem - if you are writing an XML article about CDATA sections, for example, you can't use a literal ']]>'! > I'm not sure any level of DOM has a sane treatment of CDATA sections I'm with you here, it's the DOM that's the real problem. Aside from normalising text together being defeated by them, the issues with splitting CDATA sections for ']]>' and out-of-encoding characters in DOM3 are an extra annoyance and likely source of bugs for implementations. The legacy nonsense from DTDs is a much worse issue in my book: it turns XML from a simple, easy-to-grok-and-knock-up-a-noddy-parser-for notation into a maze of twisty little bugs, all alike. Manifesto for a cleaner XML more suited to simple tasks (ohmygod Microsoft want to put XML in the DNS argh etc.): - no doctypes DTD validation is underpowered, ineffective for namespaces, and does not deserve to be part of the basic required XML syntax. Validation should be done as a layer on top of XML (Schema, RNG), not as part of the basic required syntax. - no entity references most common use case: named character escapes: character references are almost as convenient and anyway you should be using an encoding that doesn't require you escape them. Further use case: inclusions: use XInclude or similar processing layer on top of XML. Entity references are not worth the *enormous* complexity they add to the DOM (if implemented completely, anyway) - no default attribute values how hard is it for an application to take null (or '') for an answer? - no CDATA sections at least at a DOM level - no attribute normalisation seems to be barely used, and confuses DOM a treat - xmlns: declarations on the root element only, unique URIs being able to reuse prefixes over the document for eg. inclusions is not worth the pain of namespace fixup and broken interaction between DOM1 and DOM2 methods any I missed? Been having a grim day tracking down obscure DOM bugs and interactions, hope everyone is having a fun weekend. I'll stop ranting now then. -- Andrew Clover mailto:and at doxdesk.com
https://mail.python.org/pipermail/xml-sig/2004-May/010277.html
CC-MAIN-2016-40
refinedweb
492
61.36
19 January 2007 16:19 [Source: ICIS news] By John Baker LONDON (ICIS news)--The impetus for businesses to "go green" is growing as the need to fight global warming becomes more widely accepted. But the impact on the chemicals industry, and more significantly, the polymers sector, are just beginning to become evident. This week the UK’s leading food and clothing retailer Marks & Spencer (M&S) announced with great panache and publicity that it had formulated a £200m ($390m, €300m), 100-point “eco-plan” that will see its operations become carbon neutral within five years. It will cut energy use by 25% and use more green renewable energy, resorting to carbon offsetting only as a last option. It will also use 50% biodiesel in all its lorries. Other targets by 2012 include: sending no waste to landfill, cutting packaging by 25% and the number of carrier bags it uses by 33%. Those bags which it use will be made from recycled material. M&S will phase out another 19 pesticides under the eco plan. It has already banned the use of 60 by its suppliers. It plans to eliminate all post-harvest pesticide use within five years and will launch a pesticide residue reduction network with its suppliers. M&S also revealed that it will make much of its polyester clothing from recycled plastic bottles and focus on just four polymers for packaging: corn-derived polylactic acid (PLA), polyethylene (PE), polypropylene (PP) and polyethylene terephthalate (PET). The impact is not hard to see. A 33% reduction by M&S in the number of carrier bags equates to no less than 200m units, and the use of recycled material points to another 400m loss for virgin polymer makers. M&S estimates the UK uses 8bn carrier bags a year, resulting in 100,000 tonnes of waste, and plans to launch a “No to bags” campaign. Its goal for a 25% reduction in packaging equates to 25,000 tonnes/year less of polymers, paper and other materials. M&S is already trialing staff uniforms and men’s fleeces made from polyester recycled from PET soft drinks bottles and reckons that just switching over to recyclate for its range of men's fleeces will use 22m 2-litre bottles, saving 6,000 bbl/year of crude oil from going into virgin polyester. It plans to make this switch by the end of the year, including women’s and children’s fleeces also. Then it will extend the recyclate use to other ranges such as trousers, suits and furniture fillings. “We believe a responsible business can be a profitable business,” said chief executive Stuart Rose who has revived the fortunes of the once-struggling retailer. “M&S will change beyond recognition... we will become carbon neutral and ensure none of our clothing or packaging needs to be thrown away,” he explained. It’s easy to imaging where all this is leading. Multiply the reduced packaging and polymer use at M&S to other ?xml:namespace> Is it possible that the M&S plan marks the beginning of a significant turning point in the use of polymers in the retailing sector? Where will that
http://www.icis.com/Articles/2007/01/19/1122564/insight-uk-retailers-eco-move-will-hit-polymers.html
CC-MAIN-2014-41
refinedweb
530
59.64
On Mon, 2004-06-07 at 00:45, Florin Andrei wrote: > When it was more difficult, it worked: months ago, i compiled and > installed Cyrus-IMAPd on FC1 and had no issues with it. > Now, when it's simple, it does not work. On FC2, i can't convince Cyrus > to work. I can create the accounts, but Evo does not read the email > that's delivered. Ok, i got it nailed down. These are the steps required to make it work: 0. Fix saslauthd Edit /etc/sysconfig/saslauthd and change MECH to "pam": MECH=pam Then (re)start saslauthd 1. Install the software [root weiqi florin]# yum install cyrus-imapd cyrus-imapd-utils (optionally cyrus-imapd-devel) 2. Edit config files In /etc/cyrus.conf i only commented out pop3 and pop3s, since i'm not going to use POP3 with Cyrus. In /etc/imapd.conf i added these lines at the end: unixhierarchysep: 1 altnamespace: 1 sieve_maxscriptsize: 320 The first line allows for Unix-style separators (/) instead of news-style (.). Also the folders are created a bit differently inside the Cyrus spool. Without the second line, all IMAP folders must be created inside Inbox by your mail client. That's weird, so i added the second line which allows to create new folders at the same level as Inbox. On my other Cyrus server, I had to increase the variable on the 3rd line (default is 32) to 320 because i have way too many folders and a lot of Sieve filter rules, so i was hitting the limits. Now verify there is no other IMAP server running, then start cyrus-imapd. 3. Change password to the "cyrus" account [root weiqi florin]# passwd cyrus Changing password for user cyrus. New UNIX password: Retype new UNIX password: passwd: all authentication tokens updated successfully. 4. Login as "cyrus" [root weiqi florin]# su - cyrus -bash-2.05b$$ whoami cyrus -bash-2.05b$ 5. As the "cyrus" user, create accounts with the cyradm tool Run "cyradm localhost" and provide the password of the account "cyrus". >From now on, you'll do a lot of things at the cyradm prompt. -bash-2.05b$ cyradm localhost IMAP Password: weiqi.home.local> weiqi.home.local> cm user/florin IMPORTANT: This is why it failed before! I used to do "cm user.florin" which is the default Cyrus way, and it failed. Once i did "cm user/florin" instead, it worked. Repeat by replacing "florin" with other account names. All authentication will be done against the Unix user database (IMAP password same as Unix password). That's not required by Cyrus, which can create its own user database; in fact, one could run a Cyrus server with no Unix accounts, just accounts in the Cyrus db; but for that, the auth must be changed from the default. Just for testing purposes, Unix auth is fine. Verify the account creation: weiqi.home.local> lm user/florin (\HasNoChildren) Set permissive ACLs for that account (see "man cyradm" for details): weiqi.home.local> setacl user/florin florin lrswipcd See ACLs that you just set: weiqi.home.local> lam user/florin florin lrswipcd On a production server you might need to restrict those ACLs. The ACL i indicated is almost (but not quite) equal to administrator privileges on that account. 6. Configure Postfix to deliver to Cyrus instead of delivering to /var/spool/mail Edit /etc/postfix/main.cf, look for the section containing mailbox_transport and add this line: mailbox_transport = lmtp:unix:/var/lib/imap/socket/lmtp If this is your "production" server, then just for the duration of the tests comment out "inet_interfaces = all" and uncomment "inet_interfaces = localhost" so that mail coming in from outside is not delivered to your unborn-yet maybe-still-buggy Cyrus server. Restart Postfix. 7. Test [root weiqi root]# echo test | mail -s test florin localhost Watch the logs to see if it gets delivered properly. This is where it used to fail for me before. It kept on saying there's no such account ("550-Mailbox unknown"). Once i created the account with "/" instead of ".", the delivery succeeded. Go to /var/spool/imap and poke around and see if you can find the mail files. Cyrus stores each message in its own file, try and find them. Now hook-up an IMAP client to your server and see if you can access the mail. Try it out, create directories, move messages around, etc. 8. Go live Once all is ok, replace "inet_interfaces = localhost" with "inet_interfaces = all" then restart Postfix. 9. Future development One of the strengths of Cyrus is server-side filtering: you can tell it to filter email in folders regardless of the email client: sorting is performed by the server, not by the client. This is accomplished via Sieve. Install Horde/Ingo or another Sieve manager and create your own rules. Another trick: Create shared folders among users, either for collaboration or for other purposes (big unique spam trashcans that get polled by scripts feeding spam into SpamAssassin/sa-learn). That's it. -- Florin Andrei
http://www.redhat.com/archives/fedora-list/2004-June/msg02049.html
crawl-002
refinedweb
844
73.37
LackeyLackey A Graphical Python Automation SuiteA Graphical Python Automation Suite Developed by Jon Winsley Third Party Library RequirementsThird Party Library Requirements - numpy - pillow - opencv - keyboard IntroductionIntroduction Lackey is a Python implementation of Sikuli script, allowing you to run automation scripts developed in the Sikuli editor with pure Python. If you're trying to run Sikuli scripts in a Java-free environment or integrate them into an existing Python testing structure, this is the library for you. UsageUsage InstallationInstallation Installation is easy: pip install Lackey Then you can just import Lackey at the head of your Sikuli-script python file: from lackey import * WARNING Be aware that this will create global methods that will overwrite certain Python functions, such as type(). For more information, see the Sikuli Patching section below. GeneralGeneral The Lackey library is divided up into classes for finding and interacting with particular regions of the screen. Patterns are provided as bitmap files (supported formats include .bmp, .pbm, .ras, .jpg, .tiff, and .png). These patterns are compared to a Region of the screen, and, if they exist, can target a mouse move/click action. If you've used Sikuli, you'll feel right at home. Lackey is designed to be a drop-in shim for Sikuli. Sample code (note that you'll need to provide your own PNGs): from lackey import * click("Start_Button.png") wait("Control_Panel.png", 5) # Maybe the Start menu is slow click("Notepad.png") Working with Elevated PrivilegesWorking with Elevated Privileges In most cases, you won't need to run Lackey with elevated privileges. However, Windows will not let a non-elevated script send mouse/keyboard events to a program with elevated privileges (an installer running as administrator, for example). If you run into this problem, running Lackey as administrator (for example, by calling it from an Administrator-level Powershell instance) should solve your issue. DocumentationDocumentation Full API documentation can be found at ReadTheDocs. RationaleRationale In my line of work, I have a lot of tasks walking through line-of-business applications to do boring things that any computer could do. Laziness being the mother of invention, I decided to script what I could. I found SikuliX to be a tremendously valuable tool for the job, but its Java dependencies and limited Python coupling posed problems in several cases. So, I decided to create a pure Python implementation of Sikuli script. There are some existing libraries for this purpose, like pywinauto and autopy, but they didn't work for me for one reason or another. I wasn't doing a lot of Windows GUI interaction with these particular applications, so pywinauto's approach wouldn't help. I needed something that could search for and use images on screen. autopy was closer, but it had quite a few outstanding issues and hadn't been updated in a while. Most of my automation is in Windows, so I've begun this library with only Windows support. As of version 0.7.0, it also includes Mac OS X support,and it's designed to eventually be extended with support for Linux by implementing an additional "PlatformManager" class. I'll get around to this at some point, but if you'd like to contribute one sooner, please feel free! Sikuli PatchingSikuli Patching My goal with this project is to be able to reuse my existing library of Sikuli scripts with minimal modifications. To that end, Lackey will map certain functions of the screen region ( find(), click(), etc.) to the global scope. This means you can use the Sikuli IDE for development, and run the final product with pure Python! Add the following line to your Sikuli python script, and you should be able to run it in Python largely without issue: from lackey import * Note that I have had to adjust some of my image search similarity settings in a couple cases. Your mileage may vary. Please report any issues that you encounter and I'll try to get them patched. Be aware that some Sikuli-script methods actually overwrite Python-native functions, namely type() and input(). Where this is the case, I've remapped the native functions by adding a trailing underscore. They can be accessed as follows: from lackey import * username = input_("Enter your username: ") # built-in Python input StructureStructure Each platform (Windows/OSX/Linux) needs its own PlatformManager (see documentation above) to abstract OS-level functionality, like simulating mouse clicks or key presses. Ideally, these should be implemented with as few 3rd-party library dependencies as possible. If you'd like to contribute a PlatformManager for your OS, feel free to submit a pull request! Don't forget to update the unit tests and verify that they still run. Fair WarningFair Warning This library is currently under development, and may have some bugs. Check the Issues list to find features/bugs you can help with! Build InstructionsBuild Instructions To build the wheel from source, cd to the project directory and run: python setup.py bdist_wheel To link directly to the repository (if you want to work with the develop ring, for example), cd to the project directory and run: pip install -e ./ (Note that you may need to install the wheel package.) Special thanksSpecial thanks Debugging contributions:
https://libraries.io/pypi/Lackey
CC-MAIN-2021-10
refinedweb
871
53.31
Publish Schemas as a Web Service in BizTalk You use the BizTalk Web Services Publishing Wizard to publish schemas as a Web service. Publish schemas as a web service In Programs, select BizTalk Server, and then select BizTalk Web Services Publishing Wizard. Important You must build BizTalk projects prior to running the BizTalk Web Services Publishing Wizard. On the Welcome page, click Next. On the Create Web Service page, select Publish schemas as web services and then click Next. On the Web Service page, define the Web service(s) to publish. You use the tree in the Web service description dialog box to add, remove, rename, and edit the Web service description nodes. The Information dialog box provides information about the selected node and displays any errors in the current node or any sub nodes: The root node to the tree (Web service description) describes the Web service project name. The virtual directory name uses the root node as the default name. You can modify the Web service description by selecting Rename web service description. To add a new Web service, right-click the Web service description node, and then click Add web service. This creates a new Web service without any Web methods. To modify the name of the Web service, right-click the Web service node, and select Rename web service,and then press Enter to accept the new name. To add a new Web method, right-click the Web service node, point to Add Web Method, and then click One-way (for a request Web method) or Request-response (for a request-response Web method) from the shortcut menu. To set the request and response schema types, right-click the Request or Response node, and then click Select schema type. In the Request Message Type dialog box, type the name of the assembly containing the document schema in the BizTalk assembly file text box or click Browse to search for the assembly. The Available schema types list view displays each root element of the schema. Select a root node to add as the request or response schema type. Note If you installed the BizTalk assembly file into the Global Assembly Cache (GAC), make sure that the assembly in the GAC has been updated with the assembly that you will select in the Request Message Type dialog box. If the GAC has the same fully qualified name, the BizTalk Web Services Publishing Wizard uses the assembly file in the GAC instead of the one you selected. You can rename the Request and Response nodes without affecting the generated code. After defining your schemas, you can rename the part elements, which modifies the Web method parameter name. You can see the changes by viewing the generated Web service code. Note You cannot use spaces when renaming any of the Web service description nodes. Click Next to continue the wizard. On the Web Service Properties page, in the Target namespace of web service dialog box, type a target namespace for the Web service, and select the appropriate boxes to specify how the wizard should handle SOAP headers and Single Sign-On support for the Web service. If you want to further customize the Web service implementation, click Advanced button. It will show more available options: Note The selection any of the SOAP header options are applied globally to all Web services and Web methods created when running this instance of the wizard. On the Web Service Properties page, click Next. If you selected Add additional SOAP headers, the Request SOAP Headers and Response SOAP Headers pages appear. You can add and remove request and response SOAP headers using the Add and Remove buttons in the following dialog boxes: To add a SOAP header, click Add. In the BizTalk assembly name (*.dll) text box, type the assembly name or browse for the assembly containing the SOAP Header schema in the BizTalk assembly file text box. The Available schema types list view displays each root element of the schema. Select a root node to add as a request or response SOAP header. To select multiple items, hold the CTRL key and click OK. To remove a SOAP header from the list, select it from the list of added SOAP headers, and then click Remove. Click Next on each SOAP header page to continue the wizard. Note The target namespace and root element name define the SOAP header. Note If the same combination of target namespace/root element name is added as a request and response SOAP header, it will not be treated as an in/out header. You must manually copy the incoming header to the outgoing header inside of an orchestration. Note The same combination of target namespace/root element name can only be added once as a request SOAP header and once as a response SOAP header. On the Web Service Project page, in the Project location text box, type the project location. You can accept the default location (), type a location for the project, or click Browse and select a Web directory. Select any of the following options: Overwrite existing project. This option is only available if the project location already exists. You will only be able to publish to the same location if you select this option. Otherwise, you must enter a different project location. Allow anonymous access to web service. This option adds anonymous access to the created virtual directory. By default, the virtual directory inherits the access privileges from its parent virtual directory or the Web site (if it is a top-level virtual directory). Create BizTalk receive locations. This option automatically creates the SOAP adapter receive ports and locations that correspond to each generated .asmx file. If another receive location already exists, the receive location is not be replaced. Receive locations for the SOAP adapter are resolved using the format "/<virtual directory name>/<orchestration namespace_typename_portname>.asmx". After selecting this option, choose the application where the receive ports and locations will be generated. Note The project location can exist on a different server. To publish a Web service to a different server, type the project name as http://<servername>/<project_name>. Note The project location can exist on a non-default Web site. When publishing to a non-default Web site, include the port number of the Web site in the URL:<project_name>. Note When you use the wizard to create receive locations, the wizard creates the receive locations using many default values. The default values for receive and send pipelines are Microsoft.BizTalk.DefaultPipelines.PassThruReceive and Microsoft.BizTalk.DefaultPipelines.PassThruTransmit. If messages received through the published Web service require any special pipeline processing (for example, validation, correlation, or inbound/outbound maps), then you should set the send and receive pipelines to Microsoft.BizTalk.DefaultPipelines.XMLReceive, Microsoft.BizTalk.DefaultPipelines.XMLSend, or to a custom pipeline. Click Next to review your settings for the ASP.NET Web service project. Click Create to create the ASP.NET Web service. Click Finish to complete the BizTalk Web Services Publishing Wizard. See Also Publishing Schemas as a Web Service
https://docs.microsoft.com/en-us/biztalk/core/publish-schemas-as-web-services-with-biztalk-web-services-publishing-wizard?redirectedfrom=MSDN
CC-MAIN-2019-09
refinedweb
1,178
63.59
Re: [thejournal-users] Now Available - The Journal 6 Build #609 Expand Messages - David: I have bought several copies of The Journal, but not in the past year. Do you have a coupon code for the discounted upgrade? Thanks, Patricia Harris. From: David Michael Sent: Monday, August 06, 2012 9:53 AM To: thejournal-users@yahoogroups.com Subject: [thejournal-users] Now Available - The Journal 6 Build #609 The Journal 6 Build #609 (6.0.0.609) is now available: The Journal 6 Build #609 is a bug-fix release. Upgrading is recommended. Bugs Fixed * The Journal is not remembering its maximized window state. * If The Journal's system database is closed when resuming from a sleep/suspend, close The Journal. * Dragging and dropping an image file into a table cell is sometimes causing an error. * When printing, images in a table cell aren't using the table cell dimensions to limit their resize, they're using the whole page width. * When editing styles, applying the overrides to the sample text is causing confusion. * Possible issues pasting RTF from clipboard. * Double-clicking to toggle a large image resize-to-fit isn't marking the entry as modified. * When printing, images that have been resized by the user aren't using their adjusted size. * Importing html/xml not handling ' correctly. * Links in pasted text are not properly merging active font and link style. * When pasting HTML from Paste Special..., not using category/user default font. * Re-worded User Preferences, Editor option to "When pasting, Rich Text Format has priority over HTML Format" * User Preferences, Editor option "Insert URL when pasting from Web pages" feature isn't working. * Getting gibberish when trying to paste into the timer. * In "Export Entries as Document", when exporting each entry to its own document, if there is more than one entry for a date/time, only the last entry for that date/time is being exported. * Reminders aren't being searched by Search Entries. * Image entry and external object entry titles aren't being indexed for Search Entries. * Missing hot-keys for Insert Media File (M), Insert CheckBox (X), and Insert Symbol (Y). * Attempting to uninstall an add-on with Tools | Add-ons... causes an error. * Importing TIFF, TGA and a few other image formats is not working. * When importing from Evernote, spaces are getting lost between formatted text. * Additional fixes to importing HTML and XML. * "Lock All Password-Protected Categories" not properly locking sub-categories. * When upgrading a TJ5 Journal Volume, sometimes incorrectly detecting that TJ5 is running. New Features * When importing from Evernote, you can now choose to import into any category (calendar as well as loose-leaf). * When importing from Evernote or Penzu, the category you choose to import into is remembered and used as the default on the next import. * Import to-do items from Evernote as checkboxes. To see your current version of The Journal, click on the Help menu and choose "About The Journal". The version number is displayed just below the copyright notice. Upgrading from The Journal 6 (any previous build) Click on the "File" menu, "Maintenance" sub-menu and choose "Backup The Journal..." to create a backup *before* you install the update. Then download the current version of The Journal 6 available from the web page (URL above) and install it over your current version. Or you can click on the Help menu and choose "Check for Update of The Journal". NOTE: Please close The Journal before installing the update. The Journal cannot be running when you install the update. Upgrading from The Journal 5 or earlier NOTE: Do NOT uninstall the previous version! (At least, not until you're satisfied with the upgrade. Then you can uninstall the previous version.) Download and install The Journal 6. If you have The Journal installed in the default location, you will be prompted by The Journal 6 about upgrading your TJ5 Journal Volumes. You will need to close The Journal 5 when you upgrade your Journal Volumes. After that, though, you can run both The Journal 6 and The Journal 5 without any issues. People who have purchased any previous version of The Journal are eligible for special upgrade pricing: * If you purchased The Journal 5 anytime in 2012, you are eligible for a FREE upgrade to The Journal 6. Send an email to mailto:support%40davidrm.com to request your free Registration Keys for The Journal 6. * If you have purchased any previous version of The Journal (v1-v5), you can upgrade to The Journal 6 for only $24.95 ($25 off). As always, if you have any questions about or suggestions for The Journal, please don't hesitate to contact me: mailto:support%40davidrm.com -David David Michael mailto:support%40davidrm.com [Non-text portions of this message have been removed] Your message has been successfully submitted and would be delivered to recipients shortly.
https://groups.yahoo.com/neo/groups/thejournal-users/conversations/topics/10945
CC-MAIN-2016-07
refinedweb
814
58.08
28 June 2011 16:07 [Source: ICIS news] LONDON (ICIS)--BASF has restarted butanediol (BDO) production at its site in ?xml:namespace> BASF was able to restart parts of its feedstock acetylene plant and has since been able to restart production of BDO, albeit at minimum levels, a company source said. As such, it has been able to increase its allocation quota. “We have increased our allocation to 45% from the initial level of 33%,” the source said. Following a fire at a precursor plant at the facility on 30 May, the 190,000 tonne/year BDO producer declared force majeure, which it said would potentially last until the end of August. “We can run parts of the acetylene plant, which allows BDO production to operate at low rates,” the source said. “We will keep the force majeure because there is no change in the general situation, and it might have to extend into September because of damage to the acetylene plant.” Buyers are able to obtain the majority of their contracted material. However, there are now growing concerns regarding supply in July and August. Suppliers are not able to satisfy demand and sources say that increases of up to €200/tonne ($286/tonne) are likely in the third quarter of the year. Offtake from the downstream polybutylene terephthalate (PBT) sector is particularly strong and buoyed by demand from the automotive industry, where it is used to manufacture most electronic components. “Demand for PBT is really high above pre-crisis level,” a PBT producer said. A buyer from the PBT market said: “Our offtake is 15% up from last year.” Availability in the derivative market is constrained as BASF’s force majeure on PBT is still in place, owing to a lack of feedstock BDO. “The market is tight and our customers are on allocation, but we are trying to resolve the situation as soon as possible, a source from the company said. “We hope to be running as normal by early autumn.” BDO is used industrially as a solvent and in the manufacture of some types of plastics, elastic fibres and polyurethane (PU). PBT is a thermoplastic engineering polymer that is used as an insulator in the electrical and electronics industries. ($1 = €0.70) For more on BDO,
http://www.icis.com/Articles/2011/06/28/9473338/basf-runs-ludwigshafen-bdo-at-minimum-levels-but-fm-still-in-place.html
CC-MAIN-2014-49
refinedweb
379
60.24
This Week in Neo4j: Building the Procedure Compiler and Using Elastic This Week in Neo4j: Building the Procedure Compiler and Using Elastic It's been a busy week for Neo4j users, including an insightful look at building the Procedure Compiler and how to use Elastic and Neo4j to make sense of Airbnb data. Join the DZone community and get the full member experience.Join For Free Want to free your enterprise from the costs of provisioning servers? Learn about serverless architecture with Couchbase and cloud computing. Welcome to the second edition of This Week in Neo4j! If you’ve got any ideas for things we should cover in future editions, I’m @markhneedham on Twitter, or send an email to . Contributing to Neo4j: Florent Biville Florent Biville – Author of the Neo4j Procedure Compiler Long-time community member Florent Bivilledescribed his experience building the Neo4j Procedure Compiler which shipped with Neo4j 3.1.0. Florent picked up user-defined procedures just after their release in May 2016 and realised that you couldn’t find common errors until you deployed the procedure which made for a slow feedback cycle. He wanted to address this and collaborated with Tobias Lindaaker from the Neo4j engineering team to build the Procedure Compiler. I asked Florent if he had any tips for other people who are interested in contributing either to the main Neo4j code base or one of the surrounding projects such as APOC. These were Florent’s top tips: Neo4j✔@neo4jNeo4j✔@neo4j .@fbiville shows you how to check your user-defined procedures & functions in #Neo4j w/ his new Procedure Compiler: pic.twitter.com/iDEsyQZ0lQ Elizabeth M. Kallman@elizmkallmanElizabeth M. Kallman@elizmkallman @neo4j@fbiville Love it, thanks! 4,000 Slack Users! Achievement unlocked: 4,000 Slack users Speaking of Slack – this week, we had our 4,000th member of the community registered on the Neo4j-Users Slack, getting questions answered and helping others with their Neo4j journey. Since August 2015, there have been more than 250,000 messages posted, of which just under 100,000 were on public channels. If you’re stuck with a Cypher query or need help importing your data be sure to drop by and ask for help. Join the Neo4j-Users Slack here Making Sense of Airbnb’s Data using Neo4j and Elastic Late last week Christophe Willemsen showed me a really cool talk where John Rodley and Chris Williams from Airbnb explain how they built a data portal using Neo4j to help make their internal data more searchable, discoverable, and consumable. In the talk they explain how they’ve used the power of Neo4j and Elastic to make it easier to find the information you’re looking for. The GraphAware neo4j-to-elasticsearch and graph-aided-search libraries were used to glue the two technologies together. The talk is part of the Airbnb Tech Talk series and the slides for the talk are also available. There’s some really well-designed slides in the talk so it’s well worth taking a look. Kafka, Neo4j PHP OGM, Twitter API, and More - It wasn’t strictly published this week, but Michael Moore’s excellent blog post in which he shows how to stream data into Neo4j from Kafka made a reappearance. There’s lots of Python awesomeness to be found in this post and all the code is available as a GitHub gist. - Christophe Willemsen released 1.0.0-RC1 of neo4j-php-ogm – a Neo4j Object Graph Mapper for PHP. - Seguy Damien released v0.10.3 of exacat, a tool for doing code analysis of PHP projects. Six new analyzers were introduced in this release including one that finds properties only used once and one that finds uses of the same alias to refer to different namespaces. - Michael Hunger and I showed how to create a Twitter graph of the ‘My name is…I work’ meme, that criticized the practice of hard coding tasks during interviews. The post mostly focuses on loading JSON into Neo4j from the Twitter API using APOC, with not a programming language in sight. - Michael also wrote a blog post containing his top tips for updating graphs efficiently. My favourite Cypher clause – UNWIND – features extensively. - Last week, we featured Brian Roy’s explorations with the AWS Rekognition API. This week he’s gone even further and deployed his application on a Raspberry Pi to check how well his person-detection model works. Neo4j is then used to analyse the results. - Fred Trotter of Hacking Healthcare and DocGraph fame published in which he laments the loss of discipline around data that NoSQL encourages. He then concludes that relational and graph databases force you to think about your data structures upfront rather than just “throwing the data on the pile”. Meetups on R, Ruby on Rails, and more There were lots of meetups and slide decks floating around the Twittersphere this week. Samathy Barratt presenting at the Women in Tech, Nottingham group - Starting with the most recent, on Thursday Samathy Barratt presented an Intro to Neo4j at the Women in Tech, Nottingham group. - Geoffrey Hannigan, Postdoctoral Research Fellow at the University of Michigan, shared his presentation Network Analysis with R and Neo4j from the Ann Arbor R Group. He starts with a lighthearted look at using igraph and RNeo4j to calculate centrality in a movies dataset before showing how the same tools can be used to detect cancer. If you want to learn more about using Neo4j with R Nicole White has lots of material on her blog - Regina Imhoff presented a talk in which she showed how to use Neo4j with Ruby on Rails at the Austin on Rails user group. - In Manchester, the metafused team (Matthew Yeager, Matt Jackson and Phil Stanley) showed how to build a recommendation engine using Neo4j and deployed on the Google Cloud Platform. You can download the slides from their talk. - At our Meetup at the SoundCloud offices in Berlin invited by one of their data scientists, Sean Braithwaite, Michael presented the latest edition of the TrumpWorld Graph. He demonstrated how to extend the original dataset with a variety of additional sources. (Slides) - Our Online Meetup this week was presented by Jesús Barrasa, with whom I discussed the difference between RDF and Property Graphs. - Our upcoming Online Meetup will feature Johannes Unterstein from Mesosphere demonstrating how he built the Neo4j package for Marathon Universe and how to install, scale and run your Neo4j 3.1 Cluster on DC/OS. Please note that the time displayed on Meetup.com is in Pacific Time, we list the correct local times in the event description. A Closing Tweet from Malta We’ll close with a tweet from Niall in Malta who’s having fun using Neo4j to analyze social networks: Niall@nogribin getting addicted to social data analysis with #neo4j. ...any excuse to play with data / code, and learn new stuff. Graph databases rock.· Malta Have a good weekend! Learn how the world’s first Engagement Database can lower your costs and simplify deployments with Couchbase in the cloud. Published at DZone with permission of Mark Needham , DZone MVB. See the original article here. Opinions expressed by DZone contributors are their own. {{ parent.title || parent.header.title}} {{ parent.tldr }} {{ parent.linkDescription }}{{ parent.urlSource.name }}
https://dzone.com/articles/this-week-in-neo4j-building-the-procedure-compiler
CC-MAIN-2018-13
refinedweb
1,211
60.85
Utility class to load blob data from a JSON stream. More... #include <GA_ATIBlob.h> Utility class to load blob data from a JSON stream. Since blobs are opaque to the blob container, it's impossible to create blobs when loading. If blobs need to be saved/loaded, a blob loader class will have to be used to create blob data from the stream. This class should load the data stored by the BlobData::jsonSave() method. Definition at line 56 of file GA_ATIBlob.h. Definition at line 59 of file GA_ATIBlob.h. Load the data from the JSON parser and assign the blob to the handle passed in. If there's an error loading the blob, then, return false.
https://www.sidefx.com/docs/hdk/class_g_a___blob_data_loader.html
CC-MAIN-2021-17
refinedweb
118
75.91
.NET framework <language,) ] Notes:pair (typically .NET developer signing the assembly) can sign assemblies that have the same strong name as a prior version assembly, since the creator possesses the private key. Strong naming is required to add assemblies to Global Assembly Cache. ). System.* Microsoft.* BCL includes a small subset of the entire class library and is the core set of classes that serve as the basic API of CLR.[38]. mscorlib.dll System.dll System.Core.dll).[39].[40] System.Runtime.dll NuGet is the package manager for all .NET platforms. It is used to retrieve third-party libraries into a .NET project with a global library feed at NuGet.org.[41] Private feeds can be maintained separately, e.g., by a build server or a file system directory..[42] Such assemblies are also difficult to reverse engineer, since .NET decompilers such as .NET Reflector reveal only the managed code.. System.Runtime.InteropServices System.EnterpriseServices . While Microsoft has never implemented the full framework on any system except Microsoft Windows, it has engineered the framework to be cross-platform,[43] and implementations are available for other operating systems (see Silverlight and § Alternative implementations). Microsoft submitted the specifications for CLI (which includes the core class libraries, CTS, and CIL),[44][45][46] C#,[47] and C++/CLI[48] to both Ecma International (ECMA) and International Organization for Standardization (ISO), making them available as official standards. This makes it possible for third parties to create compatible implementations of the framework and its languages on other platforms. .[49][50] ..[52].[52] This is the mark phase.[53] Since the memory held by garbage is of no consequence, it is considered free space. However, this leaves chunks of free space between objects which were initially contiguous. The objects are then compacted together to make free space on the managed heap contiguous again.[52][53] Any reference to an object invalidated by moving the object is updated by GC to reflect the new location.[53] The application is resumed after garbage collection ends. The latest version of .NET framework uses concurrent garbage collection along with user code, making pauses unnoticeable, because it is done in the background.[54] GC used by .NET Framework is also generational.[55] Objects are assigned a generation. Newly created objects are tagged Generation 0. Objects that survive a garbage collection are tagged Generation 1. Generation 1 objects that survive another collection are Generation 2. The framework uses up to Generation 2 objects.[55] Higher generation objects are garbage collected less often than lower generation objects. This raises the efficiency of garbage collection, as older objects tend to have longer lifetimes than newer objects.[55] Thus, by eliminating older (and thus more likely to survive a collection) objects from the scope of a collection run, fewer objects need checking and compacting.[55] When an application is first launched, the .NET Framework compiles the CIL code into executable code using its just-in-time compiler, and caches the executable program into the .NET Native Image Cache.[56][57] Due to caching, the application launches faster for subsequent launches, although the first launch is usually slower. To speed up the first launch, developers may use the Native Image Generator utility to manually ahead-of-time compile and cache any .NET application.."[58] .NET Framework provides support for calling Streaming SIMD Extensions (SSE) via managed code from April 2014 in Visual Studio 2013 Update 2. However, Mono has provided support for SIMD Extensions as of version 2.2 within the Mono.Simd namespace in 2009.[59] Mono's lead developer Miguel de Icaza has expressed hope that this SIMD support will be adopted by CLR's ECMA standard.[60] Streaming SIMD Extensions have been available in x86 CPUs since the introduction of the Pentium III. Some other architectures such as ARM and MIPS also have SIMD extensions. In case the CPU lacks support for those extensions, the instructions are simulated in software.[citation needed] ."[61] a cross-platform free and open-source managed software framework similar to .NET Framework. It consists of CoreCLR, a complete cross-platform runtime implementation of CLR, the virtual machine that manages the execution of .NET programs. CoreCLR comes with an improved just-in-time compiler, called RyuJIT.[64][c] .NET Core also includes CoreFX, which is a partial fork of FCL.[66] While .NET Core shares a subset of .NET Framework APIs, it comes with its own API that is not part of .NET Framework.[67] Further, .NET Core contains CoreRT, the .NET Native runtime optimized to be integrated into AOT compiled native binaries. A variant of the .NET Core library is used for UWP.[68] .NET Core's command-line interface offers an execution entry point for operating systems and provides developer services like compilation and package management.[69] .NET Core supports four cross-platform scenarios: ASP.NET Core web apps, command-line apps, libraries, and Universal Windows Platform apps. It does not implement Windows Forms or WPF which render the standard GUI for desktop software on Windows.[67][70] .NET Core is also modular, meaning that instead of assemblies, developers work with NuGet packages. Unlike .NET Framework, which is serviced using Windows Update, .NET Core relies on its package manager to receive updates.[67][70] .NET Core 1.0 was released on 27 June 2016,[71] along with Visual Studio 2015 Update 3, which enables .NET Core development.[72] .NET Core 1.0.4 and .NET Core 1.1.1 were released along with .NET Core Tools 1.0 and Visual Studio 2017 on 7 March 2017.[73] Microsoft managed code frameworks and their components are licensed as follows: Visual Studio .NET 2002 shipped with the Microsoft .NET Framework SDK version 1.0. Visual Studio .NET 2003 ships with .NET Framework SDK version 1.1.
http://enc.tfode.com/.NET_Framework
CC-MAIN-2017-34
refinedweb
968
51.14
How to install new packages for pyramid without getting a pkg_resources.DistributionNotFound: once a project has been created I have installed pyramid and successfully created a project, but when I try to add new packages to the setup.py requirements they always give me a pkg_resources.DistributionNotFound error. The packages are installed, and this only happens if I try to install new packages after I run ../bin/python3.3 setup.py develop It doesn’t matter what packages it is. The only way I have solved (not really), is setting up a new virtual environment and installing the packages before I create the project and run setup.py develop. Obviously I’m doing something wrong. Is there anything I need to do beside pip install the package? Is this some kind of pathing issue? I’m new to this so your help would be so very appreciated! *Adding my installation process in case anyone happens to see something wrong with it. Also including my wsgi file. Created a virtualenv easy_install-3.3 env Activated the virtualenv source env/bin/activate Installed pyramid cd env ./bin/easy_install-3.3 pyramid Created a project ./bin/pcreate -s starter myprojectname Ran setup.py cd myprojectname ../bin/python3.3 setup.py develop At this point I get the following error: pkg_resources.DistributionNotFound: waitress Installed Waitress ../bin/easy_install-3.3 waitress Ran setup.py again (not sure if I should be doing this) ../bin/python3.3 setup.py develop Still see the error My .wsgi file contains the following (not sure if this is important to this question or not): activate_this = "/home/account/env/bin/activate_this.py" execfile(activate_this,dict(__file__=activate_this)) import os import sys path = '/home/account/env/lib/python3.3/site-packages' if path not in sys.path: sys.path.append(path) from pyramid.paster import get_app application = get_app('/home/account/env/myprojectname/production.ini', 'main') Best answer pip and setup.py develop should not to be mixed. The latter uses easy_install which is not compatible with pip in the case of namespace packages (these are packages that are installed as subpackages of another parent, such as zope.sqlalchemy installing only the .sqlalchemy part of the full zope.* package). Namespace packages _will_ cause problems between pip and easy_install. On the other hand, most other packages will work fine with whatever system you choose but it’s better for you to be consistent. Another thing to double check is that you are actually installing the packages into your virtualenv. You should be able to open the python cli in your virtualenv and import the package. If you can’t, then it’s probably not installed.
https://pythonquestion.com/post/how-to-install-new-packages-for-pyramid-without-getting-a-pkg-resources-distributionnotfound-once-a-project-has-been-created/
CC-MAIN-2020-16
refinedweb
442
52.56
On Fri, Oct 22, 2010 at 11:36 AM, Sigbjorn Finne <sigbjorn.finne at gmail.com> wrote: > > > On Fri, Oct 22, 2010 at 9:35 AM, Sittampalam, Ganesh > <ganesh.sittampalam at credit-suisse.com> wrote: >> >> Bit Connor wrote: >> > On Sat, Oct 16, 2010 at 10:29 AM, Claus Reinke >> > <claus.reinke at talk21.com> wrote: >> >>> After it catches this error, the function returns (line 376): >> >>> >> >>> return (fail (show e)) >> >>> >> >>> The "fail" is running in the Either monad (The Result type = >> >>> Either). This calls the default Monad implementation of fail, which >> >>> is just a call to plain old error. This basically causes the entire >> >>> program to crash. >> >> >> >>> Actually, it appears that simpleHTTP isn't actually supposed to >> >>> throw an IOException, and it is instead supposed to return a >> >>> ConnError result. So the real fix is to fix the code to make this >> >>> happen. But >> >> >> >> Sounds like a victim of >> >> >> >> >> >> For mtl clients, 'fail' for 'Either' used to call 'Left'. That was >> >> changed, though the ticket does not indicate the library versions >> >> affected. >> > >> > This looks like the problem. Any idea how to get the HTTP package >> > fixed? I could try making a patch myself, but I would prefer hearing >> > from the HTTP maintainer first, who doesn't seem to be around. >> >> >> I :) I know it may not be appropriate as-is due to its dependency list, but http-enumerator[1] may be something to consider. It's still young, but I think the API is pretty clean, and I've heard from users that it performs favorably versus HTTP. Michael [1]
http://www.haskell.org/pipermail/haskell-cafe/2010-October/085308.html
CC-MAIN-2014-10
refinedweb
259
74.79
Well, today is the final day of 2007. As a famous blogger I know it is important for me to say a few words. For posterity and all. There, I feel better now. Happy New Year. Well, today is the final day of 2007. As a famous blogger I know it is important for me to say a few words. For posterity and all. There, I feel better now. Happy New Year. So, if you ever tried to figure out what a Java program that appears to be hung is doing you are probably very familiar with the Java thread dump feature. Basically you send a signal to the JVM which responds by, essentially, writing a stack trace of each thread in the JVM to the standard output device. In fact, a thread dump contains more useful information that just a stack trace, it also show the state of each thread (i.e. runnable, waiting, etc) and which Java monitors (synchronized locks) are owned and/or being waited on. Here is a sample snippet of a thread dump: Full thread dump Java HotSpot(TM) Client VM (1.5.0_07-b03 mixed mode): "Timer-5" daemon prio=10 tid=0x092d5720 nid=0x73 in Object.wait() [0x9b52f000..0x9b52fd38] at java.lang.Object.wait(Native Method) - waiting on <0xa2a4b978> (a java.util.TaskQueue) at java.util.TimerThread.mainLoop(Timer.java:509) - locked <0xa2a4b978> (a java.util.TaskQueue) at java.util.TimerThread.run(Timer.java:462) "Timer-4" prio=10 tid=0x0925d418 nid=0x72 in Object.wait() [0x9b4ed000..0x9b4edab8] at java.lang.Object.wait(Native Method) - waiting on <0xa2a49570> (a java.util.TaskQueue) at java.util.TimerThread.mainLoop(Timer.java:509) - locked <0xa2a49570> (a java.util.TaskQueue) at java.util.TimerThread.run(Timer.java:462) As you can see a thread dump contains a lot of very useful information. The method used to “request” a thread dump is to send a signal to the running JVM. In Unix this is the SIGQUIT signal which may be generated via either: kill -3 <pid> or kill -QUIT <pid> where <pid> is the process id the the JVM. You can also enter Ctrl-\ in the window where the JVM is running. On Windows a thread dump is requested by sending a Ctrl-Break to the JVM process. This is pretty simple for foreground windows but requires a program (akin to Unix kill) to be used for JVMs running in the background (i.e. services). The problem with requesting a thread dump is that it requires manual intervention, i.e. someone has to enter the kill command or press the Ctrl-Break keys to generate a thread dump. If you are having problems with your production site in the wee hours of the morning your support staff probably won’t appreciate getting out of bed to capture a few dumps for you. In addition, a single thread dump is not as useful as a series of dumps taken over a period of time. With a single dump you only get a snapshot of what is happening. You might see a thread holding a monitor that is causing others thread to block but you have no idea how long that condition has existed. The lock might have been released a millisecond after the dump was taken. If you have, say, 5 dumps taken over 20 minutes and the same thread is holding the monitor in all of them then you know you’ve got a problem to investigate. The solution I’m going to propose makes use of JNI to request a thread dump of the current JVM and capture that output to a file which may be time stamped. This allows dump output to be segregated from everything else the JVM is sending to STDOUT. Before you invest any more time in this article let me state that the solution I’m going to present here only partially works for windows. It is possible to programmatically request a thread dump under Windows but due to a limitation in Win32, the Microsoft C runtime, or both the capture to a separate file does not work. Even though Win32 provides APIs for changing the file handles used for STDOUT/STDERR, changing them after a process has started executing does not seem to make any difference. If you do all your Java work on Windows, you’ve been warned – don’t read to the end and then send me a nasty email saying I wasted your time! Ok, the first thing we need to do is create a Java class that will serve as an interface to our native routine that captures thread dumps: package com.utils.threaddump; public class ThreadDumpUtil { public static int performThreadDump(final String fileName) { return(threadDumpJvm(fileName)); } private static native int threadDumpJvm(final String fileName); static { System.loadLibrary("libthreaddump"); } } This class loads a native library called libthreaddump when it is loaded and then exposes a static method to request a thread dump from Java code specifying the name of the file that should contain the captured dump. Running this file through the javah tool generates a C header named com_utils_threaddump_ThreadDumpUtil.h which is used to help build our native routine. The C code for the Unix variant follows: #include <signal.h> #include <unistd.h> #include <fcntl.h> #include <sys/stat.h> #include <string.h> #include <errno.h> #include "com) { /* get my process id */ pid_t pid = getpid(); /* open the file where we want the thread dump written */ char* fName = (char*) (*env)->GetStringUTFChars(env, fileName, NULL); if (NULL == fName) { printf("threadDumpJvm: Out of memory converting filename"); return((jint) -1L); } int fd = open(fName, O_WRONLY | O_CREAT, S_IRUSR | S_IWUSR | S_IRGRP | S_IROTH); if (-1 == fd) { printf("threadDumpJvm: Open of file %s failed: %d[%s]\n", fName, errno, strerror(errno)); (*env)->ReleaseStringUTFChars(env, fileName, fName); return((jint) -2L); } /* redirect stdout and stderr to our thread dump file */ int fdOut = dup(FILE_STDOUT); int fdErr = dup(FILE_STDERR); dup2(fd, FILE_STDOUT); dup2(fd, FILE_STDERR); close(fd); (*env)->ReleaseStringUTFChars(env, fileName, fName); /* send signal requesting JVM to perform a thread dump */ kill(pid, SIGQUIT); /* this is kind of hokey but we have to wait for the dump to complete - 10 secs should be ok */ sleep(10); /* replace the original stdout and stderr file handles */ dup2(fdOut, FILE_STDOUT); dup2(fdErr, FILE_STDERR); return((jint) 0L); } Following are the compile command lines I’ve used on a couple of Unix systems to build this dynamic library: Mac OSX: gcc -o liblibthreaddump.dylib -dynamiclib -I. -I$JAVA_HOME/include -L/usr/lib -lc libthreaddump_unix.c Solaris: gcc -o liblibthreaddump.so -G -I/$JAVA_HOME/include -I/$JAVA_HOME/include/solaris libthreaddump_unix.c -lc Here is the C code for the Windows version of the native library: #define WIN32_LEAN_AND_MEAN #include <windows.h> #include "com_nm) { auto HANDLE fd; auto HANDLE fdOut; auto HANDLE fdErr; auto long retValue = 0L; auto char* errText = ""; auto DWORD pid = GetCurrentProcessId(); /* open the file where we want the thread dump written */ char* fName = (char*) (*env)->GetStringUTFChars(env, fileName, NULL); if (NULL == fName) { printf("threadDumpJvm: Out of memory converting filename"); return((jint) -1L); } fd = CreateFile((LPCTSTR) fName, GENERIC_WRITE, FILE_SHARE_WRITE, NULL, CREATE_ALWAYS, FILE_ATTRIBUTE_NORMAL, NULL); if (INVALID_HANDLE_VALUE == fd) { printf("threadDumpJvm: Open of file %s failed: %ld\n", fName, (long) GetLastError()); (*env)->ReleaseStringUTFChars(env, fileName, fName); return((jint) -2L); } /* redirect stdout and stderr to our thread dump file */ fdOut = GetStdHandle(STD_OUTPUT_HANDLE); fdErr = GetStdHandle(STD_ERROR_HANDLE); printf("fdOut=%ld fdErr=%ld\n", (long) GetStdHandle(STD_OUTPUT_HANDLE), (long) GetStdHandle(STD_ERROR_HANDLE)); if (!SetStdHandle(STD_OUTPUT_HANDLE, fd)) printf("SetStdHandle failed: %ld\n", (long) GetLastError()); SetStdHandle(STD_ERROR_HANDLE, fd); printf("fdOut=%ld fdErr=%ld\n", (long) GetStdHandle(STD_OUTPUT_HANDLE), (long) GetStdHandle(STD_ERROR_HANDLE)); if (0 == GenerateConsoleCtrlEvent(CTRL_BREAK_EVENT, 0)) // pid fails here???? { retValue = (long) GetLastError(); errText = "Generate CTRL-BREAK event failed"; } else { /* this is kind of hokey but we have to wait for the dump to complete - 10 secs should be ok */ Sleep(10000L); } printf("This is a test message\n"); /* replace the original stdout and stderr file handles */ SetStdHandle(STD_OUTPUT_HANDLE, fdOut); SetStdHandle(STD_ERROR_HANDLE, fdErr); CloseHandle(fd); (*env)->ReleaseStringUTFChars(env, fileName, fName); if (0L != retValue) { printf("threadDumpJvm: Error generating thread dump: %s\n", errText); } return((jint) retValue); } Remember – the file capture will not work here, it simply creates an empty file and the thread dump goes to the original STDOUT device. Here is the command I used to create a Windows DLL using Microsoft Visual C++ 6.0: cl -I. -I%JAVA_HOME%\include -I%JAVA_HOME%/include/win32 -LD libthreaddump_win32.c -Felibthreaddump.dll That’s it. All the tools needed to request a thread dump any time you like. I used these tools to diagnose problems with an ATG application cluster to research problems being reported by the ATG ServerMonitor component. The ATG ServerMonitor issues warning and error log message for various reasons like the JVM being low on memory or a application request thread executing for an extended period of time. In a future post I’ll discuss how I extended the ATG ServerMonitor to capture thread dumps under these conditions. I just checked out the Amazon site for the best selling Kindle edition computer books. Eight of the top ten are “for Dummies” books. WTF?. Ok, as Rod Serling used to say, “Submitted for your consideration”. So, consider the shocker site (visual instructions below): And then check out our fearless leader Gettin’ Down! Nuff said. Of late I’ve been listening to some Colbie Caillat and I have to tell you, this girl has talent. Oh sure, she’s cute and all but close your eyes and listen to her sing. Not bad, eh? Colbie just released her first studio album, Coco, which in some parts of Texas means horse. That’s probably not why she picked that title but, it might be. Give a listen and see what you think.
https://bwithers.wordpress.com/2007/12/
CC-MAIN-2017-43
refinedweb
1,614
62.98
If you are using the Infragistics xamDockManager control and using MVVM to compose your views, then you have probably asked yourself the question, “How do I data bind a collection of objects, which represent my Views, from my ViewModel to various areas of the xamDockManager?”. You are asking yourself this question because the xamDockManager doesn’t support this data binding out of the box. The good news is that this is relatively easy to accomplish. You just have to write a little code. As with most solutions, there are always more than one way to solve a problem. There are actually solutions to this specific problem that have already been posted. For example, this post titled “ItemsSource for xamDockManager”, provides an alternative to the solution I am going to show you. So why am I showing you another approach? Well, I prefer something a little more simple and straight forward. You can choose which solution you prefer. In this post, I will be adding the required MVVM support to the WPF version of the xamDockManager, and we will be using a Behavior to do it. My Behavior is going to target a TabGroupPane as my container of choice. What I mean by “container of choice”, is that I am going to have all my Views data bound and hosted inside of a TabGroupPane. You may choose something different, such as adding support to the xamDockManager directly, or maybe use the DocumentContentHost. What ever floats your boat! Now, this behavior should support a couple of different scenarios. That about wraps it up. Wow, that’s a lot of stuff, but nothing a simple Behavior can’t solve for us. I would like to note that I am not concerned with removing objects from one collection and adding them to another collection in my ViewModel when dragging and dropping tabs around the xamDockManager. If you want that, you will have to add that functionality yourself. I am just going to provide you with the behavior, then talk about it. As you can see we have only a couple of properties. The HeaderMemberPath is used to specify the property path of the underlying object to use as the text for the tab header. The HeaderTemplate property is used to control the structure of the tab header. For example, maybe we want to add images or change other aspects of the tab header. Next, we have the ever popular ItemsSource property. You will use this property to data bind your collection of objects from your ViewModel to the TabGroupPane of the xamDockManager. Notice how I am using IList as the property type. This allows us to add and remove items from the collection. Lastly, we have the ItemTemplate property. This property will allow us to provide a DataTemplate to define the structure of each of the items in the collection. Now keep in mind, the ItemTemplate will only really work if all the objects in your collection are the same. If you have a collection of different object types, then you will not be using the ItemTemplate property. Feel free to add more properties and modify this behavior to fit your specific needs. So how do you use this behavior? Well let’s start with a ViewModel. This is a very straight forward ViewModel. It has a single collection and two commands. One command will insert a new Person object into my collection and the other command will remove an instance of a Person object from the collection. Speaking of the Person object, let’s take a look at it. Notice how I am not implementing INotifyPropertyChanged. This is only because this is meant to be sample code and not meant to replicate a production system. When you do this, make you’re your ViewModels and POCOs implement INotifyPropertyChanged. Next, let’s define our behavior in our View. Since we are using a Behavior, you need to add a reference to System.Windows.Interactivity to your WPF application and add an namespace in your XAML. Let’s run the application and see what we have so far. That’s cool and all, but our objects don’t really looks like views right now, and what’s up with the tab header? Let’s start using some of those properties we created to make this look a litle better. Let’s start with the header. I want to bind the tab header to the FullName property of our Person object. I also want to make some changes to the formatting of the tab header so I am going to create a new DataTemplate for it. I am also going to define an DataTemplate to use as the ItemTemplate for my Person object. Let’s see what we have with our changes. Now that’s better! You don’t have to use the ItemTemplate property. If you have a collection of objects, for example ObservableCollection<object>, you can provide an implicit DataTemplate to the various types you are adding to the collection. Maybe you have a DataTemplate for Person, and a different one for Car, and a different one for Pet. You can create a different DataTemplate for each of these types and they will be rendered properly for each corresponding tab. That’s wraps it up for this post. Keep in mind that this post is mainly to help guide you in implementing MVVM with the xamDockManager control and you will probably have to modify this code to meet your specific needs. Go ahead and download the source code for this post. Feel free contact me on my blog, connect with me on Twitter (@brianlagunas), or leave a comment below for any questions or comments you may have. Hi Brian, thank you for a such elegant solution! But, it seems like I've found potential bug in XamDockManager. To reproduce it just remove x:Name attribute from the TabGroupPane tag. Now, do the following: 1. Close default tab by using x. 2. Try to click on Insert menu item. What I see at the moment is that after last tab is removed it's not possible to add tabs anymore. When x:Name attribute is returned back everything is OK. I'm testing it on 13.1 (13.1.20131.1009) version. That's not a bug, it is by design. By giving the TabGroupPane an x:Name, it prevents it from being destroyed by the xamDockManager. If no x:Name is provided, the TabGroupPane will be removed from the xamDockManager and collected to reduce the memory footprint. Good to know. Is it written somewhere in documentation or is it standard WPF behavior? Because such things are not obvious. Well, it is not easily found as it is not talked about specifically. It is discussed a little in the Saving/Loading customizations, but even then you wouldn't know that it behaved that way. I agree, it is definitely not obvious. Hi Brian How do I set the newly created Tab as Active ? I would also need a way to indicate which tab is currently active. I have seen your post which explains use of IActiveAware but I am not using prism regions and using above approach to add new tabs. It's been awhile since I have written that code, but I would think that a simple call to contentPane.Activate(); right after AssociatedObject.Items.Add(contentPane); would do the trick. In order to track which tab is Active as you change tabs, you could simply create another behavior which tracks the active content pane, and then use your own interface to access that information in your ViewModels. Shouldn't be difficult. Awesome post! Very useful. Thanks! If I change your XamDockManager.Panes to DocumentContentHost, the HeaderTemplate is not used. What I want to do is put the close tab onto the tab instead of all the way to the right. How can I have the header template applied on those tab types? The way you have it currently set up I am able to put whatever I want in that HeaderTemplate and it works. The issue is the Tabbed Document type that is created when using a DocumentContentHost. Thanks! Try modifying the behavior to support this: Thank you for the response! I ended up just overriding the whole template because I need to support rename header in those tabs too, but I will take a look at that post and see if I can improve on what I have. In your behavior class in the method OnItemsSourcePropertyChanged, I get a null pointer exception when calling AssociatedObject.Items.Clear(). The ItemCollection HAS items before the call, but it still encounters the null pointer exception. Do you have any ideas on this? I can share my project with you if it would help. Please Login or Register to add a comment.
http://www.infragistics.com/community/blogs/blagunas/archive/2013/09/24/xamdockmanager-data-binding-contentpanes-with-mvvm.aspx
CC-MAIN-2015-11
refinedweb
1,476
65.73
#include <outputtype.h> #include <script/script.h> #include <script/sign.h> #include <script/signingprovider.h> #include <optional> #include <vector> Go to the source code of this file. Definition at line 16 of file descriptor.h. Get the checksum for a descriptor. Definition at line 1182 of file descriptor.cpp. Find a descriptor for the specified script, using information from provider where possible. A non-ranged descriptor which only generates the specified script will be returned in all circumstances. For public keys with key origin information, this information will be preserved in the returned descriptor. scriptis present in provider, a descriptor will be returned which is IsSolvable() and encapsulates said information. scriptcorresponds to a known address type, an "addr()" descriptor will be returned (which is not IsSolvable()). Definition at line 1191 of file descriptor.cpp. Parse a descriptor string. Included private keys are put in out. If the descriptor has a checksum, it must be valid. If require_checksum is set, the checksum is mandatory - otherwise it is optional. If a parse error occurs, or the checksum is missing/invalid, or anything else is wrong, nullptr is returned. Definition at line 1173 of file descriptor.cpp.
https://doxygen.bitcoincore.org/descriptor_8h.html
CC-MAIN-2021-17
refinedweb
194
53.37